I was wondering whether anyone has any idea what is happening here. I'm using simple code to compare two tab-delimited files based on column 1 values. If the column 1 value of file1 exists in file2, then I need to print the column 4 value of file2 into column 3 of file1. Here is my code:
First, I have to produce file1 by concatenating columns 4 & 5 of the input file:
INPUT FILE:
Using this simple code, I reorder the columns in the INPUT file and concatenate columns 4 & 5:
This produces the following file1 which looks fine:
This is file2:
Using the code:
Yields the following output file, RESULTS, which so far seems to look fine:
For final processing we need to print all rows in which column 3 equals column 4. So I used this simple code:
Thus RESULTS2 should look like this:
Instead what I get is this:
Any ideas as to what is causing the column4 value to print to the following row?
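The lookup step described above is usually written as a two-pass awk join. This is only a sketch under assumed field positions and file names (tab-delimited, key in column 1 of both files, value in column 4 of file2):

```shell
# Sketch of the join step (file names and field numbers assumed from the post):
# pass 1 reads file2 and maps column 1 -> column 4; pass 2 reads file1 and,
# when the key is known, overwrites column 3 with the mapped value.
awk -F'\t' -v OFS='\t' '
    NR == FNR { val[$1] = $4; next }   # first file on the command line: file2
    $1 in val { $3 = val[$1] }         # second file: file1; replace column 3
    { print }
' file2 file1 > RESULTS
```

Setting OFS explicitly matters here: assigning to $3 makes awk rebuild the record with OFS, and the default OFS is a space, which would silently turn the tab-delimited rows into space-delimited ones.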
Last edited by Geneanalyst; 10-29-2018 at 05:01 PM..
Reason: forgot a step
Hi guys,
I have created a script to read one column from a CSV file and then write it to a text file.
However, when I checked the text file, it was not in column format...
Example:
CSV file contains
name,age
aa,11
bb,22
cc,33
After using awk to get the first column,
TXT file... (1 Reply)
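A common cause of "not in column format" with CSV files is invisible carriage returns from a Windows-produced file. A minimal sketch (file names assumed) that prints the first field and strips any trailing \r:

```shell
# Print the first comma-separated field of each line; the sub() removes a
# trailing carriage return, which Windows-made CSVs often carry and which
# can mangle the output when the file is viewed or reused.
awk -F',' '{ sub(/\r$/, ""); print $1 }' file.csv > file.txt
```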
I am using the following command:
nawk -F"," 'NR==FNR {a=$1;next} a {print a,$1,$2,$3}' file1 file2
I am getting 40 records of output.
But when I import file1 and file2 into MS Access, I get 140 records.
And I know 140 is the correct count.
Appreciate your help on correcting the above script (5 Replies)
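A count mismatch like this typically shows up when the key column has duplicates: the awk one-liner keeps only one stored value per key, while a database join (what Access does) emits one output row per matching pair. A hedged sketch (field layout assumed; `nawk` on Solaris, plain `awk` elsewhere) that keeps every file1 row per key:

```shell
# Store all file1 lines for each key, then emit one output line per
# (file1 row, file2 row) pair -- the same semantics as a database join.
awk -F',' '
    NR == FNR { rows[$1] = rows[$1] $0 "\n"; next }   # every file1 line per key
    $1 in rows {
        n = split(rows[$1], r, "\n")
        for (i = 1; i < n; i++) print r[i], $1, $2, $3
    }
' file1 file2
```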
I have a number of unix text files containing fixed-length records (normal unix linefeed terminator) where I need to find odd records which are an incorrect length.
The data is not validated and records can contain odd backslash characters and control characters which makes them awkward to process... (2 Replies)
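Since the records are newline-terminated, stray backslashes and control characters don't get in the way of a simple length check. A minimal sketch (the expected record width is an assumption, passed in as a variable):

```shell
# Report the line number and actual length of every record whose length
# differs from the expected width (80 is an assumed example).
awk -v want=80 'length($0) != want { print FNR ": " length($0) }' datafile
```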
Hi Experts,
I am adding a column of numbers with awk; however, I am not getting the correct output:
# awk '{sum+=$1} END {print sum}' datafile
2.15291e+06
How can I get the output like: 2152910
Thank you..
# awk '{sum+=$1} END {print sum}' datafile
2.15079e+06 (3 Replies)
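The sum itself is fine; awk's default output format (OFMT, `%.6g`) switches large numbers to scientific notation when they are printed. Forcing a fixed-point format in printf shows all the digits:

```shell
# Same sum, but printed with an explicit fixed-point format instead of
# awk's default %.6g, which is what produces 2.15291e+06.
awk '{ sum += $1 } END { printf "%.0f\n", sum }' datafile
```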
Hello friends,
I searched the forums for similar threads, but what I want is a single awk script to do all of the following.
I have a big log file that goes like this:
...
7450494 1724465 -47 003A98B710C0
7450492 1724461 -69 003A98B710C0
7450488 1724459 001DA1915B70 trafo_14:3
7450482... (5 Replies)
I want to extract a web page to a temporary file as a source document. I tried: wget $webPgURL > /tmp/tmpfil
but it says I have a missing URL. I have echoed $webPgURL just prior to the wget command and it is correct. If I use: firefox $webPgURL it brings up firefox with the correct page. Can... (3 Replies)
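Two things are worth checking here. First, wget saves the page to a file on its own; it does not write the page to stdout, so the `> /tmp/tmpfil` redirection captures nothing, and `-O` is the way to pick the destination. Second, quoting the variable protects against word-splitting if the URL contains `&` or spaces. A sketch (the URL is an assumed example):

```shell
# -O names the output file directly; "-O -" streams to stdout instead.
# -q suppresses wget's progress output.
webPgURL="http://example.com/index.html"   # assumed example URL
wget -q -O /tmp/tmpfil "$webPgURL"
# equivalently, via stdout:
wget -q -O - "$webPgURL" > /tmp/tmpfil
```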
cat T|awk -v format=$format '{ SUM += $1} END { printf format,SUM}'
the file T has below data
usghrt45tf:hrguat:/home/hrguat $ cat T
-1363000.00123456789
-95000.00789456123
-986000.0045612378
-594000.0015978
-368939.54159753258415
-310259.0578945612
-133197.37123456789... (4 Replies)
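One caveat with input like this: awk sums in double precision, which carries only about 15-16 significant digits, so the long fractional tails above cannot all survive the addition no matter what format is used. What the format string can control is the rounding of what remains. A sketch with an explicit fixed-point format (two decimals is an assumption; the post's $format variable could hold any printf format):

```shell
# Sum column 1 and print the total with a fixed number of decimals,
# instead of relying on a format variable passed in from the shell.
awk '{ sum += $1 } END { printf "%.2f\n", sum }' T
```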
Running Solaris 9; on issuing the following command
df -h | awk '$5 > 45 {print}'
Filesystems with utilisation > 45% are being displayed, as well as those between
5% and 9%!!! (3 Replies)
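This is the classic string-versus-number trap: $5 from df ends in "%", so `$5 > 45` compares strings, and "5%" sorts after "45" lexically, which is exactly why the 5-9% filesystems slip through. Adding 0 forces awk to convert the field numerically (the leading digits are used, the "%" is ignored):

```shell
# NR > 1 skips df's header line; $5 + 0 turns "50%" into the number 50
# so the comparison is arithmetic rather than lexical.
df -h | awk 'NR > 1 && $5 + 0 > 45 { print }'
```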
Hi All,
I am looking to filter out filesystems which are greater than a specific value.
I use the command
df -h | awk '$4 >=70.00 {print $4,$5}'
But this results in the output below, which also includes lower values.
9% /u01
86% /home
8% /u01/data
82% /install
70% /u01/app
Looks... (3 Replies)
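Same root cause as the previous thread: $4 here is a string like "86%", so `$4 >= 70.00` is a string comparison and "9%" compares greater than "70.00". Stripping the "%" (or adding 0) makes the test numeric. A sketch keeping the post's field positions ($4 = use%, $5 = mount point):

```shell
# Remove the "%" from $4 on data lines, then compare numerically;
# NR > 1 skips df's header.
df -h | awk 'NR > 1 { sub(/%/, "", $4) } $4 + 0 >= 70 { print $4 "%", $5 }'
```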