My goal is to see output of the number of columns, similar to the way wc -lc reports lines and characters.
I have understood that. I suggest you first try the command I wrote above, then modify the input (the string after "echo") and see how that changes the output. Especially the line:
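The command itself did not survive the quoting here; a minimal sketch of the kind of awk invocation under discussion (the comma-separated sample string is an assumption):

```shell
# Hypothetical sample: count the fields of a comma-separated string.
# FS is set in a BEGIN block so it already applies to the first line.
echo 'one,two,three' | awk 'BEGIN { FS="," } { print NF }'   # prints 3
```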
I'm not sure the FS is getting set in the right place. I get the following:-
The only difference is that I've removed the spaces. The FS="," seemingly has no effect:-
Not being too experienced in awk, I did get more success with these:-
Have I missed something important in your solution? I'm using RHEL 6
Kind regards,
Robin
Last edited by rbatte1; 06-18-2014 at 01:04 PM..
Reason: Added OS version
Quote:
Originally Posted by rbatte1
I'm not sure the FS is getting set in the right place. I get the following:-
If this is your result, your awk is definitely working differently from mine. I used AIX (7.1.3 SP3) awk.
Quote:
Originally Posted by rbatte1
The only difference is that I've removed the spaces.
This is in line with my documentation. By default the field separator (internal variable "FS") is set to a blank (which awk translates to any whitespace in the current locale). You got 11 fields because it treated each word and each comma as a separate field. You got 1 on the second try because, with blanks used as field separators, there is only one field left once you remove all the blanks. So it seems your awk somehow ignored the FS declaration and fell back to its defaults.
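The default-FS behaviour described above is easy to reproduce; the sample strings below are assumptions, chosen to mirror the "with spaces" and "without spaces" runs:

```shell
# Default FS: any run of whitespace separates fields, so commas
# surrounded by spaces become fields of their own.
echo 'one , two , three' | awk '{ print NF }'   # prints 5
# Remove the blanks and the whole string is a single field.
echo 'one,two,three' | awk '{ print NF }'       # prints 1
```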
Quote:
Originally Posted by rbatte1
I did get more success with these:-
According to my documentation the "-F" parameter sets the value of the internal FS variable the same way I did, and I suppose that means the two ways ought to produce the same result.
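The claimed equivalence can be checked directly (the sample input is an assumption):

```shell
# Per POSIX these two should behave identically:
echo 'a,b,c' | awk -F',' '{ print NF }'              # prints 3
echo 'a,b,c' | awk 'BEGIN { FS="," } { print NF }'   # prints 3
```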
Addendum: I just checked on a Ubuntu system and the awk there works the same way you described. The version is:
Its documentation says that the variable FS can only be set to an ERE, so I tried
which finally did the trick. It seems mawk is pickier about where FS is set and only allows EREs, whereas AIX's awk allows it to be set anywhere and accepts single characters, blanks, or EREs.
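One portable explanation for the behaviour described (an assumption, since the exact original command is not shown): if FS is assigned inside the main action rather than in a BEGIN block, the current line has already been split with the old separator, so the change only takes effect from the next line on.

```shell
# FS assigned in the action: line 1 is split with the default FS first.
printf 'a,b\nc,d\n' | awk '{ FS=","; print NF }'
# prints 1, then 2
# FS assigned in BEGIN applies to every line, including the first.
printf 'a,b\nc,d\n' | awk 'BEGIN { FS="," } { print NF }'
# prints 2, then 2
```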
Hello all,
I would like to ask your help here:
I've a huge file that has 2 columns. A part of it is:
sorted.txt:
kss23 rml.67lkj
kss23 zhh.6gf
kss23 nhd.09.fdd
kss23 hp.767.88.89
fl67 nmdsfs.56.df.67
fl67 kk.fgf.98.56.n
fl67 bgdgdfg.hjj.879.d
fl66 kl..hfh.76.ghg
fl66... (5 Replies)
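The post is truncated, so the exact goal is not stated; one common task with key/value data like this is counting the rows per first-column key, which a short awk sketch handles (the filename and the goal itself are assumptions):

```shell
# Count how many lines share each first-column key.
awk '{ count[$1]++ } END { for (k in count) print k, count[k] }' sorted.txt
# e.g. kss23 4, fl67 3, fl66 2 (output order of "for (k in count)" is unspecified)
```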
Hello
How can I get an occurrence count for this file?
ERR315389.1000156 CTTGAAGAAGAATTGAAAACTGTGACGAACAACTTGAAGTCACTGGAGGCTCAGGCTGAGAAGTACTCGCAGAAGGAAGACAGATATGAGGAAGAG
ERR315389.1000281 ... (3 Replies)
Hi all,
I have file like this:
FID IID MISS_PHENO N_MISS N_GENO F_MISS
AU4103 AU4103201 Y 15473 66858 0.2314
AU4142 AU4142303 Y 15464 66858 0.2313
AU4128 AU4128304 Y 15458 66858 0.2312
AU4129 AU4129202 Y 15451 66858 0.2311
AU3934 AU3934201 Y 15441 66858 0.231
AU3934 AU3934304 Y 15448 66858... (2 Replies)
Hi, I have data like
a
b
a
a
b
c
d
...
I have to output each distinct value that appears in the column together with its count:
a-3,b-2,c-1,d-1
Is there a one-line awk or sed command to accomplish this?
Thanks,
-srinivas yelamanchili (7 Replies)
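A one-line sketch of the requested output (sorting first makes the order deterministic; the exact "value-count" formatting is an assumption based on the sample above):

```shell
# Count each distinct value, then join the results as value-count pairs.
sort file | uniq -c | awk '{ printf "%s%s-%s", (NR>1 ? "," : ""), $2, $1 } END { print "" }'
# for the sample column above this prints: a-3,b-2,c-1,d-1
```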
I am writing a bash script and need to find out how many columns are in each row. I tried using awk '{print NF}', which gives me the number of columns for every row of the file at once, which is not what I want. I am trying to read each line individually from a file and want to know... (6 Replies)
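A sketch of reading line by line in bash and getting the column count for each line individually (the filename and variable names are assumptions):

```shell
#!/bin/bash
# Read each line individually and report its column count.
while IFS= read -r line; do
  ncols=$(printf '%s\n' "$line" | awk '{ print NF }')
  echo "this line has $ncols columns"
done < inputfile
```

Alternatively, pure bash can do the counting with word splitting (`set -- $line; echo $#`), though that is subject to glob expansion and IFS caveats.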
Hi,
I have a file with the contents as below,
10:23:10 GOOD 10.30.50.60
10:23:11 GOOD 10.30.50.62
10:23:12 Hello 10.30.50.60
10:23:12 BAD 10.30.50.60
10:23:13 GOOD 10.30.50.66
10:23:14 BAD 10.30.50.62... (3 Replies)