If you're willing to accept output with the values in each column presented in the same order as they were first encountered in the input, you could try something like:
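A minimal sketch of such a script (the array names here are illustrative; the full, commented version appears later in the thread), run against the sample data:

```shell
# sample input as posted in the thread
printf '%s\n' 'joe allen ibm' 'joe smith ibm' 'joe allen google' \
              'joe smith google' 'rachel allen google' > file

awk '
{
    for (i = 1; i <= NF; i++)
        if (!((i, $i) in seen)) {       # first time this value appears in column i
            seen[i, $i] = 1
            val[i, ++cnt[i]] = $i       # remember it, in input order
            if (cnt[i] > rows) rows = cnt[i]
            if (i > cols) cols = i
        }
}
END {
    for (r = 1; r <= rows; r++)         # one output line per unique value
        for (c = 1; c <= cols; c++)
            printf "%s%s", val[c, r], (c == cols) ? ORS : OFS
}
' file
```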
With the sample data you provided, that produces the output:
Code:
joe allen ibm
rachel smith google
If you need the last column to be in a different order, you need to more clearly explain your requirements. Hopefully, you can use the above code as a base to get something that will do what you're trying to do.
If you want to try this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.
Jumping in as Don Cragun seems to be logged out at the moment:
Code:
awk '
function AddFieldData(field) {
    if ((field, $field) in data) return  # field No. and field contents registered before: quit
    # data is an array whose elements contents do not count, only its indices.
    # The next line is quite complex (array elements are defined on first reference):
    # it registers field No. and field contents as an index into the data array and,
    # at the same time, creates a FieldValue array element holding the field contents,
    # indexed by field No. and an incremented counter for that field No.
    data[field, FieldValue[field, ++count[field]] = $field]
    if (count[field] > maxr) maxr = count[field]  # keep max count across ALL fields, i.e. the
                                                  # No. of lines to be printed (e.g. for three
                                                  # unique values in field 2, print three lines)
    if (field > maxc) maxc = field                # keep max of NF across all lines
}
{ for (i = 1; i <= NF; i++) AddFieldData(i) }     # main: call the function for ALL fields in
                                                  # ALL lines, filling the necessary arrays
END {
    for (i = 1; i <= maxr; i++)                   # print maxr lines with
        for (j = 1; j <= maxc; j++)               # maxc fields each: a sort of random
                                                  # distribution of unique field values; after
                                                  # the last (maxc) field print a line feed
                                                  # (ORS), else the field separator (OFS, space)
            printf("%s%s", FieldValue[j, i], (j == maxc) ? ORS : OFS)
}
' file
Hi RudiC,
Thank you for filling in while I was sleeping. I do have a couple of comments that may help explain what I was thinking when I wrote this code...
First, I wouldn't say this is complex code, but it certainly is dense. I just started by figuring out what I wanted to print after I had accumulated all of the data. Although the sample data provided happened to have the same number of unique values in each input column, there doesn't seem to be any reason to assume that will be true with real input data. So, I need:
a count of the number of unique values that have been found in each column (which is handled by the array count[column], with the ++ in ++count[column] incrementing the number of unique values seen in this column),
the unique values to be displayed in each output row and column (which is handled by the array FieldValue[column, output_row] = field_value),
and a quick way to determine whether or not we have seen a given value in a given field before (which is handled by the array data[column, field_value], and as RudiC said we don't need any value to be assigned to elements of this array; we just need to know whether a given column and field_value pair have been entered into this array).
And, second, I wouldn't say that the output is in random order. The 1st row of the output will contain the 1st unique value found in each input column. The 2nd row of the output will contain the 2nd unique value found in each input column. Etc.
Note that if an output row does not have a value for a given column, that field will be an empty string (printed as just a field separator). This works because referencing an array element that has not been assigned a value will return an empty string as its value.
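A quick illustration of that empty-string behavior (the index names here are arbitrary):

```shell
# referencing an unassigned awk array element returns the empty string
# (and, as a side effect, creates the element)
awk 'BEGIN { printf "[%s][%s]\n", x["missing"], x[1, 2] }'   # prints [][]
```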
Note also that since I used the awk variable OFS (instead of an explicit <space> as a field separator), you can change the character that appears in the output to separate fields by setting a different value for OFS before naming the file to be processed on the last line of the script.
Note also that there is no requirement that every input row contain the same number of fields, but the output will have the same number of fields in every row.
For example, if we had a file named numbers containing:
Code:
11 12 13 14
21
31 32 33 34 35
41 42 43
and we wanted the output field separator to be a <comma> instead of a <space>, we could change the last line of the script from:
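Given the note about OFS above, the change is presumably from `' file` to `' OFS=, numbers` on the last line. A self-contained run with that change (reconstructed here, not verbatim from the original post):

```shell
# the numbers file from the example above
printf '%s\n' '11 12 13 14' '21' '31 32 33 34 35' '41 42 43' > numbers

awk '
function AddFieldData(field) {
    if ((field, $field) in data) return
    data[field, FieldValue[field, ++count[field]] = $field]
    if (count[field] > maxr) maxr = count[field]
    if (field > maxc) maxc = field
}
{ for (i = 1; i <= NF; i++) AddFieldData(i) }
END {
    for (i = 1; i <= maxr; i++)
        for (j = 1; j <= maxc; j++)
            printf("%s%s", FieldValue[j, i], (j == maxc) ? ORS : OFS)
}
' OFS=, numbers
# prints:
#   11,12,13,14,35
#   21,32,33,34,
#   31,42,43,,
#   41,,,,
```

Note the trailing commas: columns with fewer unique values yield empty fields, as described above.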
Hi.
I like modular code as well as generalized code, so as an experiment I wrote a Perl script that extracts the fields simultaneously to (effectively) separate files. Here is a simple example, and then the solution with the user data:
Code:
#!/usr/bin/env bash
# @(#) s3 Demonstrate display unique values from fields.
# Utility functions: print-as-echo, print-line-with-visual-space, debug.
# export PATH="/usr/local/bin:/usr/bin:/bin"
LC_ALL=C ; LANG=C ; export LC_ALL LANG
pe() { for _i;do printf "%s" "$_i";done; printf "\n"; }
pl() { pe;pe "-----" ;pe "$*"; }
em() { pe "$*" >&2 ; }
db() { ( printf " db, ";for _i;do printf "%s" "$_i";done;printf "\n" ) >&2 ; }
db() { : ; }
C=$HOME/bin/context && [ -f $C ] && $C cutofi
pl " Sample easy dataset data2:"
cat data2
pl " Results, field 2 extracted to file f2:"
cutofi data2
cat f2
pl " Results, field 2 piped into \"sort|uniq\" to file f2:"
cutofi -c='sort|uniq' data2
cat f2
pe
pe " Results, all files f? pasted together:"
paste f?
# Solve the user problem.
FILE=${1-data1}
E=expected-output.txt
pl " Input data file $FILE:"
cat $FILE
pl " Expected output:"
cat $E
pl " Results:"
cutofi -c='sort -u' -p=xx $FILE
paste xx? |
tee f1
pl " Verify results if possible:"
C=$HOME/bin/pass-fail
[ -f $C ] && $C || ( pe; pe " Results cannot be verified." ) >&2
exit 0
producing:
Code:
$ ./s3
Environment: LC_ALL = C, LANG = C
(Versions displayed with local utility "version")
OS, ker|rel, machine: Linux, 3.16.0-4-amd64, x86_64
Distribution : Debian 8.6 (jessie)
bash GNU bash 4.3.30
cutofi (local) 1.4
-----
Sample easy dataset data2:
1 1 1
1 1 1
1 1 1
2 3 4
-----
Results, field 2 extracted to file f2:
1
1
1
3
-----
Results, field 2 piped into "sort|uniq" to file f2:
1
3
Results, all files f? pasted together:
1 1 1
2 3 4
-----
Input data file data1:
joe allen ibm
joe smith ibm
joe allen google
joe smith google
rachel allen google
-----
Expected output:
joe allen google
rachel smith ibm
-----
Results:
joe allen google
rachel smith ibm
-----
Verify results if possible:
-----
Comparison of 2 created lines with 2 lines of desired results:
f1 expected-output.txt differ: char 4, line 1
Failed -- files f1 and expected-output.txt not identical -- detailed comparison follows.
Succeeded by ignoring whitespace differences.
There may be a small amount of parallelism because the separated fields are really written to pipes running the filter supplied by the --command or -c option, each in its own child process.
The resulting files may then be combined to produce output similar to that desired by the OP.
Of course, this can also be done with separate awk processes.
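For readers without cutofi, the split / filter / paste idea can be sketched with standard utilities alone (the data1 and xx* names below mirror the run above; this assumes at most 9 columns for the glob):

```shell
# recreate the user data file from the thread
printf '%s\n' 'joe allen ibm' 'joe smith ibm' 'joe allen google' \
              'joe smith google' 'rachel allen google' > data1

n=$(awk '{ if (NF > n) n = NF } END { print n }' data1)   # widest line
i=1
while [ "$i" -le "$n" ]; do
    awk -v c="$i" '{ print $c }' data1 | sort -u > "xx$i"  # one filtered file per column
    i=$((i + 1))
done
paste xx[1-9]        # tab-separated, hence the whitespace-only difference noted above
```

With this data the result matches the expected output except that paste uses tabs, which is exactly why the pass-fail comparison above succeeds only when whitespace differences are ignored.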
The documentation for the experimental code:
Code:
$ ./cutofi -h
SYNOPSIS
cutofi - cut (split) fields from text file to separate filters, files.
PURPOSE
Separate fields in text files by streaming them to a pipeline. This
allows some parallel processing, albeit at the slight risk of having
too many processes active.
This allows operations on the field as a separate dataset so
that, for example, each field could be sorted without regard to
the other fields.
Maximum number of processes for a user in a specific distribution:
OS, ker|rel, machine: Linux, 3.16.0-4-amd64, x86_64
Distribution : Debian 8.6 (jessie)
$ ulimit -u
78528
See also the Missing Text Routine:
colwise Cut a column, run command, paste results.
http://www1.cuni.cz/~obo/textutils/ (2016.08)
USAGE
cutofi [options] -- [files]
options (read "|" as "or"):
-h | --help
print this message and quit.
-d | --debug
turn on debugging output. Default is off.
-p | --prefix=str
set the output file base string as "str". Default is "f".
-f | --fields=list
set specific fields to be split in comma-separated list: f1,f2, ... fn.
Default is all fields.
-s | --separator=string
set the separator to "string". Default is \s+, whitespace.
-c | --command=string
set the output filter as "string", e.g. "sort|uniq". Default
is (essentially) "cat > prefix<i>" where i is the field number.
Best wishes ... cheers, drl