08-20-2010
Thanks!
I just can't believe how much you are helping me solve a major issue!!!
I have another couple of questions.
I needed to rearrange some of the columns in the data file.
I was able to do this using awk (I am proud of myself!).
But there are a couple of places where I am stumped.
So, for example,
if I have two files
fileA
12 test
and fileB
44 junk
and I want a line in my output that is
12 44
how do I go about that with awk?
I know that I want to do something like this:
awk '{print $1}' fileA > outputA
awk '{print $1}' fileB > outputB
but then how do I get outputA and outputB onto the same line?
You have been amazingly helpful thus far... I hope that you don't mind another couple of questions!
mikey
Last edited by mikey11415; 08-20-2010 at 11:33 PM..
Reason: I figured out the second part
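For anyone landing here later: `paste` is the standard tool for gluing corresponding lines of two files together. A minimal sketch using the sample files from the post (fileA contains "12 test", fileB contains "44 junk"):

```shell
# Recreate the two sample files from the post
printf '12 test\n' > fileA
printf '44 junk\n' > fileB

# Option 1: paste lines up the two files side by side,
# then awk picks out the first field of each original line
paste fileA fileB | awk '{print $1, $3}'

# Option 2: starting from the two one-column files already
# produced in the post, paste them joined by a space
awk '{print $1}' fileA > outputA
awk '{print $1}' fileB > outputB
paste -d' ' outputA outputB
```

Both commands print `12 44`. Option 2 is closest to the approach in the post; Option 1 skips the intermediate files entirely.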
UNIQ(1) FSF UNIQ(1)
NAME
uniq - remove duplicate lines from a sorted file
SYNOPSIS
uniq [OPTION]... [INPUT [OUTPUT]]
DESCRIPTION
Discard all but one of successive identical lines from INPUT (or standard input), writing to OUTPUT (or standard output).
Mandatory arguments to long options are mandatory for short options too.
-c, --count
prefix lines by the number of occurrences
-d, --repeated
only print duplicate lines
-D, --all-repeated[=delimit-method]
print all duplicate lines; delimit-method is one of none (default), prepend, or separate. Delimiting is done with blank lines.
-f, --skip-fields=N
avoid comparing the first N fields
-i, --ignore-case
ignore differences in case when comparing
-s, --skip-chars=N
avoid comparing the first N characters
-u, --unique
only print unique lines
-w, --check-chars=N
compare no more than N characters in lines
--help
display this help and exit
--version
output version information and exit
A field is a run of whitespace, then non-whitespace characters. Fields are skipped before chars.
AUTHOR
Written by Richard Stallman and David MacKenzie.
REPORTING BUGS
Report bugs to <bug-coreutils@gnu.org>.
COPYRIGHT
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
SEE ALSO
The full documentation for uniq is maintained as a Texinfo manual. If the info and uniq programs are properly installed at your site, the command
info uniq
should give you access to the complete manual.
uniq (coreutils) 4.5.3 February 2003 UNIQ(1)
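A quick sketch tying the options above back to the thread topic (the file name here is made up; note that uniq only compares adjacent lines, so the input must be sorted first):

```shell
# Sample data (hypothetical file): b, a, b, a, a, c
printf 'b\na\nb\na\na\nc\n' > items.txt

sort items.txt | uniq -c   # prefix each distinct line with its count
sort items.txt | uniq -d   # only lines that occur more than once (a, b)
sort items.txt | uniq -u   # only lines that occur exactly once (c)
```

Forgetting the sort is the classic pitfall: unsorted duplicates that are not adjacent slip straight through uniq.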