The best way to see the difference is to diff both files.
diff ID_file1.txt ID_file2.txt says the files differ.
To find out the difference, I ran hexdump on both files, and the difference shows up quite easily at the end of each line:
As you can see, there is a 0x0d followed by a 0x0a at the end of the line in ID_file2.txt, and just a 0x0a in ID_file1.txt.
PS: the output of hexdump should be read as follows:
1234 5678 9123 4567 -> 34, 12, 78, 56, 23, 91, 67, 45.
So, 6f43 746e 6769 0d31 430a 6e6f 6974 3267 is 0x43 0x6f 0x6e 0x74 0x69 0x67 0x31 0xd 0xa 0x43 0x6f ...
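To see this concretely (a sketch, assuming a BSD or util-linux hexdump is installed; the string Contig1 is taken from the dump above):

```shell
# Build a one-line file with a DOS (CR+LF) line ending.
printf 'Contig1\r\n' > crlf.txt

# Default hexdump output groups bytes into little-endian 16-bit
# words, which is why adjacent bytes look swapped:
hexdump crlf.txt

# The -C (canonical) format prints one byte at a time, in file
# order, with an ASCII column -- much easier to read:
hexdump -C crlf.txt
```

The `-C` form is usually the one you want when hunting for stray `0d` bytes.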
Yeah.. thanks for clarifying )) Excuse my ignorance - is this difference only in how the line ends are defined? Does that mean that (e.g. Perl) scripts will treat such lines as different? If so, how can the endings be fixed?
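Yes, the difference is only the line ending (CR+LF vs. LF), and tools that compare whole lines will treat the lines as different. A minimal fix, sketched on a sample file standing in for ID_file2.txt:

```shell
# Sample file with DOS (CR+LF) endings, standing in for ID_file2.txt:
printf 'Contig1\r\nContig2\r\n' > ID_file2.txt

# Remove every carriage return (0x0d); safe when CR only ever
# appears as part of a CR+LF line ending:
tr -d '\r' < ID_file2.txt > ID_file2.unix.txt

# With GNU sed, the same thing touching only end-of-line CRs:
#   sed 's/\r$//' ID_file2.txt > ID_file2.unix.txt
# Or, where the tool is installed:  dos2unix ID_file2.txt
```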
I have a file:
Fred
Fred
Fred
Jim
Fred
Jim
Jim
If sort is executed on the listed file, shouldn't the output be:
Fred
Fred
Fred
Fred
Jim
Jim
Jim (3 Replies)
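Yes - sort groups equal lines together, so (in the C locale at least) that is exactly the output:

```shell
# The file from the post:
printf 'Fred\nFred\nFred\nJim\nFred\nJim\nJim\n' > names.txt

# All the Freds collate before the Jims:
LC_ALL=C sort names.txt
```

If the output ever looks differently ordered, check the locale (`LC_COLLATE`), which changes the collation rules sort uses.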
Input File is :
-------------
25060008,0040,03,
25136437,0030,03,
25069457,0040,02,
80303438,0014,03,1st
80321837,0009,03,1st
80321977,0009,03,1st
80341345,0007,03,1st
84176527,0047,03,1st
84176527,0047,03,
20000735,0018,03,1st
25060008,0040,03,
I am using the following in the script... (5 Replies)
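The script itself isn't shown; if the goal is to drop lines whose first comma-separated field repeats (keeping the first occurrence), one common awk idiom is:

```shell
# A few sample rows from the thread:
printf '%s\n' '25060008,0040,03,' '25136437,0030,03,' \
              '84176527,0047,03,1st' '84176527,0047,03,' \
              '25060008,0040,03,' > input.txt

# seen[$1]++ is 0 (false) the first time a key appears, so the
# pattern !seen[$1]++ prints only the first line per key:
awk -F, '!seen[$1]++' input.txt
```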
Hi
Just wondering whether or not I can remove duplicated lines without sort.
For example, I use the who command, which shows the users who are logged on. In some cases it shows duplicated lines for users who are logged on to more than one terminal.
Normally, I would do
who | cut -d" " -f1 |... (6 Replies)
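One answer: awk's classic `!seen[$0]++` idiom prints each line the first time it appears and keeps the original order, so no sort is needed. A sketch, with sample names standing in for who's output:

```shell
# Duplicated logins, as  who | cut -d" " -f1  might produce them:
printf 'tom\ntom\njerry\ntom\nann\n' | awk '!seen[$0]++'

# In the pipeline from the post:
#   who | cut -d" " -f1 | awk '!seen[$0]++'
```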
So, I have a file that has some duplicate lines. The file has a header line that I would like to keep at the top.
I could do this by extracting the header from the file, 'sort -u' the remaining lines, and recombine them. But they are quite big, so if there is a way to do it with a single... (1 Reply)
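There is no need to split the file by hand: print the header, then pipe only the body through sort -u, in one command group. A sketch (file name assumed):

```shell
# Sample file: a header line plus duplicated body lines.
printf 'NAME\nJim\nFred\nJim\nFred\n' > data.txt

# head emits line 1; tail -n +2 streams the rest into sort -u.
# The file is opened twice, so sort never sees the header, and
# the body is streamed rather than held in memory by the shell.
{ head -n 1 data.txt; tail -n +2 data.txt | sort -u; } > data.sorted
cat data.sorted
```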
I have a flatfile A.txt
2012/12/04 14:06:07 |trees|Boards 2, 3|denver|mekong|mekong12
2012/12/04 17:07:22 |trees|Boards 2, 3|denver|mekong|mekong12
2012/12/04 17:13:27 |trees|Boards 2, 3|denver|mekong|mekong12
2012/12/04 14:07:39 |rain|Boards 1|tampa|merced|merced11
How do i sort and get... (3 Replies)
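The question is cut off; if the goal is one line per machine (the fields after the timestamp), keeping the earliest entry, then sorting by the leading timestamp and deduplicating on everything after it works:

```shell
printf '%s\n' \
  '2012/12/04 14:06:07 |trees|Boards 2, 3|denver|mekong|mekong12' \
  '2012/12/04 17:07:22 |trees|Boards 2, 3|denver|mekong|mekong12' \
  '2012/12/04 14:07:39 |rain|Boards 1|tampa|merced|merced11' > A.txt

# The timestamp leads each line, so a plain sort is chronological;
# awk then keeps the first (earliest) line per key, where the key
# is everything from the first "|" onward:
sort A.txt | awk '{key = substr($0, index($0, "|"))} !seen[key]++'
```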
hello, I have a large file (about 1gb) that is in a file similar to the following:
I want to rearrange it so that all the duplicates in column 3 (delimited by the commas) are shown at the top - meaning all people with the same age are grouped together at the top.
The command I used was ... (3 Replies)
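One two-pass approach (a sketch; the file name and exact column layout are assumed): count each age on the first pass over the file, prefix every line with its group size on the second, then sort by that count, largest first:

```shell
# Sample data in the assumed layout: name,id,age
printf '%s\n' 'ann,1,30' 'bob,2,40' 'cal,3,30' \
              'dan,4,50' 'eve,5,30' > people.txt

# Pass 1 (NR==FNR) counts ages; pass 2 prefixes each line with its
# count.  sort -rn puts the biggest duplicate groups on top, and
# cut drops the helper column again.
awk -F, 'NR==FNR {cnt[$3]++; next} {print cnt[$3] "," $0}' \
    people.txt people.txt | sort -t, -k1,1rn | cut -d, -f2- > grouped.txt
cat grouped.txt
```

The file is read twice, which is still cheap next to sorting 1 GB of data.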
Hi !
I am trying to remove doubled entries in a text file, but only between delimiters.
Like the example below, but I don't know how to do that with sort or similar.
input:
{
aaa
aaa
}
{
aaa
aaa
}
output:
{
aaa
}
{ (8 Replies)
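This can be done with awk instead of sort by clearing the "seen" table at every opening brace, so duplicates are only suppressed within one { ... } block (a sketch, assuming the braces sit on their own lines as in the example):

```shell
# The input from the post:
printf '{\naaa\naaa\n}\n{\naaa\naaa\n}\n' > blocks.txt

# split("", seen) is the portable way to empty an awk array;
# after each reset, !seen[$0]++ again passes only the first copy.
awk '$0 == "{" { split("", seen) } !seen[$0]++' blocks.txt
```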
Hello all,
Need to pick your brains,
I have a 10Gb file where each row is a name, and I am expecting only about 50 distinct names in total, so there are a lot of repetitions, in clusters.
So I want to do a
sort -u file
Will it be considerably faster or slower to use a uniq before piping it to sort... (3 Replies)
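uniq only collapses adjacent duplicates, so on its own it cannot replace sort -u; but because the repeats here come in clusters, an unsorted uniq pass first can shrink the 10 Gb to roughly one line per cluster before sort ever runs:

```shell
# Clustered repeats, like the 10Gb name file in miniature:
printf 'bob\nbob\nbob\nann\nann\nbob\nbob\n' > names.txt

# uniq squeezes each run down to one line; sort -u finishes the job.
uniq names.txt | sort -u
```

Whether this beats a plain `sort -u file` in practice depends on how well clustered the data really is; it is worth timing both on a sample.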
Hi All,
Below is the actual file which I'd like to sort and uniq -u.
/opt/oracle/work/Antony/Shell_Script> cat emp.1st
2233|a.k. shukula |g.m. |sales |12/12/52 |6000
1006|chanchal singhvi |director |sales |03/09/38 |6700... (8 Replies)
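The rest of the post is cut off; if the aim is `sort | uniq -u` - i.e. show only records that occur exactly once - the file must be sorted first, since uniq compares adjacent lines (a sketch, with a miniature emp.1st containing one fully duplicated record):

```shell
printf '%s\n' '2233|a.k. shukula |g.m. |sales' \
              '1006|chanchal singhvi |director |sales' \
              '2233|a.k. shukula |g.m. |sales' > emp.1st

# uniq -u keeps only lines with no identical neighbour, so the
# duplicated 2233 record disappears entirely:
sort emp.1st | uniq -u
```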
Discussion started by: Antony Ankrose
8 Replies
UNIQ(1)                   BSD General Commands Manual                  UNIQ(1)
NAME
uniq -- report or filter out repeated lines in a file
SYNOPSIS
uniq [-c | -d | -u] [-i] [-f num] [-s chars] [input_file [output_file]]
DESCRIPTION
The uniq utility reads the specified input_file comparing adjacent lines, and writes a copy of each unique input line to the output_file. If
input_file is a single dash ('-') or absent, the standard input is read. If output_file is absent, standard output is used for output. The
second and succeeding copies of identical adjacent input lines are not written. Repeated lines in the input will not be detected if they are
not adjacent, so it may be necessary to sort the files first.
The following options are available:
-c Precede each output line with the count of the number of times the line occurred in the input, followed by a single space.
-d Only output lines that are repeated in the input.
-f num Ignore the first num fields in each input line when doing comparisons. A field is a string of non-blank characters separated from
adjacent fields by blanks. Field numbers are one based, i.e., the first field is field one.
-s chars
Ignore the first chars characters in each input line when doing comparisons. If specified in conjunction with the -f option, the
first chars characters after the first num fields will be ignored. Character numbers are one based, i.e., the first character is
character one.
-u Only output lines that are not repeated in the input.
-i Case insensitive comparison of lines.
ENVIRONMENT
The LANG, LC_ALL, LC_COLLATE and LC_CTYPE environment variables affect the execution of uniq as described in environ(7).
EXIT STATUS
The uniq utility exits 0 on success, and >0 if an error occurs.
COMPATIBILITY
The historic +number and -number options have been deprecated but are still supported in this implementation.
SEE ALSO
sort(1)
STANDARDS
The uniq utility conforms to IEEE Std 1003.1-2001 (``POSIX.1'') as amended by Cor. 1-2002.
HISTORY
A uniq command appeared in Version 3 AT&T UNIX.
BSD December 17, 2009 BSD