Concatenate and sort to remove duplicates


 
# 1  
Old 12-17-2018

Following is the input. The 1st and 3rd blocks are the same (a block starts with '*' and ends before the next blank line), and the 2nd and 4th blocks are also the same:

Code:
cat <file>
* Wed Feb 24  2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7
- add vmcore dump support for ocfs2 [bug: 22822573]

* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3
- Fix stall on failure in kdump init script [bug: 21111440]
- kexec-tools: fix fail to find mem hole failure on i386  [bug: 21111440]

* Wed Feb 24  2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7
- add vmcore dump support for ocfs2 [bug: 22822573]

* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3
- Fix stall on failure in kdump init script [bug: 21111440]
- kexec-tools: fix fail to find mem hole failure on i386  [bug: 21111440]

Expected Output:

Code:
* Wed Feb 24  2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7
- add vmcore dump support for ocfs2 [bug: 22822573]

* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3
- Fix stall on failure in kdump init script [bug: 21111440]
- kexec-tools: fix fail to find mem hole failure on i386  [bug: 21111440]

I have shown only four blocks of the file; there are many more entries with duplicates.

I thought of combining the lines of each block and running uniq, but while some blocks have two lines, others have three or more.

Code:
cat file | paste -d' ' - - | uniq

(This does not work for blocks of more than two lines.)

Could someone tell how to achieve the desired output?


Moderator's Comments:
Mod Comment Please use CODE tags as required by forum rules!

Last edited by RudiC; 12-17-2018 at 07:43 AM.. Reason: Added CODE tags.
# 2  
Old 12-17-2018
Good idea, but, as you said, cat and paste aren't too flexible. How about this sed / sort approach that collects lines up to an empty one, replaces <newline> with a token (here: <CR> = \r), sorts the output, and undoes the replacement? Be aware that uniq also needs sorted input to work correctly.

Code:
sed -n '/^ *$/ !{H; $!b;}; {x; s/^\n//; s/\n/\r/g; s/$/\r/p;}' file | sort -u | sed 's/\r/\n/g'
* Mon Jun 8 2015 Brian Maly <brian.maly@mail.com> 2.0.7-1.0.3
- Fix stall on failure in kdump init script [bug: 21111440]
- kexec-tools: fix fail to find mem hole failure on i386  [bug: 21111440]

* Wed Feb 24  2016 Tariq Saeed <tariq.x.saeed@mail.com> 2.0.7-1.0.7
- add vmcore dump support for ocfs2 [bug: 22822573]

The original order is lost, though, which may not be a problem because the duplicate entries seem randomly distributed. To get something like an order by date, you could try (given your sort version provides the -M option)

Code:
sort -uM -k5,5 -k3,4

instead.
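For example, the full pipeline might look like the sketch below (GNU sort assumed; after joining, field 3 is the month name, field 4 the day, and field 5 the year; a final -k1 key is added so that -u only drops lines that are duplicates in full, not merely same-date lines):

```shell
# Join blocks to single lines, sort by year / month name / day,
# drop full duplicates, then restore the newlines.
sed -n '/^ *$/ !{H; $!b;}; {x; s/^\n//; s/\n/\r/g; s/$/\r/p;}' file |
sort -u -k5,5n -k3,3M -k4,4n -k1 |
sed 's/\r/\n/g'
```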

Last edited by RudiC; 12-17-2018 at 08:02 AM..
# 3  
Old 12-17-2018
How about using sed instead of paste to pre-process the file? We would first turn each block into a single line, then transform the lines back into blocks after processing them through uniq. Here is a naive try which might need refinement:

Transform the blocks to lines:
(edited - see RudiC's post, the same idea. Basically you replace all newline characters inside a block with a temporary replacement character to get one line, RudiC used "\r", but you can use any other string as well.)

or, even simpler, using fmt ("1000" must be larger than the number of characters a resulting line could grow to; replace it with a higher number if it does not suffice). Notice, though, that transforming this back into blocks takes more effort, because there is no replacement character marking where the original newlines were:
Code:
fmt -1000 /path/to/file > newfile

Transform the lines back to blocks (enter the ^M literally as an <ENTER>):
Code:
sed 's/<replacement-for-newline>/^M/g' /path/to/newfile > file
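A concrete round trip along these lines might look as follows (a sketch, not bakunin's exact commands; the control character \001 stands in for the newlines, on the assumption that it never occurs in the data, and awk's paragraph mode does the block splitting):

```shell
# Blocks -> single lines: RS="" splits on blank lines, \001 replaces newlines
awk 'BEGIN { RS = "" } { gsub("\n", "\001"); print }' file |
sort -u |
# Lines -> blocks: restore the newlines, blank line after each block
awk '{ gsub("\001", "\n"); print $0 "\n" }'
```

As with the other sort -u approaches, the original block order is lost.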

I hope this helps.

bakunin
# 4  
Old 12-17-2018
I don't understand. Why not make it easy?
Code:
awk '!t[$0]++' file

--- Post updated at 19:08 ---

I think I understand now: the blocks span varying numbers of lines, so a per-line comparison is not enough. Join each block onto a single line first:
Code:
sed -rz 's/\n([^\n])/\1/g' file
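Putting the two steps together, a sketch of a full pipeline (GNU sed assumed for -z and \r; \r marks the joins so they can be undone after the dedup):

```shell
# One line per block: \r replaces in-block newlines, then the leftover
# \n\r pairs at the old blank-line boundaries collapse to a single \n
sed -rz 's/\n([^\n])/\r\1/g; s/\n\r/\n/g' file |
awk '!seen[$0]++' |              # keep the first occurrence of each block
sed 's/\r/\n/g; s/$/\n/'         # restore newlines, blank line after each block
```

Unlike the sort -u variants, awk keeps the first occurrence, so the original order of the blocks is preserved.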


Last edited by nezabudka; 12-17-2018 at 03:13 PM..
# 5  
Old 12-17-2018
I partially agree with nezabudka, awk is the way to go here; but we need to make each block a record, not each line. By setting RS to an empty string, we can tell awk that records are separated by sequences of a <newline> followed by one or more blank lines. Given this fact, the following should work:
Code:
awk '
BEGIN {	RS = ""
}
!($0 in seen) {
	seen[$0]
	printf("%s%s\n", (NR == 1) ? "" : "\n", $0)
}' file

which prints the first occurrence of each record found in the file named file to its output. Note that the above code does not print an empty line before the 1st output record or after the last output record. The code could be simplified if you always want to print an empty line after each output record.
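For instance, if a trailing blank line after the last record is acceptable, the same idea collapses to a one-liner (ORS supplies the blank-line separator, and first-occurrence order is preserved):

```shell
# RS="" makes each blank-line-separated block one record;
# !seen[$0]++ prints a record only the first time it is seen
awk 'BEGIN { RS = ""; ORS = "\n\n" } !seen[$0]++' file
```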

If you want to try this on a Solaris/SunOS system, change awk to /usr/xpg4/bin/awk or nawk.