04-03-2012
Thanks bartus, but it is printing the whole file again. It is not removing the duplicates.
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi all,
Can anyone please let me know if there is a way to find duplicate rows in a file? I have a file that has hundreds of numbers (one per row).
I want to find out the numbers that are repeated in the file.
eg.
123434
534
5575
4746767
347624
5575
I want the output to be 5575.
Please help. (3 Replies)
Discussion started by: infyanurag
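A minimal sketch of one way to do this, assuming the numbers sit one per line (numbers.txt is a hypothetical name): sorting makes duplicates adjacent, and uniq -d then prints each repeated value once.
  $ sort numbers.txt | uniq -d
  5575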
2. Shell Programming and Scripting
I have a file with content like the lines below.
"0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","",""
"0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","",""
"0000000","ABLNCYI","BOTH",1049,2058,"XYZ","5711002","","Y","","","","","","","",""... (5 Replies)
Discussion started by: vamshikrishnab
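Assuming the goal here is to drop the repeated lines while keeping the first occurrence in its original position, a common awk one-liner (data.csv is a hypothetical name) is:
  $ awk '!seen[$0]++' data.csv
seen[$0]++ counts how often a whole line has been seen; the leading ! makes the pattern true only on the first sighting, so only that copy is printed.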
3. Shell Programming and Scripting
I have searched the internet for duplicate-row extraction.
All I have seen is extracting good rows or eliminating duplicate rows.
How do I extract the duplicate rows themselves from a flat file in Unix?
I'm using Korn shell on HP-UX.
For example:
FlatFile.txt
========
123:456:678
123:456:678
123:456:876... (5 Replies)
Discussion started by: bobbygsk
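A sketch for extracting only the duplicated rows; sort and uniq are both POSIX tools, so this also works under Korn shell on HP-UX:
  $ sort FlatFile.txt | uniq -d
  123:456:678
uniq -d prints one copy of each line that occurs more than once; use uniq -c instead to see the occurrence counts.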
4. HP-UX
Hi all,
I have written a shell script. The output file of this script contains SQL output.
In that file, I want to extract the rows which have multiple entries (duplicate rows).
For example, the output file will look like the following.
... (7 Replies)
Discussion started by: raghu.iv85
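Since the sample output is truncated, here is a sketch under the assumption that whole lines repeat (report.txt is a hypothetical name). Reading the file twice lets awk print every occurrence of each duplicated row, in the original order:
  $ awk 'NR==FNR { count[$0]++; next } count[$0] > 1' report.txt report.txt
The first pass (NR==FNR) only counts lines; the second pass prints the lines whose total count exceeds one.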
5. Shell Programming and Scripting
Hi, I have a huge amount of data stored in a file. I need to remove duplicate rows where only the last column differs: among each set of duplicates I must find the greatest value in the last column and print that row along with the other entries, so that just one of the duplicate entries is... (16 Replies)
Discussion started by: reva
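A sketch of one reading of this truncated request, assuming duplicates are rows that agree on every field except the last and that the row with the numerically greatest last field should be kept (data.txt is a hypothetical name):
  $ awk '{
        key = $1
        for (i = 2; i < NF; i++) key = key OFS $i    # key = all fields except the last
        if (!(key in max) || $NF + 0 > max[key] + 0) {
            max[key] = $NF                           # track the greatest last column per key
            row[key] = $0
        }
    } END { for (k in row) print row[k] }' data.txt
The for (k in row) loop emits groups in an unspecified order; pipe the result through sort if order matters.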
6. Ubuntu
Hi everybody,
I have a text file with a lot of duplicate rows, like this:
165.179.568.197
154.893.836.174
242.473.396.153
165.179.568.197
165.179.568.197
165.179.568.197
154.893.836.174
How can I delete the repeated rows?
Thanks
Saeideh (2 Replies)
Discussion started by: sashtari
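If the order of the rows does not matter, sort -u is the shortest fix; otherwise the awk '!seen[$0]++' one-liner shown under discussion 2 deletes the repeats while keeping each first occurrence in place (ips.txt is a hypothetical name):
  $ sort -u ips.txt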
7. Shell Programming and Scripting
Hi! I have a file as below:
line1
line2
line2
line3
line3
line3
line4
line4
line4
line4
I would like to extract only the duplicate lines (not the unique, triplicate, or quadruplicate lines). The output will be as below:
line2
line2
I would appreciate it if anyone can help. Thanks. (4 Replies)
Discussion started by: chromatin
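This is the same two-pass counting idiom as in discussion 4, but keeping only the lines whose total count is exactly 2, so both copies appear in the output:
  $ awk 'NR==FNR { count[$0]++; next } count[$0] == 2' file file
  line2
  line2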
8. Shell Programming and Scripting
Hi all,
Please help me with this: I want to extract the duplicate rows (by column 1) in a file that repeat at least 4 times, and then summarize them by getting the max, mean, median, and min. The file is sorted by column 1, so all the repeated rows appear together.
If the number of elements is... (5 Replies)
Discussion started by: ritakadm
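A GNU awk sketch of the summary step, with two loud assumptions: the value to summarize sits in column 2, and gawk's asort() is available for the median. Because the file is sorted by column 1, each group can be reported as soon as the key changes (data.txt is a hypothetical name):
  $ gawk '
      prev != $1 && NR > 1 { report() }  # key changed: summarize the finished group
      { prev = $1; v[++n] = $2 + 0 }     # assumption: column 2 holds the value
      END { report() }
      function report(   i, sum, med) {
          if (n >= 4) {                  # only keys repeated at least 4 times
              asort(v)                   # gawk-only: sort the group for min/median/max
              for (i = 1; i <= n; i++) sum += v[i]
              med = (n % 2) ? v[(n + 1) / 2] : (v[n / 2] + v[n / 2 + 1]) / 2
              printf "%s min=%s max=%s mean=%.2f median=%s\n", prev, v[1], v[n], sum / n, med
          }
          delete v; n = 0
      }' data.txt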
9. Shell Programming and Scripting
Gents
Can you help, please?
Input file
5490921425 1 7 1310342 54909214251
5490921425 2 1 1 54909214252
5491120937 1 1 3 54911209371
5491120937 3 1 1 54911209373
5491320785 1 ... (4 Replies)
Discussion started by: jiam912
10. Shell Programming and Scripting
Dear folks
I have a map file of around 54K lines, and some of the rows have the same value in the second column. I want to find them and delete all rows with those values. I looked over the duplicate-handling commands, but my case is not to keep one of the duplicate values; I want to remove all of the same... (4 Replies)
Discussion started by: sajmar
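A two-pass sketch of "keep nothing that repeats": the first pass counts the second-column values, and the second pass prints only the rows whose value occurred exactly once (map.txt is a hypothetical name).
  $ awk 'NR==FNR { count[$2]++; next } count[$2] == 1' map.txt map.txt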
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
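A hedged monitoring sketch along those lines; the parsing leans on the "matching prefix bits" line shown in the EXAMPLE section below, and the 100-bit alert threshold is an arbitrary assumption:
  $ bits=$(bup margin | awk '/matching prefix bits/ { print $1 }')
  $ [ "$bits" -ge 100 ] && echo "warning: only $((160 - bits)) bits of SHA-1 margin left"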
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.