I have a file:
Fred
Fred
Fred
Jim
Fred
Jim
Jim
If sort is run on this file, shouldn't the output be the following?
Fred
Fred
Fred
Fred
Jim
Jim
Jim (3 Replies)
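Yes: sort orders lines lexicographically, so all the Fred lines come before the Jim lines, exactly as shown. A minimal sketch, assuming the names live in a hypothetical file names.txt:
sort names.txt          # groups the four Freds ahead of the three Jims
sort names.txt | uniq   # additionally collapses the adjacent duplicates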
Using the last, uniq, sort and cut commands, determine how many times the different users have logged in.
I know how to use the last command and cut command...
I came up with last | cut -f1 -d" " | uniq
I don't know if this is right, can someone please help me... thanks (1 Reply)
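One likely fix, since uniq only collapses adjacent lines: sort the names first, and let uniq -c do the counting. A sketch, assuming standard coreutils:
last | cut -f1 -d" " | sort | uniq -c   # login count per user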
Input file is:
-------------
25060008,0040,03,
25136437,0030,03,
25069457,0040,02,
80303438,0014,03,1st
80321837,0009,03,1st
80321977,0009,03,1st
80341345,0007,03,1st
84176527,0047,03,1st
84176527,0047,03,
20000735,0018,03,1st
25060008,0040,03,
I am using the following in the script... (5 Replies)
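The script itself is elided above, but one possible sketch for keeping a single line per leading field, assuming the hypothetical names infile.txt and outfile.txt:
sort -t, -k1,1 -u infile.txt > outfile.txt   # one line per first comma-separated field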
Hello,
I have a large data file:
1234 8888 bbb
2745 8888 bbb
9489 8888 bbb
1234 8888 aaa
4838 8888 aaa
3977 8888 aaa
I need to remove duplicate lines (lines whose first column is duplicated). I have been using:
sort file.txt | uniq -w4 > newfile.txt
However, it seems to keep the... (11 Replies)
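uniq -w4 compares only the first 4 characters of adjacent lines and keeps the first line of each run, which may not be the one wanted. A common alternative that keys on the whole first column and keeps the first occurrence in the original order, sketched with awk:
awk '!seen[$1]++' file.txt > newfile.txt   # print a line only the first time its column 1 appears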
Hi again,
I have files with the following contents
datetime,ip1,port1,ip2,port2,number
How would I find out how many times the ip1 field shows up in a particular file? Then how would I find out how many times the ip1 and port2 combination shows up?
Please mind that the file may contain 100k lines. (8 Replies)
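A sketch with cut, sort, and uniq, assuming the comma-separated layout above (log.csv is a hypothetical name):
cut -d, -f2 log.csv | sort | uniq -c     # occurrences of each ip1 (field 2)
cut -d, -f2,5 log.csv | sort | uniq -c   # occurrences of each ip1,port2 pair (fields 2 and 5)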
Hi All,
I am searching for a script which will produce an output file with each unique first field paired with the highest second-field value among its duplicates.
The output file should contain only the keys that are duplicated 3 times.
Input file
X 9
B 5
A 1
Z 9
T 4
C 9
A 4... (13 Replies)
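A possible awk sketch, assuming positive values and that "duplicated 3 times" means the key appears exactly three times:
awk '{ cnt[$1]++; if ($2 > max[$1]) max[$1] = $2 }
     END { for (k in cnt) if (cnt[k] == 3) print k, max[k] }' infile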
Hi !
I am trying to remove doubled entries in a text file, but only between delimiters.
Like the example below, but I don't know how to do that with sort or similar.
input:
{
aaa
aaa
}
{
aaa
aaa
}
output:
{
aaa
}
{ (8 Replies)
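A sketch in awk, assuming an implementation such as gawk where delete can clear a whole array: reset the record of seen lines at every opening brace, then print each line only the first time it appears within the current block.
awk '$0 == "{" { delete seen } !seen[$0]++' input   # seen[] is emptied at each "{"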
Hello all,
Need to pick your brains,
I have a 10 GB file where each row is a name, and I am expecting only about 50 distinct names in total, so there are a lot of repetitions in clusters.
So I want to do a
sort -u file
Will it be considerably faster or slower to use a uniq before piping it to sort... (3 Replies)
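If the repeats really are clustered, a plain uniq collapses each run before sort ever sees it, which can shrink the input dramatically. A sketch of two options, assuming GNU coreutils:
uniq file | sort -u       # collapse adjacent repeats first, then sort the survivors
awk '!seen[$0]++' file    # one-pass alternative; with ~50 distinct names memory stays tiny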
Hi All,
Below is the actual file which I'd like to sort and uniq -u:
/opt/oracle/work/Antony/Shell_Script> cat emp.1st
2233|a.k. shukula |g.m. |sales |12/12/52 |6000
1006|chanchal singhvi |director |sales |03/09/38 |6700... (8 Replies)
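A possible sketch, assuming the goal is to sort on the leading numeric employee ID and keep only the lines that occur exactly once:
sort -t'|' -k1,1n emp.1st | uniq -u   # uniq -u drops every line that has a duplicate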
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
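A usage sketch combining the two documented options:
$ bup margin --predict --ignore-midx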
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.