Filtering out duplicates with the highest version number
Hi,
I have a huge text file of filenames which look like the following, i.e. uniquenumber_version_filename:
e.g.
What I need to do is examine the file, look for duplicate uniquenumbers and then filter out the uniquenumber with the highest version, so for example in the above it would be the following:
Is there a scripted method by which I can do this?
Thanks in advance
Mantis
Last edited by Franklin52; 10-30-2012 at 07:12 AM..
Reason: Please use code tags for data and code samples
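A minimal sketch of one way to do this, assuming the layout really is uniquenumber_version_filename with underscores as separators (the sample data above was not preserved, so the filenames below are made up):

```shell
# Hypothetical sample data in the uniquenumber_version_filename layout
printf '%s\n' 1234_1_report.txt 1234_3_report.txt 5678_2_data.txt 1234_2_report.txt > files.txt

# Sort by uniquenumber, then by version numerically with the highest
# first, and keep only the first line seen for each uniquenumber.
sort -t_ -k1,1 -k2,2nr files.txt | awk -F_ '!seen[$1]++'
```

awk's `!seen[$1]++` prints a line only the first time its first field appears, so after the sort that first line is the one with the highest version.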
Hi,
I have a file a.txt and it has values in it
Eg :-
I need to read through the file and find the number that is the greatest in them all.
Can anyone assist me with this?
Thanks (30 Replies)
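Two common one-liners for this, assuming one number per line in a.txt (the original sample values were lost above, so the data here is made up):

```shell
# Made-up sample values, one number per line
printf '%s\n' 3 17 5 > a.txt

# Sort numerically and take the last line...
sort -n a.txt | tail -n 1

# ...or find the maximum in a single awk pass
awk 'NR == 1 || $1 > max {max = $1} END {print max}' a.txt
```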
Hi Gurus,
I've using HPUX B.11.23 U ia64 with shell = sh.
I've been having some trouble getting the highest number in this script.
Actually, I want to get the highest number from this listing (TEST123 data), and based on this highest number an email will be sent out.
For example,... (6 Replies)
Hello all, I am new to this and need some help, or maybe someone can steer me in the right direction!
I wrote a script to get the highest number and print it on the screen; the script basically asks the user to input numbers and then prints the highest one! Very simple
it works like this
$sh max.sh... (8 Replies)
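The thread's script was not quoted, but a minimal max.sh in that spirit might look like this (taking the numbers as arguments rather than prompting, to keep the sketch short):

```shell
#!/bin/sh
# Print the highest of the integers given as arguments,
# e.g.: sh max.sh 4 9 2
max=$1
for n in "$@"; do
    if [ "$n" -gt "$max" ]; then
        max=$n
    fi
done
echo "Highest: $max"
```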
Just want to say this is a great resource for all things Unix!!
cat tmp.txt
A 3
C 19
A 2
B 5
A 1
A 0
C 13
B 9
C 1
Desired output:
A 3
B 9
C 19
The following works, but I am wondering if there is a better way to do it: (4 Replies)
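One compact alternative is to track the running maximum per key in awk. This assumes the values are non-negative, since an unset awk array element compares numerically as zero:

```shell
printf '%s\n' 'A 3' 'C 19' 'A 2' 'B 5' 'A 1' 'A 0' 'C 13' 'B 9' 'C 1' > tmp.txt

# Remember the largest second field seen for each first field,
# then print one line per key, sorted.
awk '$2 > max[$1] {max[$1] = $2} END {for (k in max) print k, max[k]}' tmp.txt | sort
```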
Hello,
I want to filter all the duplicates of a record to one place. Sample input and output will give you a better idea.
I am new to Unix. Can someone help me with this?
Input:
7488 7389 chr1.fa chr1.fa
3546 9887 chr5.fa chr9.fa
7387 7898 chrX.fa chr3.fa
7488 7389 chr1.fa chr1.fa... (2 Replies)
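The desired output was not preserved above, but if "to one place" means grouping identical records together, plain sort does it, and uniq -d then reports each record that occurs more than once:

```shell
printf '%s\n' \
  '7488 7389 chr1.fa chr1.fa' \
  '3546 9887 chr5.fa chr9.fa' \
  '7387 7898 chrX.fa chr3.fa' \
  '7488 7389 chr1.fa chr1.fa' > input.txt

# Sorting puts identical records next to each other;
# uniq -d prints one copy of each duplicated record.
sort input.txt | uniq -d
```

With GNU uniq, `-D` prints all copies of each duplicated record instead of one.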
There are some duplicate fields in the description column. I want to print the duplicate rows along with the highest version number and the corresponding description column.
file1.txt
number Description
=== ============
34567 nl21a00is-centerdb001:ncdbareq:Error in loading init
34577 ... (7 Replies)
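A sketch of one way to do this, with made-up rows standing in for the truncated file1.txt data: sort by the number column, highest first, then keep the first row seen for each description.

```shell
# Hypothetical data rows (header lines omitted); the second row is
# assumed to share its description with the first.
printf '%s\n' \
  '34567 nl21a00is-centerdb001:ncdbareq:Error in loading init' \
  '34577 nl21a00is-centerdb001:ncdbareq:Error in loading init' \
  '12000 otherhost:otherjob:Some other error' > file1.txt

# Sort numerically on column 1, highest first, then keep the first row
# per description (everything after the first field).
sort -k1,1nr file1.txt | awk '{d = $0; sub(/^[^ ]+ +/, "", d)} !seen[d]++'
```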
Hi folks,
I have a log file in the format below and am trying to get output of the unique entries based on mnemonic, in Perl.
Could anyone please let me know the code and the logic?
Severity Mnemonic Log Message
7 CLI_SCHEDULER Logfile for scheduled CLI... (3 Replies)
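The asker wanted Perl; the equivalent Perl one-liner would be `perl -ane 'print unless $seen{$F[1]}++'`. Here is the same idea in awk, keeping the first log line seen for each mnemonic (assumed to be the second field); the log lines are invented:

```shell
# Invented log lines in the Severity / Mnemonic / Message layout
printf '%s\n' \
  '7 CLI_SCHEDULER Logfile for scheduled CLI one' \
  '5 CLI_SCHEDULER Logfile for scheduled CLI two' \
  '3 SYS_RELOAD System restarted' > cli.log

# Keep the first line seen for each mnemonic (field 2).
awk '!seen[$2]++' cli.log
```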
Please help me solve the following. I have access to a Red Hat Linux cluster with 32 GB of RAM.
I have duplicate IDs for variable names: in the file, 1 and 2 are duplicates; 3, 4 and 5 are duplicates; 6 and 7 are duplicates. My objective is to use only the first occurrence of these duplicates.
Lookup... (4 Replies)
Hi Guys,
I am looking for a way to sort the output below by the "Inuse" count, from highest to lowest. Is it possible?
Thanks in advance.
user1 0.12 0.06 0 0.12
User Inuse Pin Pgsp Virtual
Unit:... (4 Replies)
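Assuming the listing has a single header line and the Inuse count is the second column (the snippet above is truncated), one approach is to print the header as-is and sort only the data rows. The file name and values here are invented:

```shell
# Invented svmon-style listing
cat > usage.txt <<'EOF'
User Inuse Pin Pgsp Virtual
user1 0.12 0.06 0 0.12
user2 0.50 0.01 0 0.50
EOF

# Keep the header, then sort the data rows on column 2,
# numerically, highest first.
head -n 1 usage.txt
tail -n +2 usage.txt | sort -k2,2nr
```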
Hi All,
Need help here, can you tell me the syntax to grep the line with the highest file version?
0 04-05-2016 08:00 lib/SBSSchemaProject.jar/schemas/
0 04-05-2016 08:00 lib/SBSSchemaProject.jar/schemas/airprice/
0 04-05-2016 08:00 ... (2 Replies)
Discussion started by: 100rin
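If the entries carry a version somewhere in the name (the listing above is truncated, so the paths below are invented stand-ins), GNU sort's -V version sort can pick out the highest one:

```shell
# Invented archive-listing lines; the path is the last field
printf '%s\n' \
  '0 04-05-2016 08:00 lib/SBSSchemaProject-1.2.jar' \
  '0 04-05-2016 08:00 lib/SBSSchemaProject-1.10.jar' \
  '0 04-05-2016 08:00 lib/SBSSchemaProject-1.9.jar' > listing.txt

# Take the path column, version-sort it, keep the highest.
# Note: -V is a GNU sort extension.
awk '{print $NF}' listing.txt | sort -V | tail -n 1
```

Version sort orders 1.9 before 1.10, which a plain lexical sort would get wrong.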
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown-bup-margin(1)