Note that the empty string "" and the numeric value 0 both evaluate as false in a boolean context.
So in this case you're saying: perform the default action (print the current record) whenever the expression _[$1]++ evaluates to false; the exclamation mark ! negates it, so a record is printed only the first time its first field is seen.
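A minimal sketch of the idiom in action — the array element starts out empty, so the first record with a given first field prints and later duplicates are suppressed:

```shell
# _[$1]++ is 0 (false) the first time a key is seen, so !_[$1]++
# is true exactly once per key: only the first "a" and "b" lines print.
printf 'a 1\na 2\nb 3\na 4\nb 5\n' | awk '!_[$1]++'
```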
Hi All,
My question is whether a simple but powerful shell script can extract data from a big data file using a list of identifiers. I used to put everything into a database and do joins, which sounds stupid, but it was the only way I knew. For example, my data file looks like,
... (3 Replies)
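One common no-database approach, sketched with hypothetical file names and layouts (here ids.txt holds one identifier per line and data.txt has the identifier in its first column — adjust the field numbers to match the real data):

```shell
# Hypothetical sample data.
printf 'id2\nid4\n' > ids.txt
printf 'id1 foo\nid2 bar\nid3 baz\nid4 qux\n' > data.txt

# First pass (NR==FNR) loads the identifier list into an array;
# second pass prints only records whose first field is in the list.
awk 'NR==FNR { want[$1]; next } $1 in want' ids.txt data.txt
```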
I have a very long string (millions of characters).
I have a file with start location and length that is thousands of rows long:
Start Length
5 10
16 21
44 100
215 37
...
I'd like to extract the substring that corresponds to the start and length from each row of the list:
I tried... (7 Replies)
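A sketch of one way to do this with awk's substr, under the assumption that the long string sits on a single line in its own file and the offsets file keeps the "Start Length" header shown above (file names here are hypothetical):

```shell
# Hypothetical sample data: a short "long string" and two offset rows.
printf 'abcdefghijklmnopqrstuvwxyz' > string.txt
printf 'Start Length\n5 10\n16 5\n' > offsets.txt

awk 'NR==FNR { s = $0; next }            # first file: the long string
     FNR > 1 { print substr(s, $1, $2) } # skip the header row
    ' string.txt offsets.txt
```

substr uses 1-based positions, matching the "Start" column as written.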
Hi,
I have the profile list
ldapsearch -D cn=root -w root -p 389 mail=dgmp_ndl@bsnl.in
mail=dgmp_ndl@bsnl.in,o=Data Networks,o=Data Networks,o=Data Networks,o=Messagingdb,dc=bsnl,dc=in
I need to separate only the mail ID.
I am using bash-2.05b$ ldapsearch -D cn=root -w root -p 389... (3 Replies)
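A sketch of one way to strip the DN down to just the address, fed with a line like the sample output above (the sed expression keeps whatever sits between mail= and the first comma; adjust if your DN layout differs):

```shell
# Keep only the value of the leading mail= attribute.
printf 'mail=dgmp_ndl@bsnl.in,o=Data Networks,o=Messagingdb,dc=bsnl,dc=in\n' |
  sed -n 's/^mail=\([^,]*\),.*/\1/p'
```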
Need to search for a pattern's occurrence count in a specified file.
Below are the details:
$ cat fruits
apple apple
ball ball
apple
ball ball ball
apple apple apple
cat cat
cat cat cat
apple
apple
Note: If I use the grep command with the -c option, then it will count the 1st occurrence in... (6 Replies)
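The usual workaround for grep -c counting matching lines rather than matches: -o emits each match on its own line, so wc -l gives the true occurrence count. Sketched with a subset of the fruits file above:

```shell
printf 'apple apple\nball ball\napple\nball ball ball\napple apple apple\n' > fruits
# grep -c would report 3 (lines containing "apple");
# -o prints one match per line, so this counts all 6 occurrences.
grep -o 'apple' fruits | wc -l
```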
Hi Gents,
I have a file 1 like this
1 1000 20
2 2000 30
3 1000 40
5 1000 50
And I have another file, file 2, like
2 1
I would like to get from file 1 the complete lines which are in file 2; the key to compare is column 2, so the output should be:
2 2000 30
I was trying to get it... (5 Replies)
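The post is ambiguous about which column is the key (the expected output line matches on the first field of each file), so this sketch matches the first field of file 2 against the first field of file 1 — swap the $1s for $2s if the real key differs:

```shell
printf '1 1000 20\n2 2000 30\n3 1000 40\n5 1000 50\n' > file1
printf '2 1\n' > file2

# Load the keys from file2, then print matching lines of file1.
awk 'NR==FNR { keys[$1]; next } $1 in keys' file2 file1
```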
Hi,
I have a file with more than 28000 records, and it looks like below:
>mm10_refflat_ABCD range=chr1:1234567-2345678
tgtgcacactacacatgactagtacatgactagac....so on
>mm10_refflat_BCD range=chr1:3234567-4545678...
tgtgcacactacacatgactagtatgtgcacactacacatgactagta
.
.
.
.
.
so on
... (2 Replies)
Please can someone help with this?
I have a file with lines as follows:
word1 word2 word3 word4 word5 word6 word7 word8
word1 word2 word3 word4 word5 word6 word7 word8
word1 word2 word3 word4 word5 word6 word7 word8
word1 word2 word3 word4 word5 word6 word7 word8
When I use the... (7 Replies)
Assume a string that contains one or multiple occurrences of three different keywords (abbreviated as "kw"). I would like to replace kw2 with some other string, say "qux". Specifically, I would like to replace that occurrence of kw2 that is the first one that is preceded by kw1 somewhere in the... (4 Replies)
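The question is cut off above, so this is only a sketch of one reading — replace the first kw2 that appears anywhere after the first kw1 with "qux", leaving earlier kw2s alone (the 3 in the substr arithmetic is the hardcoded length of "kw2"):

```shell
printf 'kw2 kw1 kw2 kw3 kw2\n' |
  awk '{
    i = index($0, "kw1")            # position of the first kw1
    if (i) {
      rest = substr($0, i)
      j = index(rest, "kw2")        # first kw2 at or after kw1
      if (j)                        # splice in "qux" over those 3 chars
        $0 = substr($0, 1, i+j-2) "qux" substr($0, i+j+2)
    }
    print
  }'
```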
I am trying to use awk to extract and print the first occurrence of NM_ and NP_, each preceded by a :, in each line. The input file is tab-delimited, but the output does not need to be. The below does execute but prints all the lines in the file, not just the patterns. Thank you :).
file (tab-delimited)
... (2 Replies)
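Since the input file above is elided, here is a sketch against a made-up tab-delimited line; match() finds the leftmost (first) occurrence, and RSTART/RLENGTH let substr drop the leading colon:

```shell
# Hypothetical input line with several :NM_ tokens and one :NP_ token.
printf 'geneA\tfoo:NM_000546.6;bar:NM_999.1\tbaz:NP_000537.3\n' |
  awk '{
    nm = np = ""
    if (match($0, /:NM_[A-Za-z0-9._]+/))
      nm = substr($0, RSTART+1, RLENGTH-1)   # first :NM_..., colon stripped
    if (match($0, /:NP_[A-Za-z0-9._]+/))
      np = substr($0, RSTART+1, RLENGTH-1)   # first :NP_..., colon stripped
    print nm, np
  }'
```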
Hi, I have a file file.txt with data like:
START
03:11:30 a
03:11:40 b
END
START
03:13:30 eee
03:13:35 fff
END
jjjjjjjjjjjjjjjjjjjjj
START
03:14:30 eee
03:15:30 fff
END
ggggggggggg
iiiiiiiiiiiiiiiiiiiiiiiii
I want the output below:
START (13 Replies)
Discussion started by: Jyotshna
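The classic tool for this is awk's range pattern, which prints every line from a START through the next END and drops everything outside the blocks — a sketch using a cut-down version of the data above:

```shell
cat > file.txt <<'EOF'
START
03:11:30 a
03:11:40 b
END
jjjjjjjjjjjjjjjjjjjjj
START
03:13:30 eee
END
EOF

# /start/,/end/ is a range: true from a line matching the first
# pattern through the next line matching the second, inclusive.
awk '/^START$/,/^END$/' file.txt
```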
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
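The margin arithmetic described here is just subtraction over SHA-1's 160 bits; a one-liner to check the 115-bit figure from the 45-bit example above:

```shell
# 160 total SHA-1 bits minus the 45 matching prefix bits observed.
awk 'BEGIN { print 160 - 45 }'
```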
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown-bup-margin(1)