07-08-2010
Yes, you can do that if you have multiple files in a directory...
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi
I have a file with 1000 rows and I would like to remove 10 rows from it. Please give me the script.
Eg:
input file like
4 1 4500.0 1
5 1 1.0 30
6 1 1.0 4500
7 1 4.0 730
7 2 500000.0 730
8 1 785460.0 45
8 7 94255.0 30
9 1 31800.0 30
9 4 36000.0 30
10 1 15000.0 30... (5 Replies)
Discussion started by: suresh3566
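For the common case of dropping the first 10 rows, a short sketch (file names are placeholders):

```shell
# Delete lines 1-10 and write the remainder to a new file.
sed '1,10d' input.txt > output.txt

# Equivalent: start printing at line 11.
tail -n +11 input.txt > trimmed.txt
```

Both forms leave the original file untouched; adjust the line range to match the rows you actually want removed.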
2. Shell Programming and Scripting
Hi Guys,
I need help modifying a large text file containing 100,000 to 200,000 (1-2 lakh) rows of data using UNIX commands. I am quite new to UNIX.
The text file contains data in a pipe-delimited format
sdfsdfs
sdfsdfsd
START_ROW
sdfsd|sdfsdfsd|sdfsdfasdf|sdfsadf|sdfasdf... (9 Replies)
Discussion started by: manish2009
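The post is truncated, so the exact edit wanted is unknown; as a sketch only, the general awk pattern for this layout is to set `|` as the field separator and only touch rows after the START_ROW marker (the field number and the uppercase edit below are placeholder assumptions):

```shell
# Set '|' as input and output separator; modify only rows after START_ROW.
awk -F'|' 'BEGIN { OFS = FS }
  /^START_ROW$/ { in_data = 1; print; next }
  in_data       { $2 = toupper($2) }   # placeholder edit: uppercase field 2
  { print }' data.txt
```

awk streams the file line by line, so this scales to a few hundred thousand rows without trouble.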
3. Shell Programming and Scripting
I need to delete rows based on the number of lines in a different file. I have a piece of code that works on its own, but when I merge it with my C application, it doesn't work.
sed '1,'\"`wc -l < /tmp/fileyyyy`\"'d' /tmp/fileA > /tmp/filexxxx
Can anyone give me an alternate solution for the above? (2 Replies)
Discussion started by: Muthuraj K
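One alternate approach, a sketch that avoids the nested backquote/double-quote escaping entirely, is to let awk count the first file itself in a two-file pass:

```shell
# First pass (NR == FNR): count the lines of fileyyyy.
# Second pass: print only the lines of fileA past that count.
awk 'NR == FNR { n++; next } FNR > n' /tmp/fileyyyy /tmp/fileA > /tmp/filexxxx
```

Because the awk program is a single single-quoted string with no embedded backquotes or double quotes, it tends to survive being embedded in a C `system()` call more easily than the original sed pipeline.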
4. Ubuntu
Hi every body
I have some text files with a lot of duplicate rows like this:
165.179.568.197
154.893.836.174
242.473.396.153
165.179.568.197
165.179.568.197
165.179.568.197
154.893.836.174
How can I delete the repeated rows?
Thanks
Saeideh (2 Replies)
Discussion started by: sashtari
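The classic one-liners for this, sketched with a placeholder file name:

```shell
# Keep the first occurrence of every line, preserving input order:
# seen[$0]++ is 0 (false) the first time a line appears, so !seen[$0]++
# is true exactly once per distinct line.
awk '!seen[$0]++' ips.txt

# If output order doesn't matter, sort -u does the same job:
sort -u ips.txt
```

The awk version holds each distinct line in memory once, which is fine for lists of this size.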
5. UNIX for Dummies Questions & Answers
Hello,
Merry Christmas to all! I wish you the best for these holidays and the best for the next year 2011.
I'd like your help please. I need to delete all the rows in the third column of my file, but without touching or changing the first and last value position. This is an example of my... (2 Replies)
Discussion started by: Gery
6. Shell Programming and Scripting
Hi,
This is a followup to my earlier post
him mno klm 20 76 . + . klm_mango unix_00000001;
alp fdc klm 123 456 . + . klm_mango unix_0000103;
her tkr klm 415 439 . + . klm_mango unix_00001043;
abc tvr klm 20 76 . + . klm_mango unix_00000001;
abc def klm 83 84 . + . klm_mango... (5 Replies)
Discussion started by: jacobs.smith
7. UNIX for Dummies Questions & Answers
Hi,
I would like to know how I can delete rows of a text file if, from the 3rd column onwards, there are only zeros.
Thanks in advance (7 Replies)
Discussion started by: fadista
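A sketch, assuming whitespace-separated columns: keep a row only if some field from the 3rd onward is non-zero. Note that rows with fewer than three fields are dropped too; adjust if that is not wanted.

```shell
# Scan fields 3..NF; print the row as soon as one non-zero field is found.
awk '{
  keep = 0
  for (i = 3; i <= NF; i++)
    if ($i != 0) { keep = 1; break }
  if (keep) print
}' file.txt
```

The comparison `$i != 0` is numeric for numeric-looking fields, so `0`, `0.0`, and `0E0` all count as zero.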
8. Shell Programming and Scripting
Each individual (row) has a genotype expressed for each SNP (column).
file1.txt
1 1 A G A T G T A A A A A A A A A A A A A A A A A A A A A
2 2 G A A A A A A A A A A A A A A A A A A A A A A A A A A
3 3 A A A A A A A A A A A A A A A A A A A A A A A A A A A
4 4 G A G T A T A A A A A A A A A A A... (3 Replies)
Discussion started by: johnkim0806
9. UNIX for Dummies Questions & Answers
I have an Output file which has the result
YYYY 95,77
YYYY
YYYY 95
YYYY 95
YYYY 95
YYYY 95
YYYY 95
YYYY 95
YYYY 95
YYYY 95
YYYY
YYYY
YYYY
YYYY
I would like to display the above info as a single line. The final output should be
YYYY 95 (3 Replies)
Discussion started by: priyanka.premra
10. Shell Programming and Scripting
Hi everyone,
I would appreciate it a lot if anyone can help me with a simple issue.
I have a data file, and I need to remove some rows that match a given condition.
So here is a part of my data file:
5,14,1,3,3,0,0,-0.29977188269E+01
5,16,1,4,4,0,0,0.30394279900E+02... (4 Replies)
Discussion started by: hayreter
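Since the post is truncated, the actual condition is unknown; as a sketch, assume rows should be dropped when the last comma-separated field is negative. awk's numeric comparison understands the `E+01` exponent notation in the sample data:

```shell
# Keep only rows whose last field is >= 0 (the condition is an assumption).
awk -F',' '$NF >= 0' data.csv
```

Swap `$NF >= 0` for whatever test the real condition needs, e.g. `$2 == 14` to match on the second field.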
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.