07-10-2010
I tried avoiding cat in the for loop and used head -<line no> | tail -1 instead; that also worked, but it is still too slow. I can't print a specific range because the line numbers aren't known in advance: I run a script to get the line number, and it differs from file to file.
I really don't know what the problem with it is!?
---------- Post updated at 09:11 PM ---------- Previous update was at 09:09 PM ----------
dr house, how many lines per file do you think would be good for splitting the fat file?
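For what it's worth, here is a minimal sketch of both approaches (the file path and line number below are made-up placeholders). head | tail re-reads the file from the top on every call, while awk can print line n and quit immediately, which is much faster inside a loop:

```shell
# demo data standing in for the real fat file (hypothetical path)
printf 'alpha\nbeta\ngamma\ndelta\n' > /tmp/bigfile.txt

n=3   # line number produced by your script; differs per file

# print line $n in a single pass and stop reading as soon as it is found
line=$(awk -v n="$n" 'NR == n { print; exit }' /tmp/bigfile.txt)
echo "$line"   # gamma

# alternative: split the fat file into fixed-size chunks first
split -l 2 /tmp/bigfile.txt /tmp/chunk_   # 2 lines per chunk: chunk_aa, chunk_ab
```

sed -n "${n}p; ${n}q" file does the same single-pass trick with sed.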
9 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi Friends,
Getting an error while processing a very large file using sqlloader...
The file is larger than 2 GB. Now need to change the compiler to 64-bit so that the file can be processed.
Is there any command for the same.
Thanks in advance. (1 Reply)
Discussion started by: Rohini Vijay
2. Shell Programming and Scripting
how to remove all zero byte files in a particular directory and also files that are more than 1GB. Please let me know (3 Replies)
Discussion started by: dsravan
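A sketch of one way to do this (the directory is a made-up stand-in, and -size +1G with -delete are GNU find extensions, so this is not portable to every Unix):

```shell
# hypothetical demo directory; substitute the real one
mkdir -p /tmp/cleanup
: > /tmp/cleanup/empty.log          # zero-byte file
echo data > /tmp/cleanup/keep.log   # small non-empty file

# delete zero-byte files and files over 1 GB in this directory only
find /tmp/cleanup -maxdepth 1 -type f \( -size 0 -o -size +1G \) -delete
ls /tmp/cleanup   # keep.log
```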
3. UNIX for Advanced & Expert Users
We are experiencing a problem on a lengthy data transfer by FTP through a firewall. Since there are two ports in use on an FTP transfer (data and control), one sits idle while the other is transferring data. The idle port (control) will get timed out and the data transfer won't know that it's... (3 Replies)
Discussion started by: rprajendran
4. UNIX for Dummies Questions & Answers
Hi all;
I'm having a problem when I want to list a large number of files in the current directory using find together with the prune option.
First I used this command, but it lists all the files including those in subdirectories:
find . -name "*.dat" | xargs ls -ltr
Then I modified the command... (2 Replies)
Discussion started by: ashikin_8119
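The usual fix is to stop find from descending; a sketch with made-up paths (not necessarily the solution that thread settled on):

```shell
# hypothetical demo tree; /tmp/findtest stands in for the real directory
mkdir -p /tmp/findtest/sub
touch /tmp/findtest/a.dat /tmp/findtest/sub/b.dat

# GNU find: simply refuse to descend past the starting directory
find /tmp/findtest -maxdepth 1 -name '*.dat'

# portable version: prune every subdirectory of the starting point
(cd /tmp/findtest && find . ! -name . -type d -prune -o -name '*.dat' -print)
```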
5. UNIX for Dummies Questions & Answers
I have a large file, around 570 GB, that I want to copy to tape. However, my tape drive will load only up to 500 GB. I don't have enough space on disk to compress it before copying to tape. Can I compress and tar to tape in one command without writing a compressed disk file?
Any suggestions... (8 Replies)
Discussion started by: iancrozier
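Yes: pipelines never touch the disk. A sketch, where /tmp/tape.img stands in for the tape device (the device name is an assumption; substitute your drive's node, e.g. /dev/rmt/0 on Solaris), and the -C option assumes GNU tar:

```shell
# /tmp/tape.img stands in for the real tape device node
TAPE=/tmp/tape.img
mkdir -p /tmp/demo && echo hello > /tmp/demo/file.txt

# compress and write in one pipeline; no intermediate compressed file on disk
tar cf - -C /tmp demo | gzip -c > "$TAPE"

# verify / restore the same way, in reverse
gzip -dc < "$TAPE" | tar tf -
```

On Solaris tar without -C, use (cd /tmp && tar cf - demo) instead.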
6. UNIX for Dummies Questions & Answers
Hi, I am a torrent maniac and I use Transmission.
All was well, but Nautilus began to show problems while I was running Transmission. Its situation became worse and worse.
Now, when I boot I can hardly open a Nautilus window and browse my files. It will "stack" in seconds for sure!
I... (2 Replies)
Discussion started by: hakermania
7. Shell Programming and Scripting
Hello everyone!
I have 2 types of files in the following format:
1) *.fa
>1234
...some text...
>2345
...some text...
>3456
...some text...
.
.
.
.
2) *.info
>1234 (7 Replies)
Discussion started by: ad23
8. Solaris
Hello everyone. Need some help copying a filesystem. The situation is this: I have an Oracle DB mounted on /u01 and need to copy it to /u02. /u01 is 500 GB and /u02 is 300 GB. The space used on /u01 is 187 GB. This is running on Solaris 9 and both filesystems are UFS.
I have tried to do it using:... (14 Replies)
Discussion started by: dragonov7
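The classic approach is a tar pipe; a sketch with throwaway directories standing in for /u01 and /u02 (the real copy would run as root so ownership is preserved):

```shell
# /tmp/src and /tmp/dst stand in for /u01 and /u02 (hypothetical paths)
mkdir -p /tmp/src/sub /tmp/dst
echo data > /tmp/src/sub/file.txt

# classic tar pipe: copies the tree, preserving modes (and ownership as root)
(cd /tmp/src && tar cf - .) | (cd /tmp/dst && tar xf -)
ls /tmp/dst/sub   # file.txt
```

On Solaris UFS specifically, ufsdump 0f - /u01 | (cd /u02 && ufsrestore rf -) is the filesystem-aware equivalent.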
9. Shell Programming and Scripting
Hello everyone,
I have two matrices of the same size. I need to re-calculate the numbers in matrix A according to the percentages in matrix B
it is like
matrix A is
10.00 20.00 30.00 40.00
60.00 70.00 80.00 90.00
20.00 30.00 80.00 50.00
matrix B is
00.08 00.05 ... (2 Replies)
Discussion started by: miriammiriam
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
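       The core computation is easy to sketch: after sorting the object ids, the largest prefix shared between any two ids is always achieved by some adjacent pair. A toy version in awk, with three made-up hex ids standing in for real 160-bit SHA-1s (this is an illustration of the idea, not bup's actual implementation):

```shell
# toy ids standing in for real SHA-1 hashes (lowercase hex)
best=$(printf '%s\n' deadbeef deadbeee cafef00d | sort | awk '
  # number of leading bits two lowercase-hex strings share
  function prefbits(a, b,   i, x, y, n, bit) {
    n = 0
    for (i = 1; i <= length(a); i++) {
      x = index("0123456789abcdef", substr(a, i, 1)) - 1
      y = index("0123456789abcdef", substr(b, i, 1)) - 1
      if (x == y) { n += 4; continue }      # whole nibble matches
      for (bit = 3; bit >= 0; bit--)        # find highest differing bit
        if (int(x / 2^bit) != int(y / 2^bit)) break
      return n + (3 - bit)
    }
    return n
  }
  NR > 1 { m = prefbits(prev, $0); if (m > best) best = m }
  { prev = $0 }
  END { print best }')
echo "$best"   # 31: deadbeee and deadbeef share 31 leading bits
```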
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files; use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.
Bup unknown- bup-margin(1)