Hi,
I have a large file (CSV format) that I need to split into 2 files. The file looks something like this:
Original_file.txt
first name, family name, address
a, b, c,
d, e, f,
and so on for over 100,000 lines.
I need to create two files from this one file. The condition is I need to ensure... (4 Replies)
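Since the condition is cut off above, only a generic sketch is possible: the awk below splits the file roughly in half by line count while repeating the header line in both halves; the output names are placeholders.
total=$(wc -l < Original_file.txt)
half=$(( total / 2 ))
awk -v half="$half" '
    NR == 1        { print > "part1.csv"; print > "part2.csv"; next }  # header goes into both files
    NR - 1 <= half { print > "part1.csv"; next }
                   { print > "part2.csv" }' Original_file.txt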
Hi,
I have to split a large file whose contents look like this:
Input file name_file.txt
00001|AAAA|MAIL|DATEOFBIRTHT|.......
00001|AAAA|MAIL|DATEOFBIRTHT|.......
00002|BBBB|MAIL|DATEOFBIRTHT|.......
00002|BBBB|MAIL|DATEOFBIRTHT|.......
00003|CCCC|MAIL|DATEOFBIRTHT|.......... (1 Reply)
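The question is cut off, but the sample suggests the records are grouped by the ID in the first pipe-delimited field. A minimal awk sketch, assuming one output file per ID is wanted:
awk -F'|' '{ print > ($1 ".txt") }' name_file.txt
With very many distinct IDs, awk can run out of open file handles; closing each file when the ID changes would be needed in that case.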
I have a 3 GB text file that I would like to split. How can I do this?
It's a giant comma-separated list of numbers. I would like to make it into about 20 files of ~100 MB each, with a custom header and footer. The file can only be split on commas, but they're plentiful.
Something like... (3 Replies)
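One possible sketch with GNU awk (gawk) treats the comma as the record separator and opens a new ~100 MB part whenever the current one fills up; HEADER and FOOTER stand in for the custom text, and the file names are assumptions.
gawk -v RS=',' -v max=100000000 '
    function newpart() {
        if (out != "") { printf "\nFOOTER\n" > out; close(out) }   # finish the previous part
        out = sprintf("part_%03d.txt", ++n)
        printf "HEADER\n" > out
        bytes = 0
    }
    BEGIN { newpart() }
    {
        if (bytes + length($0) + 1 > max) newpart()
        printf "%s,", $0 > out                                     # only ever split at commas
        bytes += length($0) + 1
    }
    END { printf "\nFOOTER\n" > out }' bigfile.csv
Each part ends with a trailing comma before the footer; trimming that is left out of the sketch.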
Hi Gurus,
I need to cut a single record in the file (asdf) into multiple records based on the number of bytes (44 characters), so every record will have 44 characters. All the records should be in the same file; to each of these lines I need to add the folder (<date>) name.
I have a dir. in which... (20 Replies)
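The rest of the post is cut off, so only a rough sketch: fold breaks the single long record into 44-character lines and awk appends the directory name. That the enclosing <date> directory is the current working directory is an assumption.
dir=$(basename "$PWD")                        # assumed: the <date> folder is the working directory
fold -w 44 asdf | awk -v d="$dir" '{ print $0, d }' > asdf.tmp && mv asdf.tmp asdf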
Hi All,
I'm a newbie here, and I'm just wondering how to delete a single record from a large file in Unix.
ex.
file1.txt has 1000 records
nikki1
nikki2
nikki3
What I want to do is delete the nikki2 record in file1.txt. Is it possible?
Please advise,
Thanks, (3 Replies)
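Two common approaches, assuming the record is exactly the line nikki2 (GNU sed shown for the in-place variant):
grep -v -x 'nikki2' file1.txt > file1.tmp && mv file1.tmp file1.txt   # copy without the record
sed -i '/^nikki2$/d' file1.txt                                        # or delete it in place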
Hi Friends,
source
....
col1,col2,col3
a,b,1;2;3
Here the column delimiter is a comma (,).
Here we don't know the maximum length of col3; now we have 1;2;3, but next time I may receive 1;2;3;4;5;etc...
required output
..............
col1,col2,col3
a,b,1
a,b,2
a,b,3
Please give me... (5 Replies)
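A minimal awk sketch for the transformation shown above, assuming comma-delimited input with the semicolon-separated list always in the third column; the file name source.csv is a placeholder.
awk -F',' '
    NR == 1 { print; next }                          # pass the header through unchanged
    { n = split($3, v, ";")
      for (i = 1; i <= n; i++) print $1 "," $2 "," v[i] }' source.csv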
I need to amend the code below so that it reads a "blacklist" before the "print" statement; if "substr($1,1,6)" is found in the blacklist, the record should be ignored and processing should continue. The code is from an awk script that is called from a shell script, which passes in the input values.
BEGIN { "date... (5 Replies)
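Since the script itself is cut off, here is only the general awk pattern: load the blacklist into an array in the BEGIN block, then skip a record before the print when its key is present. The file name blacklist.txt is an assumption.
BEGIN {
    while ((getline key < "blacklist.txt") > 0) bad[key] = 1   # read the blacklist once
    close("blacklist.txt")
}
{
    if (substr($1,1,6) in bad) next                            # blacklisted: skip this record
    print
}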
Hi,
I have a tab-delimited file that has multiple store_ids in the first column, separated by pipes. I want to split the file on the basis of store_id (separating one record into two records).
I tried a few options like the ones below using split, awk, etc., but was not able to get proper output. Can... (1 Reply)
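A possible awk sketch, assuming tab-delimited input and that every pipe-separated store_id in column 1 should become its own output record with the remaining columns repeated; the file name input.txt is a placeholder.
awk -F'\t' -v OFS='\t' '
    { n = split($1, ids, "|")
      for (i = 1; i <= n; i++) { $1 = ids[i]; print } }' input.txt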
Hi,
I have received a file which is 20 GB. We would like to split it into 4 equal parts and process them to avoid memory issues.
If the record delimiter were a Unix newline, I could use the split command with either the -l or -b option.
The problem is that the line terminator is |##|
How to use... (5 Replies)
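One way to sketch this with GNU awk (gawk), which allows a multi-character record separator: count the |##|-terminated records in a first pass, then write them sequentially into four parts. The input and output names are placeholders.
total=$(gawk 'BEGIN { RS = "[|]##[|]" } END { print NR }' bigfile.dat)
per=$(( (total + 3) / 4 ))
gawk -v per="$per" 'BEGIN { RS = "[|]##[|]"; ORS = "|##|" }
    { part = int((NR - 1) / per) + 1; print > ("part" part ".dat") }' bigfile.dat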
Trying to split a 35 GB file into 1000 MB parts. My research shows I should use this: split -b 1000m file.txt, but the return is "split: cannot open 'crunch1.txt' for reading: No such file or directory". So I tried split -b 1000m Documents/Wordlists/file.txt and I get nothing other than the cursor just... (3 Replies)
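That error usually means split cannot find the input file from the current directory, and on success split prints nothing at all, it just writes the pieces (xaa, xab, ...) into the current directory. A sketch with an explicit path and an output-name prefix (the prefix is an assumption):
split -b 1000m "$HOME/Documents/Wordlists/file.txt" file_part_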
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.