Good points, Don Cragun; my assumption was that the value in the data line was less than the number of lines, and that any additional lines needed to be discarded.
In my solution, any processing could be done in the NR <= R block.
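Something like this minimal sketch, assuming the first data line carries the expected line count R (the filename is illustrative):

  awk '
  NR == 1     { R = $1; next }   # first line holds the expected number of data lines
  NR - 1 <= R { print }          # lines within the count: any per-line processing goes here
  ' datafile                     # anything beyond R data lines is silently discarded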
The O.P. said the output "can be directed to a gnuplot ot (sic) output to a file.", but didn't specify any options for gnuplot, didn't specify filenames, and didn't say whether the headers should be included in the files or stripped from them. There is no gnuplot man page in the Man Pages section of this forum, but some references I found say that it is very picky about having its input in tab-separated fields (and the data lines in the sample input contain no tabs). Should all the data be sent to one instance of gnuplot, or should the data under each header be sent to a separate instance?
I don't think I can do much more until we get a clarification on the requirements.
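That said, if the intent turns out to be one file (and one gnuplot instance) per header, a sketch along these lines could be adapted; the header pattern, filenames, and plot command are all assumptions:

  awk '
  /^[A-Za-z]/ { close(out); out = $1 ".dat"; next }   # assumed: a header line starts with a letter
  out != ""   { gsub(/[ \t]+/, "\t"); print > out }   # tab-separate fields, as gnuplot reportedly prefers
  ' input.txt

  for f in *.dat; do
      gnuplot -e "set terminal png; set output '${f%.dat}.png'; plot '$f' with lines"
  done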
I have a large list of filenames from an Excel sheet, which I then translate into a simple text file. I'd like to use this list, which contains various file extensions, to archive these files and then remove them, recursing through multiple directories and subdirectories. So far, it looks like... (5 Replies)
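A hedged sketch of one way to do the archive-then-delete step, assuming the list is one path per line in filelist.txt and GNU tar/xargs are available (the archive name is illustrative):

  # bundle every file named in the list, then remove the originals only on success
  tar -czf archive.tar.gz --files-from=filelist.txt && \
      xargs -d '\n' rm -f -- < filelist.txt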
Hello everyone,
This is my first post and it already comes with a request, but if I could handle the problem myself I wouldn't be bothering anyone.
Namely:
I have a 'file' containing single, varying words, one per line, i.e.:
test1
tekttw
resst
.... and so on.
My... (6 Replies)
Hi all,
I have this file with some user data.
Example:
$cat myfile.txt
FName|LName|Gender|Company|Branch|Bday|Salary|Age
aaaa|bbbb|male|cccc|dddd|19900814|15000|20|
eeee|asdg|male|gggg|ksgu|19911216|||
aara|bdbm|male|kkkk|acke|19931018||23|
asad|kfjg|male|kkkc|gkgg|19921213|14000|24|... (4 Replies)
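The question is cut off above, but judging from the sample (empty Salary and Age fields), one plausible task is flagging incomplete records; a hedged awk sketch, assuming that reading:

  # print data rows where Salary (field 7) or Age (field 8) is empty
  awk -F'|' 'NR > 1 && ($7 == "" || $8 == "")' myfile.txt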
I am attempting to insert multiple lines of text into a specific place in a text file based on the lines above or below it.
For example, Here is a portion of a zone file.
IN NS ns1.domain.tld.
IN NS ns2.domain.tld.
IN ... (2 Replies)
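For the zone-file case, one hedged approach is to print each line and emit the new records after the anchor line; the inserted records here are made up for illustration:

  # copy the file through, adding two records after the ns2 NS line
  awk '{ print }
       /IN NS ns2\.domain\.tld\./ {
           print "        IN A 192.0.2.10"
           print "        IN MX 10 mail.domain.tld."
       }' zonefile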
Hi,
I am trying to extract lines from a text file, given a second text file containing the line numbers to be extracted from the first file. How do I go about doing this? Thanks! (1 Reply)
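This one has a classic two-pass awk answer; assuming the line numbers sit one per line in linenums.txt:

  # pass 1 stores the wanted line numbers; pass 2 prints lines whose number was stored
  awk 'NR == FNR { want[$1]; next } FNR in want' linenums.txt data.txt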
Hello, I'm having a little difficulty writing a shell script for a few simple tasks.
First, I have two files, "file1.txt" and "file2.txt", and I want to read and compare the last line of each file. The files look like this:
File1.txt
File2.txt
After comparing the two lines I would... (2 Replies)
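A hedged sketch of the comparison step (what happens after the comparison is cut off above):

  # read the last line of each file and compare them
  last1=$(tail -n 1 file1.txt)
  last2=$(tail -n 1 file2.txt)

  if [ "$last1" = "$last2" ]; then
      echo "last lines match"
  else
      echo "last lines differ"
  fi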
Hello,
I have a file ff.txt that looks as follows:
*ABNA.txt
356
24
36
112
*AC24.txt
457
458
321
2
ABNA.txt and AC24.txt are the files in the folder named foo1. Based on the numbers in the ff.txt file, I want to extract the lines from the corresponding files in the foo1 folder and... (2 Replies)
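Assuming the numbers under each *name are line numbers to pull from that file, a small shell loop could drive the extraction (a sketch only):

  # lines beginning with * name a file in foo1; the numbers that follow are line numbers
  while read -r token; do
      case $token in
          \**) file="foo1/${token#\*}" ;;        # new target file: strip the leading *
          '')  ;;                                 # skip blank lines
          *)   sed -n "${token}p" "$file" ;;      # print that line from the current file
      esac
  done < ff.txt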
Hi,
I have a huge file that has data something like shown below:
huge_file.txt
start regexp
Name=Name1
Title=Analyst
Address=Address1
Department=Finance
end regexp
some text
some text
start regexp
Name=Name2
Title=Controller
Address=Address2
Department=Finance
end regexp (7 Replies)
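The request is truncated, but pulling out the delimited blocks is the obvious first step, and awk's range pattern handles it; filtering on a field (Department, here) is an assumption about the goal:

  # print every block from "start regexp" through "end regexp", inclusive
  awk '/start regexp/,/end regexp/' huge_file.txt

  # or buffer each block and print only those matching some condition
  awk '
  /start regexp/       { buf = ""; keep = 0 }
  { buf = buf $0 ORS }
  /Department=Finance/ { keep = 1 }
  /end regexp/         { if (keep) printf "%s", buf }
  ' huge_file.txt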
Hi all,
I'm trying to do this using shell/bash with sed/awk/grep.
I have two files, one containing one column, the other containing multiple columns (comma delimited).
file1.txt
abc12345
def12345
ghi54321
...
file2.txt
abc1,text1,texta
abc,text2,textb
def123,text3,textc
gh,text4,textd... (6 Replies)
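The question is cut off, but the sample hints that file2's first column holds prefixes of the keys in file1; assuming the goal is to print file2 rows whose first field is a prefix of some file1 key, a sketch:

  # load file1 keys, then keep file2 lines whose first field starts one of them
  awk -F',' '
  NR == FNR { keys[$0]; next }                            # file1: one key per line
  { for (k in keys) if (index(k, $1) == 1) { print; break } }
  ' file1.txt file2.txt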
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
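A hedged sketch of such a periodic check; the threshold, and the parsing of the bare number from the output, are assumptions, not part of bup itself:

  #!/bin/sh
  # warn when the matching-prefix-bit count creeps toward 160
  THRESHOLD=100

  bits=$(bup margin | awk '/^[0-9]+$/ { print; exit }')   # first all-digit line is the margin
  if [ -n "$bits" ] && [ "$bits" -gt "$THRESHOLD" ]; then
      echo "warning: bup margin reports $bits shared prefix bits" >&2
  fi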
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.