If you print fields twice, you shouldn't be surprised to see them twice in the output. Not understanding what you really want, given your sparse spec and non-matching input/output files, how about
Hi,
One silly question. I would like to run the statement below and append its output to a file. I used the code below; however, it does not work. Can anyone please tell me what mistake I have made?
awk '
{ for (i=1;i<=563;i++)
print i
}'>>output.txt
Thanks.
-Jason (1 Reply)
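The pattern-action pair in the program above runs once per input line, and the program is given no input, so the loop never fires. A minimal sketch of the usual fix is to move the loop into a BEGIN block, which runs before any input is read:

```shell
# With no input, a plain { action } never executes; BEGIN runs unconditionally.
awk 'BEGIN { for (i = 1; i <= 563; i++) print i }' >> output.txt
```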
My input file contains two columns, a and b, separated by a pipe:
a | b
123|456
323|455
and
file xyz contains other info about a and b.
Now I want to print as follows:
a | b | "info from xyz"
but "info from xyz" might be more than one line, and I want to keep the format to three columns.
How to do it?... (3 Replies)
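One possible sketch, under an assumed layout for xyz (hypothetical: each line is "a|info", keyed by the value of column a, which the post does not specify): collect all info lines per key in a first pass, join them with a separator so each output row stays three columns, then print the join in a second pass.

```shell
printf '123|first\n123|second\n323|only\n' > xyz.txt
printf '123|456\n323|455\n' > input.txt
# Pass 1 (NR==FNR): accumulate info per key from xyz.txt, joining multi-line
# info with " / " so the output keeps exactly three columns per row.
# Pass 2: print a | b | joined info for each input row.
awk -F'|' '
    NR == FNR { info[$1] = (info[$1] == "" ? $2 : info[$1] " / " $2); next }
    { print $1 " | " $2 " | " info[$1] }
' xyz.txt input.txt
```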
I want to print the 1st field in a comma-separated file in lower case and leave the rest as they are.
I tried this
nawk -F"," '{print tolower($0)}' OFS="," file
but this converts the whole line to lower case; I just want the first column converted.
The below doesn't work because in... (11 Replies)
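The attempt above lowercases $0 (the whole record). A sketch of the usual approach: lowercase only $1 and reassign it, which makes awk rebuild the record with OFS, so FS and OFS are both set to "," in BEGIN:

```shell
printf 'ABC,DEF,GHI\nJkL,MnO,PqR\n' > file
# Reassigning $1 rebuilds $0 joined by OFS; only the first field changes.
awk 'BEGIN { FS = OFS = "," } { $1 = tolower($1); print }' file
```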
Hello all,
this should really be easy for you... I need AWK to print column maxima for each column of such input:
Input:
1 2 3 1
2 1 1 3
2 1 1 2
Output should be:
2 2 3 3
This does the sum, but I need max instead:
{ for(i=1; i<=NF; i++)
      sum[i] += $i }
END {for(i=1; i in sum;... (3 Replies)
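Swapping the running sum for a per-column maximum gives one possible sketch; seeding on the first row (NR == 1) means negative values are handled correctly too:

```shell
# Keep the largest value seen per column; seed from row 1, print in END.
printf '1 2 3 1\n2 1 1 3\n2 1 1 2\n' |
awk '
    { for (i = 1; i <= NF; i++)
          if (NR == 1 || $i > max[i]) max[i] = $i }
    END { for (i = 1; i <= NF; i++) printf "%s%s", max[i], (i < NF ? OFS : ORS) }
'
```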
Hi, I'm using a while-loop in an awk script. If it matches a regular expression, it prints a line. Unfortunately, each line that is printed in this loop is followed by an extra character, "1".
While-statement extracted from my script:
getline temp;
while (temp ~ /.* x .*/) print temp... (3 Replies)
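A likely cause, guessed from the truncated snippet: writing something like "print temp getline temp" as one statement makes awk concatenate temp with getline's return value, which is 1 on success. Keeping print and getline as separate statements removes the stray 1. A sketch of the loop reading from an explicit file:

```shell
printf 'a x b\nno match here\nc x d\n' > lines.txt
# getline's return value is only used as the loop condition, never printed.
awk 'BEGIN {
    while ((getline temp < "lines.txt") > 0)
        if (temp ~ /.* x .*/)
            print temp
}'
```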
Hi everyone,
Ok here's the scenario.
I have a control file like this.
component1,file1,file2,file3,file4,file5
component2,file1,file2,file3,file4,file5
I want to do a while loop here to read all the files for each component.
file_count=2
while ]
do
file_name=`cat list.txt | grep... (2 Replies)
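Assuming each control line has exactly five files after the component name, as in the sample, a sketch that avoids the cat | grep pipeline and the broken "while ]" test: let read split each line on commas directly.

```shell
printf 'component1,file1,file2,file3,file4,file5\ncomponent2,file1,file2,file3,file4,file5\n' > list.txt
# IFS=, applies only to this read; each field lands in its own variable.
while IFS=, read -r component f1 f2 f3 f4 f5; do
    for file_name in "$f1" "$f2" "$f3" "$f4" "$f5"; do
        echo "$component: $file_name"
    done
done < list.txt
```

If the number of files per line varies, the trailing fields could be collected into one variable and split in a second step instead.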
Hi, I have data of the following type,
chr1 234 678 39 852 638 abcd 7895
chr1 526 326 33 887 965 kilj 5849
Now, I would like to have something like this
chr1 234 678 39 852 638 abcd 7895 <a href="http://unix.com/thread=chr1:234-678">Link</a>
chr1 526 326 33 887 965 kilj 5849 <a... (5 Replies)
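Since the link is built entirely from fields 1-3 of each row, one possible sketch appends it as a new last column with printf:

```shell
printf 'chr1 234 678 39 852 638 abcd 7895\nchr1 526 326 33 887 965 kilj 5849\n' > data.txt
# Keep the whole original line ($0) and append the <a> tag built from $1-$3.
awk '{ printf "%s <a href=\"http://unix.com/thread=%s:%s-%s\">Link</a>\n", $0, $1, $2, $3 }' data.txt
```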
I was trying to simplify this from what I'm actually doing, but I started getting even more confused so I gave up. Here is the content of my input file:
Academic year,Term,Course name,Period,Last name,Nickname
2012-2013,First Semester,English 12,7th Period,Davis,Lucille
When I do this:
... (3 Replies)
Experts,
I have a file containing data in the following manner:
1 2480434.4 885618.6 0.00 1948.00
40.00 1952.00
... (6 Replies)
Discussion started by: Amit.saini333
6 Replies
bup-margin
bup-margin(1) General Commands Manual bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
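The scan the DESCRIPTION outlines, finding the largest number of prefix bits shared between any two object ids, can be illustrated with a toy sketch (not bup's actual implementation; the ids here are short hex strings rather than SHA-1 hashes). The key observation is that after sorting, the longest shared prefix must occur between lexicographically adjacent ids, so one linear pass suffices:

```shell
# Expand each hex id to a bit string, then compare each id only with its
# sorted predecessor, tracking the longest common bit prefix seen.
printf '%s\n' deadbeef deadbef0 0badcafe | sort | awk '
function hex2bits(h,    i, c, b) {
    for (i = 1; i <= length(h); i++) {
        c = index("0123456789abcdef", substr(h, i, 1)) - 1
        b = b sprintf("%d%d%d%d", int(c/8)%2, int(c/4)%2, int(c/2)%2, c%2)
    }
    return b
}
{
    bits = hex2bits($1)
    if (NR > 1) {
        n = 0
        while (n < length(bits) && substr(bits, n+1, 1) == substr(prev, n+1, 1))
            n++
        if (n > max) max = n
    }
    prev = bits
}
END { print max " matching prefix bits" }'
```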
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.