Hi,
I have a small requirement: changing rows to columns. The example.txt below contains info as shown
Name:Person1
Age:30
Name:Person2
Age:40
Name:Person3
Age:50
I want it displayed as shown below
Name:Person1 Age:30
Name:Person2 Age:40
Name:Person3 Age:50
I... (4 Replies)
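A minimal awk sketch for this, assuming example.txt strictly alternates Name: and Age: lines:

    $ awk '/^Name:/ { printf "%s ", $0; next }   # hold the Name line, no newline
           { print }                             # the Age line closes the record
          ' example.txt
    Name:Person1 Age:30
    Name:Person2 Age:40
    Name:Person3 Age:50

With that strict alternation, paste -d' ' - - < example.txt produces the same result.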
Hi
I have an input file in the format
ABC,111,2008Q2, 49K
ABC,111,2008Q3, 0K
ABC,111,2008Q4, 0K
ABC,222,2008Q2, 49K
ABC,222,2008Q3, 0K
ABC,222,2008Q4, 0K
XYZ,111,2008Q2, 49K
XYZ,111,2008Q3, 0K
XYZ,111,2008Q4, 0K
XYZ,222,2008Q2, 49K
XYZ,222,2008Q3, 0K
XYZ,222,2008Q4, 0K
The output file... (3 Replies)
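The expected output was cut off in the post, so this is only a sketch, assuming the goal is one line per key pair (fields 1 and 2) with the quarterly values appended in file order (infile is a placeholder name):

    $ awk -F',' '{
          key = $1 "," $2
          gsub(/^ /, "", $4)                   # strip the stray space before the value
          if (!(key in row)) order[++n] = key  # remember first-seen order
          row[key] = row[key] "," $4
      } END {
          for (i = 1; i <= n; i++) print order[i] row[order[i]]
      }' infile
    ABC,111,49K,0K,0K
    ABC,222,49K,0K,0K
    XYZ,111,49K,0K,0K
    XYZ,222,49K,0K,0K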
Hi, I have an input file and I want to transpose it, but I need to take care that if any field is missing for a record, it should be populated with a space for that field - using a shell script.
INFILE
----------
emp=1
sal=2
loc=abc
emp=2
sal=21
sal=22
loc=xyz
emp=5
loc=abc
OUTFILE... (10 Replies)
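The expected OUTFILE was cut off too, so the sketch below assumes one output line per record (a record starts at each emp= line) with columns emp, sal, loc, a blank where a field is missing, and repeated fields (the two sal= lines) joined with a space:

    $ awk -F'=' '
          function flush() {
              if (seen) print rec["emp"], rec["sal"], rec["loc"]
              split("", rec); seen = 0         # reset for the next record
          }
          $1 == "emp" { flush() }              # each emp= line opens a record
          { rec[$1] = (rec[$1] == "" ? $2 : rec[$1] " " $2); seen = 1 }
          END { flush() }
      ' INFILE
    1 2 abc
    2 21 22 xyz
    5  abc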
I have done a couple of searches on this and have found many threads, but I don't think I've found one that is useful to me - probably because I have a very basic comprehension of Perl and beginner's shell, so trying to manipulate a script already posted may be beyond my capabilities....
Anyway - I... (26 Replies)
I have this text file with a very large number of columns (10,000+) and I want to move the first column to the position of the sixth column so that the text file looks like this:
Before cutting and pasting
ID Family Mother Father Trait Phenotype
aaa bbb ... (5 Replies)
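A sketch of the move in awk, assuming whitespace-separated columns and that "position of the sixth column" means old column 1 lands in slot 6 while columns 2-6 slide left one place:

    $ awk '{
          first = $1
          for (i = 1; i < 6; i++) $i = $(i + 1)   # shift columns 2-6 left
          $6 = first                              # old column 1 becomes column 6
          print
      }' infile > outfile

Columns 7 and beyond keep their values; note that awk rebuilds each line with single-space separators once any field is assigned.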
Hi All,
I have two sets of files.
One set with extension .txt. This set has file names with numbers like these: 1.txt, 2.txt, 3.txt, up until exactly 100.txt.
The .txt files look like these:
0.38701788 93750
0.38622013 94456
0.38350296 94440
0.38282126 94057
0.38282126 94439
0.35847232... (1 Reply)
Hello,
I have a huge tab-delimited file with around 40,000 columns and 900 rows, and I want to convert columns to rows.
The INPUT file looks like this.
The first line is the header of the file.
ID marker1 marker2 marker3 marker4
b1 A G A C ... (5 Replies)
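A stock awk transpose sketch; it holds the entire file in memory, and at roughly 40,000 x 900 = 36 million cells that can take several GB of RAM, so test on a slice first (INPUT and OUTPUT are placeholder names):

    $ awk -F'\t' '{
          if (NF > nf) nf = NF
          for (i = 1; i <= NF; i++) cell[i, NR] = $i
      } END {
          for (i = 1; i <= nf; i++) {
              line = cell[i, 1]
              for (j = 2; j <= NR; j++) line = line "\t" cell[i, j]
              print line
          }
      }' INPUT > OUTPUT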
Hi Friends,
I have come across some files where some of the columns do not have data.
Key, Data1,Data2,Data3,Data4,Data5
A,5,6,,10,,
A,3,4,,3,,
B,1,,4,5,,
B,2,,3,4,,
As seen in the above data, the Data5 column does not have any row filled in. So remove only that column (here Data5) and... (4 Replies)
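A two-pass awk sketch, assuming the goal is to drop every column that is empty in all data rows; the file name is passed twice (once per pass), and the header row is skipped in the first pass so a column name alone does not count as data:

    $ awk -F',' '
          NR == FNR { if (FNR > 1)
                          for (i = 1; i <= NF; i++) if ($i != "") used[i] = 1
                      next }
          {
              out = ""
              for (i = 1; i <= NF; i++)
                  if (used[i]) out = (out == "" ? "" : out ",") $i
              print out
          }
      ' file file
    Key, Data1,Data2,Data3,Data4
    A,5,6,,10
    A,3,4,,3
    B,1,,4,5
    B,2,,3,4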
Hi - I have a file "file1" in the below format. It's a comma-separated file. Note that each string is enclosed in double quotes.
"abc","-0.15","10,000.00","IJK"
"xyz","1,000.01","1,000,000.50","OPR"
I want the result as:
"abc","-0.15","10000.00","IJK"
"xyz","1,000.01","1000000.50","OPR"
I... (8 Replies)
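Since the sample result strips the thousands commas only inside the third quoted field, that is all this sketch touches; it uses FPAT, a gawk extension that defines fields by what they match, so each quoted string becomes one field regardless of embedded commas:

    $ gawk 'BEGIN { FPAT = "\"[^\"]*\""; OFS = "," }
            { gsub(/,/, "", $3); print $1, $2, $3, $4 }' file1
    "abc","-0.15","10000.00","IJK"
    "xyz","1,000.01","1000000.50","OPR"

If every quoted numeric field should lose its commas, loop the gsub over all fields instead of just $3.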
bup-margin(1)              General Commands Manual              bup-margin(1)

NAME
       bup-margin - figure out your deduplication safety margin

SYNOPSIS
       bup margin [options...]

DESCRIPTION
       bup margin iterates through all objects in your bup repository,
       calculating the largest number of prefix bits shared between any two
       entries. This number, n, identifies the longest subset of SHA-1 you
       could use and still encounter a collision between your object ids.

       For example, one system that was tested had a collection of 11 million
       objects (70 GB), and bup margin returned 45. That means a 46-bit hash
       would be sufficient to avoid all collisions among that set of objects;
       each object in that repository could be uniquely identified by its
       first 46 bits.

       The number of bits needed seems to increase by about 1 or 2 for every
       doubling of the number of objects. Since SHA-1 hashes have 160 bits,
       that leaves 115 bits of margin. Of course, because SHA-1 hashes are
       essentially random, it's theoretically possible to use many more bits
       with far fewer objects.

       If you're paranoid about the possibility of SHA-1 collisions, you can
       monitor your repository by running bup margin occasionally to see if
       you're getting dangerously close to 160 bits.
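       For "occasionally", a crontab sketch (the schedule and log path here
       are placeholders, not part of bup):

              # run bup margin weekly and keep the output for trending
              0 3 * * 0  bup margin >> /var/log/bup-margin.log 2>&1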
OPTIONS
       --predict
              Guess the offset into each index file where a particular object
              will appear, and report the maximum deviation of the correct
              answer from the guess. This is potentially useful for tuning an
              interpolation search algorithm.

       --ignore-midx
              Don't use .midx files, use only .idx files. This is only really
              useful when used with --predict.
EXAMPLE
       $ bup margin
       Reading indexes: 100.00% (1612581/1612581), done.
       40
       40 matching prefix bits
       1.94 bits per doubling
       120 bits (61.86 doublings) remaining
       4.19338e+18 times larger is possible

       Everyone on earth could have 625878182 data sets
       like yours, all in one repository, and we would
       expect 1 object collision.

       $ bup margin --predict
       PackIdxList: using 1 index.
       Reading indexes: 100.00% (1612581/1612581), done.
       915 of 1612581 (0.057%)
SEE ALSO
       bup-midx(1), bup-save(1)

BUP
       Part of the bup(1) suite.

AUTHORS
       Avery Pennarun <apenwarr@gmail.com>.