I have a text file on Unix with a layout like this:
Column 1 - 1-12
Column 2 - 13-39
Column 3 - 40-58
Column 4 - 59-85
Column 5 - 86-120
Column 6 - 121-131
The file also has a header on the first 6 lines of each page. Each page is 51 lines long. So I want to remove the header from each... (30 Replies)
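A minimal awk sketch for the header-stripping step described above, assuming pages are exactly 51 lines with the header on lines 1-6 of each page (report.txt is a placeholder name):

awk '((FNR - 1) % 51) >= 6' report.txt > stripped.txt

FNR is the line number within the file; the modulo test keeps only lines 7-51 of every 51-line page.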
Hi,
I have a typical situation. I have 4 files with different headers (the number of headers is variable).
I need to produce a merged file whose header combines the headers from all files (common columns should appear only once).
For example -
File 1
H1|H2|H3|H4
11|12|13|14
21|22|23|23... (1 Reply)
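Since the thread is cut off, here is one hedged sketch of the header-union idea, assuming pipe-delimited files whose first line is always the header. The file list is passed twice; the pass=2 assignment between the two lists is plain awk argument handling, and deleting a whole array with delete map needs a modern awk (gawk, mawk, or nawk):

awk -F'|' '
FNR == 1 && pass != 2 {      # pass 1: collect the union of header names
    for (i = 1; i <= NF; i++)
        if (!($i in seen)) { seen[$i]; order[++ncols] = $i }
}
pass != 2 { next }           # pass 1 reads headers only
FNR == 1 {                   # pass 2: map the columns of this file by name
    delete map
    for (i = 1; i <= NF; i++) map[$i] = i
    if (!header_done++)      # print the merged header exactly once
        for (i = 1; i <= ncols; i++)
            printf "%s%s", order[i], (i < ncols ? "|" : "\n")
    next
}
{                            # pass 2 data row: emit fields in union order
    for (i = 1; i <= ncols; i++)
        printf "%s%s", (order[i] in map ? $(map[order[i]]) : ""), (i < ncols ? "|" : "\n")
}' file1 file2 file3 file4 pass=2 file1 file2 file3 file4

Columns appear in order of first appearance; a file that lacks a column simply emits empty fields for it.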
Hi,
Please help with this.
Input file:
NAME1 BSC1
TEXT ID 1
MAINSFAIL
TEXT ID 2
DGON
TEXT ID 3
lOADONDG
NAME2 BSC2
TEXT ID 1
DGON
TEXT ID 3
lOADONG (1 Reply)
Hi All,
I'm looking for a script which can transpose field names from column headers to values in one column.
For example, the input is:
IDa;IDb;IDc;PARAM1;PARAM2;PARAM3;
a;b;c;p1val;p2val;p3val;
d;e;f;p4val;p5val;p6val;
g;h;i;p7val;p8val;p9val;
and the desired output looks like this:
... (6 Replies)
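The sample output is cut off, so this is an assumption: if the intent is the usual melt/unpivot (one output row per ID triple and parameter), a sketch could be:

awk -F';' -v OFS=';' '
NR == 1 { for (i = 4; i <= NF; i++) hdr[i] = $i; next }   # remember PARAM* names
{
    for (i = 4; i <= NF; i++)
        if (hdr[i] != "")      # the trailing ";" creates one empty field; skip it
            print $1, $2, $3, hdr[i], $i
}' input.txt

which would turn the first data row into a;b;c;PARAM1;p1val, a;b;c;PARAM2;p2val, and a;b;c;PARAM3;p3val.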
Hello Everyone!
I am new to this forum and this is my first post. I apologize in advance for my imperfect English.
I would like to solve this problem but I have no clue how to do it! I will be grateful if someone can help me!
I have a table like this:
gene TF1 TF2 TF3 TF4
gene1 1 2 3 4... (5 Replies)
Hi All,
The sar -u command below writes the column headers into the csv file multiple times.
The expected output should contain the column headers only once.
shell script:
$cat sar_cpu_EBS.sh
#!/bin/bash
while true; do
sar -u 15 1 | awk '/^/ {print $1,$2,$4,$6,$7}' | tr -s ' ' ',' >>... (6 Replies)
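A hedged rework of that loop which writes the header row only on the first iteration. The field numbers and the /^[0-9]/ test are assumptions about this particular sar build's output layout; adjust them to match your system:

#!/bin/bash
csv=sar_cpu_EBS.csv
first=1
while true; do
    sar -u 15 1 | awk -v first="$first" '
        /%user/  { if (first) print $1,$2,$4,$6,$7; next }   # header line
        /^[0-9]/ { print $1,$2,$4,$6,$7 }                    # data line
    ' | tr -s ' ' ',' >> "$csv"
    first=0
done

The /%user/ rule is checked first, so the header line (which also starts with a time stamp) is never double-counted as data, and the Linux banner and Average: summary lines fail both tests.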
hello gurus,
Somebody must have done this before, but I couldn't find anything. Please redirect me if this was solved before, and if not, please help.
To the problem now, I have multiple csv files (about 1000) which I need to concatenate by column header. The final file should have a superset... (4 Replies)
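The two-pass header-union sketch shown earlier in this list scales to this case as well. Assuming comma-delimited files and with merge_by_header.awk as a hypothetical file holding that script body, the run would look like:

awk -F',' -f merge_by_header.awk *.csv pass=2 *.csv > superset.out

The output file is named outside the *.csv glob on purpose, so a second run does not pick up its own result.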
Hello,
I am processing a tab-delimited text file and need to grab all of the column headers in an array.
The input looks like:
num Name PCA_A1 PCA_A2 PCA_A3
0 compound_00 -3.5054 -1.1207 -2.4372
1 compound_01 -2.2641 0.4287 ... (5 Replies)
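A short bash sketch, assuming the header is the first line and the delimiter is a real tab:

# read the first line of the file into an array, splitting on tabs
IFS=$'\t' read -r -a headers < input.txt
printf '%s\n' "${headers[@]}"   # one header per line: num, Name, PCA_A1, ...

If the columns are separated by runs of spaces rather than tabs, headers=($(head -n 1 input.txt)) does the same job.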
All,
I guess someone has asked this kind of question by now, but sorry, I was unable to find it after a deep search.
Here is my request:
I have many files out of which 2 sample files provided below.
File-1 (with A,B as column headers)
A,B
1,2
File-2 (with C, D as column headers)
C,D
4,5
I... (7 Replies)
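For the simple case shown (equal row counts, no shared columns), paste can stitch the files side by side, and the merged header falls out for free:

paste -d',' File-1 File-2
# A,B,C,D
# 1,2,4,5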
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                          General Commands Manual                          bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.