Hi all,
I have created a script which adds two columns and removes two columns in all files.
Filename: Cust_information_1200_201010.txt
Source Data:
"1","Cust information","123","106001","street","1-203 high street"
"1","Cust information","124","105001","street","1-203 high street"
... (0 Replies)
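A minimal awk sketch of the kind of transformation described above, assuming the quoted fields contain no embedded commas; the column positions to drop (5 and 6) and the appended values ("NEW1", "NEW2") are placeholders, not the poster's actual requirement:

  # Sketch only: drop fields 5 and 6 and append two placeholder columns.
  awk 'BEGIN { FS = OFS = "," }
  {
      out = ""
      for (i = 1; i <= NF; i++) {
          if (i == 5 || i == 6) continue          # columns to remove (assumed)
          out = (out == "" ? $i : out OFS $i)
      }
      print out OFS "\"NEW1\"" OFS "\"NEW2\""     # columns to add (placeholders)
  }' Cust_information_1200_201010.txt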
Hi,
I am new to unix and would greatly appreciate some help.
I have a file containing multiple columns with different sets of data, e.g.
File 1:
John Ireland 27_December_69
Mary England 13_March_55
Mike France 02_June_80
I am currently using the awk... (10 Replies)
Hi All,
I have a .csv file in the format below:
startTime, endTime, delta, gName, rName, rNumber, m2239max, m2239min, m2239avg, m100016509avg, m100019240max, metric3min, m100019240avg, propValues
11-Mar-2012 00:00:00, 11-Mar-2012 00:05:00, 300.0, vma3550a, a-1_CPU Index<1>, 200237463, 0.0,... (9 Replies)
Hi friends,
I have one file like the one below (.csv type):
SNo,data1,data2
1,1,2
2,2,3
3,3,2
and another file like below.
Exclude
data1
where Exclude should be treated as a column name in file2.
I want the output shown below.
SNo,data2
1,2
2,3
3,2
where the data1 column has been removed from... (2 Replies)
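A sketch of one way to do this, assuming file2 holds the header "Exclude" on its first line and the column name to drop on its second line, and that file1's first line is the header:

  awk -F, '
      NR == FNR { if (FNR == 2) drop = $1; next }                 # read column name from file2
      FNR == 1  { for (i = 1; i <= NF; i++) if ($i == drop) skip = i }
      {
          out = ""
          for (i = 1; i <= NF; i++) {
              if (i == skip) continue                             # omit the named column
              out = (out == "" ? $i : out "," $i)
          }
          print out
      }
  ' file2 file1

With the sample data this prints SNo,data2 followed by 1,2 / 2,3 / 3,2.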
Hello everyone,
I searched the forum looking for answers to this, but I could not pinpoint exactly what I need and keep running into trouble.
I have many files, each with two columns and hundreds of rows.
The first column is a string (it can have many words) and the second column is a number. The files are... (5 Replies)
Hi Friends,
I have come across some files where some of the columns do not have data.
Key, Data1,Data2,Data3,Data4,Data5
A,5,6,,10,,
A,3,4,,3,,
B,1,,4,5,,
B,2,,3,4,,
Looking at the above data, the Data5 column has no rows filled in, so remove only that column (here, Data5) and... (4 Replies)
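A sketch that reads the file twice, assuming the layout shown above (comma-separated, one header line, possibly a trailing comma): the first pass records which columns carry data in at least one row, the second pass prints every line without the all-empty columns.

  awk -F, '
      NR == FNR { if (FNR > 1) for (i = 1; i <= NF; i++) if ($i != "") used[i] = 1; next }
      {
          out = ""
          for (i = 1; i <= NF; i++) {
              if (!(i in used)) continue            # drop columns that are empty everywhere
              out = (out == "" ? $i : out "," $i)
          }
          print out
      }
  ' file.csv file.csv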
Hi all, I know this sounds suspiciously like a homework exercise, but it is not.
My goal is to take a file and match rows on the "ID" and "Date" columns; where both match, add up the total number of minutes worked and place that in the file, while not printing the original rows that I... (6 Replies)
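A sketch of that kind of two-key summing, under assumptions not stated in the post: whitespace-separated input, ID in column 1, date in column 2, minutes in column 3, and a hypothetical filename worklog.txt. It emits one summary row per ID/date pair instead of the original rows.

  awk '{ sum[$1 FS $2] += $3 }                      # group minutes by ID + date
       END { for (k in sum) print k, sum[k] }' worklog.txt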
Hi all, I'm pretty much a newbie to UNIX. I would appreciate any help with UNIX coding to compare two large csv files (greater than 10 GB in size) and output a file with the matching columns.
I want to compare file1 and file2 by the 'id' and 'chain' columns, then extract the exactly matching rows'... (5 Replies)
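A sketch, assuming comma-separated files with 'id' and 'chain' in the first two columns; it keeps the rows of file2 whose id/chain pair also appears in file1.

  awk -F, 'NR == FNR { seen[$1 FS $2] = 1; next }   # remember id,chain pairs from file1
           ($1 FS $2) in seen' file1.csv file2.csv > matched.csv

Note that this holds every file1 key in memory, so for files well over 10 GB a sort/join pipeline on the two key columns may be the safer route.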
Hello All,
I have a requirement in which I will be given an SQL query as input in a file with a dynamic number of columns. For example, sometimes I will get 5 columns, sometimes 8 columns, and so on, up to 20 columns.
So my requirement is to generate an output query which will have 20 columns, all the... (7 Replies)
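A sketch under a big assumption, since the post is truncated: that the missing columns should be padded with NULLs. It takes a comma-separated column list (the list, column names, and my_table are all placeholders) and prints a query with exactly 20 select-list entries.

  cols="col1,col2,col3,col4,col5"                   # hypothetical input column list
  n=$(awk -F, '{ print NF }' <<< "$cols")
  pad=""
  for ((i = n + 1; i <= 20; i++)); do
      pad="$pad, NULL AS col$i"                     # pad up to 20 columns
  done
  echo "SELECT $cols$pad FROM my_table;"            # my_table is a placeholder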
Hi All,
I'm embedding an SQL query in a script which gives the following output:
Assignee Group Total
ABC Group1 17
PQR Group2 5
PQR Group3 6
XYZ Group1 10
XYZ Group3 5
I have saved the above output in a file.
How do I sum up the contents of this output so as to get the following output:
... (4 Replies)
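A sketch, assuming the goal is one summed Total per Assignee and that the saved file (report.txt here is a placeholder name) still has its one-line header:

  awk 'NR > 1 { sum[$1] += $3 }                     # skip header, sum Total per Assignee
       END { for (a in sum) print a, sum[a] }' report.txt

With the sample above this would print ABC 17, PQR 11 and XYZ 15.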
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                     General Commands Manual                     bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.