I have a file called clientname_filename.csv
whose contents look like this:
col1|col2|col3|col4|
510|abc|xxx|450|
510|abc11|yyy|350
510|pqr99|zzz| 670
512|222|439|110
Here I have to check the contents of each column against its expected data type.
I have a constraint that col1 always contains a numeric value, column 2... (12 Replies)
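A minimal awk sketch of one way to check this (assuming '|' as the field separator and that col1 must be an integer; the rules for the remaining columns are in the post's truncated description, so add one test per column as needed):

  # flag rows whose first column is not purely numeric
  awk -F'|' 'NR > 1 && $1 !~ /^[0-9]+$/ {
      print "line " NR ": col1 is not numeric -> " $0
  }' clientname_filename.csv

The same pattern extends to the other columns, e.g. $2 ~ /^[A-Za-z0-9]+$/ for an alphanumeric column.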
Howdy,
I need to convert an association data matrix, currently in a two-column format, into a matrix with numbers indicating the number of associations. I've been looking around for AWK code in the list, but could not find anything. Here's an example of what I want to perform:
original... (10 Replies)
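One hedged way to build such a count matrix in awk (the input is assumed here to be two whitespace-separated columns, since the original example is truncated):

  # count how often each (row,col) pair occurs, then print a cross-tab
  awk '
  { count[$1 SUBSEP $2]++; rows[$1]; cols[$2] }
  END {
      for (c in cols) printf "\t%s", c
      print ""
      for (r in rows) {
          printf "%s", r
          for (c in cols) printf "\t%d", count[r SUBSEP c] + 0
          print ""
      }
  }' pairs.txt

Note that for..in order is arbitrary in awk; pipe the output through sort, or use gawk's PROCINFO["sorted_in"], if the rows and columns need a fixed order.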
Hi everyone, just a simple question...
I've been using an awk script to calculate my data...
I have 3 files:
file a1.txt:
2
3
4
5
3
4
file a2.txt:
4
5
6
7
8 (1 Reply)
Looking for the most efficient way to add up a column of data based on values in the rows.
Random data
Name-Number-ID
Sarah-2.0-15
Bob-6.3-15
Sally-1.0-10
James-1.0-10
Scotty-10.7-15
So I would select all those who have ID = 15 and then add up the total of the Number column.
read -p "Enter ID number: " Num
... (3 Replies)
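A minimal sketch combining the prompt with an awk sum (the data file name is an assumption; fields are split on the dashes):

  read -p "Enter ID number: " Num
  # sum field 2 (Number) of every row whose field 3 (ID) matches
  awk -F'-' -v id="$Num" '$3 == id { total += $2 } END { print total + 0 }' data.txt

With ID = 15 and the sample above this prints 19 (2.0 + 6.3 + 10.7).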
Hi, I have data of the following type,
chr1 234 678 39 852 638 abcd 7895
chr1 526 326 33 887 965 kilj 5849
Now, I would like to have something like this
chr1 234 678 39 852 638 abcd 7895 <a href="http://unix.com/thread=chr1:234-678">Link</a>
chr1 526 326 33 887 965 kilj 5849 <a... (5 Replies)
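A hedged sketch of the link-appending step (the URL pattern is inferred from the one visible example, so treat it as an assumption):

  # append an <a href> built from fields 1-3 to every line
  awk '{
      printf "%s <a href=\"http://unix.com/thread=%s:%s-%s\">Link</a>\n", $0, $1, $2, $3
  }' input.txt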
Hello experts,
Please help me achieve this in the easiest way possible. I have 2 csv files with the following data:
File1
08/23/2012 12:35:47,JOB_5330
08/23/2012 12:35:47,JOB_5330
08/23/2012 12:36:09,JOB_5340
08/23/2012 12:36:14,JOB_5340
08/23/2012 12:36:22,JOB_5350
08/23/2012... (5 Replies)
Dear Unix Gurus,
I have a text file with multiple columns, for example, see sample.txt below
0 1 301
1 4 250
2 6 140
3 2 610
7 1 180
I want to find the maximum in, say, column 3, normalise all the values to this maximum value (to 4 decimal places) and spit everything into a new... (2 Replies)
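A two-pass awk sketch (the file is read twice: the first pass finds the column-3 maximum, the second divides by it; the output file name is an assumption):

  awk '
  NR == FNR { if ($3 > max) max = $3; next }   # pass 1: find max of col 3
  { printf "%s %s %.4f\n", $1, $2, $3 / max }  # pass 2: normalise col 3
  ' sample.txt sample.txt > normalised.txt

For the sample above the maximum is 610, so the row "3 2 610" becomes "3 2 1.0000".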
Hi guys,
I have a problem appending new data at the end of each line of my files: the current code takes the whole value of the nth column, but I want only a specific part of it. The new data is based on a substring of the 11th, 12th, and 13th columns, which hold comma-separated values.
My code:
awk... (4 Replies)
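Since the post's code and the exact substring rule are truncated, here is only a rough sketch of the general shape: split the comma-separated 11th-13th fields and append just the piece you need instead of the whole values (taking the first token of each is an assumption for illustration):

  awk '{
      split($11, a, ","); split($12, b, ","); split($13, c, ",")
      print $0, a[1] b[1] c[1]   # append chosen substrings, not whole fields
  }' infile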
Hi team,
I have below sample file.
$ cat sample
dn: MSISDN=400512345677,dc=msisdn,ou=NPSD,serv=CSPS,ou=servCommonData,dc=stc
structuralObjectClass: NphData
objectClass: NphData
objectClass: MSISDN
entryDS: 0
nodeId: 35
createTimestamp: 20170216121047Z
modifyTimestamp: 20170216121047Z... (3 Replies)
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                    General Commands Manual                    bup-margin(1)

NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
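The core computation is easy to sketch: because the object ids are sorted, the pair sharing the longest prefix is always adjacent, so one linear scan suffices. A rough awk illustration of the idea (not bup's actual implementation; gawk is assumed for strtonum() and xor()):

  # stdin: sorted hex object ids, one per line
  gawk '
  BEGIN { max = 0 }
  NR > 1 {
      n = 0
      for (i = 1; i <= length($1); i++) {
          a = substr(prev, i, 1); b = substr($1, i, 1)
          if (a == b) { n += 4; continue }   # whole matching hex digit = 4 bits
          x = xor(strtonum("0x" a), strtonum("0x" b))
          if (x < 8) n++; if (x < 4) n++; if (x < 2) n++   # leading matching bits
          break
      }
      if (n > max) max = n
  }
  { prev = $1 }
  END { print max " matching prefix bits" }'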
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
Don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)

BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.