@Scrutinizer: That works fine for square matrices, but it runs into difficulties when the row count and column count differ. If the column count is greater, you'll get empty lines at the end of the result; if it is smaller, the last lines will be missing.
In the END section, you'll need to print max(NF) lines, not NR lines. Check this small amended version of your script above:
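Something along these lines, as a minimal sketch (assuming whitespace-separated input; swap in your own field separator if needed):

awk '
{
    if (NF > maxnf) maxnf = NF            # remember the widest row seen
    for (i = 1; i <= NF; i++)
        cell[NR, i] = $i                  # stash every field by row and column
}
END {
    # print maxnf lines, not NR, so no column is dropped
    for (i = 1; i <= maxnf; i++)
        for (j = 1; j <= NR; j++)
            printf "%s%s", cell[j, i], (j < NR ? OFS : ORS)
}' infile

Rows with fewer fields than maxnf simply contribute empty cells in the later output lines.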
Greetings all:
I am still new to the Unix environment and need help with the following requirement.
I have a large sequential file, sorted on a field (say store#), that is being split into several smaller files, one for each store. That means if there are 500 stores, there will be 500 files. This... (1 Reply)
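A sketch of one common approach, assuming the store number is in field 1 of a pipe-delimited file (adjust -F and the field number to your layout; the store_*.txt naming is just for illustration):

awk -F'|' '
$1 != prev {                 # store number changed: switch output files
    if (out) close(out)      # close the old one so we never exceed the open-file limit
    out = "store_" $1 ".txt"
    prev = $1
}
{ print > out }
' sorted_input

Because the input is already sorted on the store field, each output file is opened exactly once.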
Hi,
I have an input data file:-
Test4599,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,2,2,Rain
Test90,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,Not Rain
etc....
I want to transpose this data to:-... (2 Replies)
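If the target is the usual row-to-column flip, a sketch like the transpose further up the page should work here too; with the awk program body saved as transpose.awk (a hypothetical name), run it with comma separators:

awk -F',' -v OFS=',' -f transpose.awk input.txt

The trailing Rain/Not Rain label is just one more field, so it ends up as the final transposed row.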
Hello. Very new to shell scripting and would like to know if anyone could help me.
I have data that's being pulled into a txt file, and I currently have to transpose it manually, which is taking a long time.
Here is what the data looks like:
Server1 -- Date -- Other -- value... (7 Replies)
Hi,
I have a table in SQL. From this table I'm storing the first column's values in a shell array variable,
then passing this variable as an argument to a SQL procedure.
But the procedure runs fine only for 1024 values in the array...
How do I store more than 1024 values in the array... (5 Replies)
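A likely culprit: ksh88 arrays historically topped out at 1024 elements, while bash and ksh93 have no such low cap. A minimal sketch, assuming the values arrive one per line in values.txt (a hypothetical file name):

#!/bin/bash
# bash arrays are not capped at 1024 elements
vals=()
while IFS= read -r v; do
    vals+=("$v")
done < values.txt
echo "loaded ${#vals[@]} values"

Even with a bigger array, passing thousands of values as a single procedure argument can hit other limits; loading them into a temporary table and letting the procedure read from there is the usual escape hatch.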
I can no longer find my commands, but I used to be able to transpose data with common fields from a single column into rows using the command line. My data is separated as follows:
NAME=BOB
ADDRESS=COLORADO
PET=CAT
NAME=SUSAN
ADDRESS=TEXAS
PET=BIRD
NAME=TOM
ADDRESS=UTAH
PET=DOG
I would... (7 Replies)
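One way, as a sketch: key on the NAME= line to detect the start of each record (this assumes every record begins with NAME= and the values themselves never contain '='):

awk -F'=' '
$1 == "NAME" && line != "" { print line; line = "" }  # a new record starts: emit the previous one
{ line = (line == "" ? $2 : line OFS $2) }
END { if (line != "") print line }
' file

which turns the sample above into:

BOB COLORADO CAT
SUSAN TEXAS BIRD
TOM UTAH DOG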
Hi, I have the below requirement and need help.
One file contains the metadata information and the other file holds the data: match each column name from file1 against file2, extract the corresponding column's values, and write them to another file.
File1:
CUSTTYPECD
COSTCENTER
FNAME
LNAME
SERVICELVL
... (1 Reply)
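A sketch of one way to do the lookup, assuming file2 is comma-separated with a header row whose names match the entries in file1 (adjust FS/OFS to the real delimiter):

awk -F',' -v OFS=',' '
NR == FNR { want[$1]; next }            # pass 1 (file1): collect the wanted column names
FNR == 1  { for (i = 1; i <= NF; i++)   # pass 2 header: map wanted names to positions
                if ($i in want) cols[++n] = i }
{
    line = ""
    for (i = 1; i <= n; i++)
        line = (i == 1 ? $(cols[i]) : line OFS $(cols[i]))
    print line
}' file1 file2 > file3

The header row itself is printed too (with just the selected names); drop it with an FNR > 1 guard if it isn't wanted.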
I have a messy, pipe-delimited ("|") input dataset.
I would like to create a file of the ID plus each component of field 4 (which is delimited by ";"), reshaped into a long, skinny layout for easier processing.
A couple of complications are that field 4 may contain both commas and linefeed characters from the... (9 Replies)
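Setting the embedded linefeeds aside for a moment, the core reshaping is a split on ';' (a sketch; it assumes the ID is field 1 and that any linefeeds inside field 4 have been dealt with first, e.g. by rejoining broken records):

awk -F'|' '
{
    n = split($4, part, ";")
    for (i = 1; i <= n; i++)
        print $1 "|" part[i]      # one ID|component pair per output line
}' messy_input

The commas inside field 4 are harmless here, since we only ever split on '|' and ';'; the embedded linefeeds are the real problem, because they break the one-record-per-line assumption.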
Hi All,
I have sort of a case where I need to transpose data from rows to columns.
Input data:
Afghanistan|10000|1
Albania|25000|4
Algeria|25000|7
Andorra|10000|4
Angola|25000|47
Antigua and Barbuda|25000|23
Argentina|5000|3
Armenia|100000|12
Aruba|20000|2
Australia|50000|2
I need to transpose... (3 Replies)
LEARN ABOUT DEBIAN
bup-margin
bup-margin(1)                          General Commands Manual                          bup-margin(1)
NAME
bup-margin - figure out your deduplication safety margin
SYNOPSIS
bup margin [options...]
DESCRIPTION
bup margin iterates through all objects in your bup repository, calculating the largest number of prefix bits shared between any two
entries. This number, n, identifies the longest subset of SHA-1 you could use and still encounter a collision between your object ids.
For example, one system that was tested had a collection of 11 million objects (70 GB), and bup margin returned 45. That means a 46-bit
hash would be sufficient to avoid all collisions among that set of objects; each object in that repository could be uniquely identified by
its first 46 bits.
The number of bits needed seems to increase by about 1 or 2 for every doubling of the number of objects. Since SHA-1 hashes have 160 bits,
that leaves 115 bits of margin. Of course, because SHA-1 hashes are essentially random, it's theoretically possible to use many more bits
with far fewer objects.
If you're paranoid about the possibility of SHA-1 collisions, you can monitor your repository by running bup margin occasionally to see if
you're getting dangerously close to 160 bits.
OPTIONS
--predict
Guess the offset into each index file where a particular object will appear, and report the maximum deviation of the correct answer
from the guess. This is potentially useful for tuning an interpolation search algorithm.
--ignore-midx
don't use .midx files, use only .idx files. This is only really useful when used with --predict.
EXAMPLE
$ bup margin
Reading indexes: 100.00% (1612581/1612581), done.
40
40 matching prefix bits
1.94 bits per doubling
120 bits (61.86 doublings) remaining
4.19338e+18 times larger is possible
Everyone on earth could have 625878182 data sets
like yours, all in one repository, and we would
expect 1 object collision.
$ bup margin --predict
PackIdxList: using 1 index.
Reading indexes: 100.00% (1612581/1612581), done.
915 of 1612581 (0.057%)
SEE ALSO
bup-midx(1), bup-save(1)
BUP
Part of the bup(1) suite.
AUTHORS
Avery Pennarun <apenwarr@gmail.com>.