Full Discussion: Matrix to 3 col sorted
Post 303003886 by ctsgnb on Friday, 22nd of September 2017, 06:08:49 AM
Whatever strategy you choose, given what you expect as output, you will end up with an output file nearly 5 times bigger than your input file.

input file:
rows = 400k
columns = 3000
Total amount of data = 400k x 3000 = 1.2 billion values

output file:
rows = (400k - 1) x (3000 - 3)
"-1" because of the header row
"-3" because the first 3 columns contain the data you want to keep (the key columns that are not expanded)
columns = 5 (I only count the data, not the formatting and "__" stuff)
Total amount of data = 5 x (400k - 1) x (3000 - 3) ≈ 5.99 billion values (see the quick awk check below)
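A quick awk one-liner to sanity-check those counts (using only the row and column numbers quoted above):

awk 'BEGIN {
    rows = 400000; cols = 3000
    in_cells  = rows * cols                      # 1.2e9 values in the input
    out_cells = 5 * (rows - 1) * (cols - 3)      # ~5.99e9 values in the output
    printf "input: %.3e   output: %.3e   ratio: %.2f\n", in_cells, out_cells, out_cells / in_cells
}'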

Indeed, you will be repeating the data of the first 3 columns for each and every subsequent column (so (3 + 2) x (3000 - 3) values per input row) rather than writing them once for all of them (the "+ 2" is because you also add the column header and the value of that subsequent column).

So, simply because of your prerequisites and expectations, you will for sure have to write that much more data, and thus you will need the corresponding number of write I/Os, whatever strategy you choose.
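For reference, here is a minimal awk sketch of the kind of reshaping being discussed, assuming a tab-separated matrix, a header row, and the first 3 columns as the keys that get repeated; the exact output layout (and the "__" formatting) is my assumption, not necessarily the format wanted here:

# Sketch only: header on line 1, first 3 columns are keys,
# emits 5 fields per output row: key1, key2, key3, column header, value.
awk -F'\t' -v OFS='\t' '
NR == 1 { for (i = 4; i <= NF; i++) hdr[i] = $i; next }
        { for (i = 4; i <= NF; i++) print $1, $2, $3, hdr[i], $i }
' matrix.tsv > long.tsv

Every line of that long.tsv repeats the 3 key fields, which is exactly where the ~5x growth comes from.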

If you have a huge amount of data to write, then there is an incompressible amount of time needed to do it.
Of course I/O operations are quicker in RAM or on an SSD than on a standard hard drive, but still ...

It would be cheaper to save the transposed matrix and query that instead.
You would then have the data twice: by rows and by columns ... of course it will cost you 1.2 billion more values.
But 1.2 billion x 2 = 2.4 billion is still less than half of the 5.99 billion values!
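Just to illustrate what "having it by rows and by columns" means, a naive awk transpose is sketched below; holding 1.2 billion values in awk arrays is not realistic, so at this size you would transpose block-wise or let a database handle it:

# Naive in-memory transpose (illustration only - not realistic at 1.2 billion cells).
awk -F'\t' -v OFS='\t' '
{ for (i = 1; i <= NF; i++) cell[i, NR] = $i; nf = (NF > nf ? NF : nf) }
END {
    for (i = 1; i <= nf; i++) {
        row = cell[i, 1]
        for (j = 2; j <= NR; j++) row = row OFS cell[i, j]
        print row
    }
}' matrix.tsv > transposed.tsv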

Anyway, processing such an amount of data with flat files is inappropriate: this is what databases have been designed for...
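As a rough idea of that route (the table name, column names and example query below are made up for the sketch, and it assumes the 5-field long format discussed above):

sqlite3 matrix.db <<'SQL'
-- Hypothetical schema for the 5-field long format: 3 keys, column name, value.
CREATE TABLE IF NOT EXISTS cells (key1 TEXT, key2 TEXT, key3 TEXT,
                                  colname TEXT, value REAL);
.mode tabs
.import long.tsv cells
CREATE INDEX IF NOT EXISTS idx_cells_colname ON cells(colname);
-- Example: fetch one column's values without rescanning a ~6-billion-value flat file.
SELECT key1, key2, key3, value FROM cells WHERE colname = 'col_0042' LIMIT 5;
SQL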
My 2 cents ...


PS// The reality might even be worse, since I didn't even take into account the size of each value, just the number of them ...

Last edited by ctsgnb; 09-22-2017 at 12:18 PM..
 

10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Help On col command

Hello Can Any1 tell me the difference between the col command and the col command with the -f option. I tried running both of them but i can't see any difference. Please guide me. (1 Reply)
Discussion started by: rahulrathod
1 Replies

2. Ubuntu

Match col 1 of File 1 with col 1 File 2 and create a 3rd file

Hello, I have a 1.6 GB file that I would like to modify by matching some ids in col1 with the ids in col 1 of file2.txt and save the results into a 3rd file. For example: File 1 has 1411 rows, I ignore how many columns it has (thousands) File 2 has 311 rows, 1 column Would like to... (7 Replies)
Discussion started by: sogi
7 Replies

3. Shell Programming and Scripting

diagonal matrix to square matrix

Hello, all! I am struggling with a short script to read a diagonal matrix for later retrieval. 1.000 0.234 0.435 0.123 0.012 0.102 0.325 0.412 0.087 0.098 1.000 0.111 0.412 0.115 0.058 0.091 0.190 0.045 0.058 1.000 0.205 0.542 0.335 0.054 0.117 0.203 0.125 1.000 0.587 0.159 0.357... (11 Replies)
Discussion started by: yifangt
11 Replies

4. Shell Programming and Scripting

i can't cut the third col

SW_dist_intr false Enable SW distribution of interrupts True autorestart true Automatically REBOOT OS after a crash True boottype disk N/A False capacity_inc 1.00 ... (7 Replies)
Discussion started by: maxim42
7 Replies

5. Shell Programming and Scripting

how to add new col in a file

Hi, Experts, I have a requirement as following: my source file: a a a b b c c c c I need add one more colume as following: 1 a 2 a 3 a 1 b 2 b 1 c 2 c (4 Replies)
Discussion started by: ken002
4 Replies

6. Ubuntu

How to convert full data matrix to linearised left data matrix?

Hi all, Is there a way to convert full data matrix to linearised left data matrix? e.g full data matrix Bh1 Bh2 Bh3 Bh4 Bh5 Bh6 Bh7 Bh1 0 0.241058 0.236129 0.244397 0.237479 0.240767 0.245245 Bh2 0.241058 0 0.240594 0.241931 0.241975 ... (8 Replies)
Discussion started by: evoll
8 Replies

7. UNIX for Advanced & Expert Users

Print line based on highest value of col (B) and repetion of values in col (A)

Hello everyone, I am writing a script to process data from the ATP world tour. I have a file which contains: t=540 y=2011 r=1 p=N409 t=540 y=2011 r=2 p=N409 t=540 y=2011 r=3 p=N409 t=540 y=2011 r=4 p=N409 t=520 y=2011 r=1 p=N409 t=520 y=2011 r=2 p=N409 t=520 y=2011 r=3 p=N409 The... (4 Replies)
Discussion started by: imahmoud
4 Replies

8. Shell Programming and Scripting

awk? adjacency matrix to adjacency list / correlation matrix to list

Hi everyone I am very new at awk but think that that might be the best strategy for this. I have a matrix very similar to a correlation matrix and in practical terms I need to convert it into a list containing the values from the matrix (one value per line) with the first field of the line (row... (5 Replies)
Discussion started by: stonemonkey
5 Replies

9. Shell Programming and Scripting

Printing from col x to end of line, except last col

Hello, I have some tab delimited data and I need to move the last col. I could hard code it, awk '{ print $1,$NF,$2,$3,$4,etc }' infile > outfile but it would be nice to know the syntax to print a range cols. I know in cut you can do, cut -f 1,4-8,11- to print fields 1,... (8 Replies)
Discussion started by: LMHmedchem
8 Replies

10. Shell Programming and Scripting

Modifying col values based on another col

Hi, Please help with this. I have several excel files (with and .xlsx format) with 10-15 columns each. They all have the same type of data but the columns are not ordered in the same way. Here is a 3 column example. What I want to do add the alphabet from column 2 to column 3, provided... (9 Replies)
Discussion started by: newbie83
9 Replies