Data transformation
Posted in Shell Programming and Scripting by Kanja, 03-24-2015 02:09 PM

I have an input text file in the following format, with thousands of lines:

input file:
Code:
3386(11:11,Ani:0,Bri:1,ch:1,Jwe:0,Jor:0,LP:0,Lo:0,NS:1,al:1,bo:0,boy:0,bru:0,sh:0,cor:1,dum:0,ery:0,mac:0,mic:0)
3387(11:11,Ani:1,Bri:0,ch:1,Jwe:2,Jor:0,LP:0,Lo:0,NS:3,al:1,bo:0,boy:0,bru:0,sh:0,cor:4,dum:0,ery:1,mac:0,mic:0)
386(11:11,Ani:1,Bri:1,ch:1,Jwe:4,Jor:0,LP:0,Lo:3,NS:1,al:1,bo:7,boy:0,bru:9,sh:0,cor:1,dum:0,ery:0,mac:0,mic:0)
...
(thousands more lines)

I would like to transform this data into a tab-delimited file in the following format:
the first number on each input line becomes a column header, each label becomes a row, and the number after each colon becomes the corresponding value in the matrix.

Desired output

Code:
     3386  3387  386
Ani  0     1     1
Bri  1     0     1
ch   1     1     1
Jwe  0     2     4
Jor  0     0     0
LP   0     0     0
Lo   0     0     3
NS   1     3     1
al   1     1     1
bo   0     0     7
boy  0     0     0
bru  0     0     9
sh   0     0     0
cor  1     4     1
dum  0     0     0
ery  0     1     0
mac  0     0     0
mic  0     0     0

It would be great if I could get some help with awk (or another language) to do this data transformation.
 
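A minimal awk sketch along those lines is below. It assumes the first pair inside the parentheses (e.g. 11:11) is a summary field to be skipped, and it collects row labels in the order they are first seen; the file name transpose.awk is just a placeholder.

Code:
# transpose.awk - sketch: turn ID(label:value,...) lines into a tab-delimited matrix.
# Assumes every line looks like ID(11:11,label:val,label:val,...) and that the
# first pair after the "(" is a summary field to skip.
BEGIN { FS = "[(),]"; OFS = "\t" }
{
    id = $1                          # number before "(" becomes a column
    cols[++ncol] = id
    for (i = 3; i <= NF; i++) {      # $2 is the 11:11 field, so start at $3
        if ($i == "") continue       # skip the empty field after the closing ")"
        split($i, kv, ":")
        if (!(kv[1] in seen)) {      # remember labels in first-seen order
            seen[kv[1]] = 1
            rows[++nrow] = kv[1]
        }
        val[kv[1], id] = kv[2]
    }
}
END {
    header = ""
    for (c = 1; c <= ncol; c++) header = header OFS cols[c]
    print header                     # leading tab, then one ID per column
    for (r = 1; r <= nrow; r++) {
        line = rows[r]
        for (c = 1; c <= ncol; c++) line = line OFS val[rows[r], cols[c]]
        print line
    }
}

Run it as  awk -f transpose.awk inputfile > output.tsv  — the output is tab-separated; change OFS in the BEGIN block if you need another delimiter, and a cell is simply left empty if a label is missing on a particular input line.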
