AWK: Split fields separated by semicolon
Post 302596460 by mehar on Tuesday 7th of February 2012 12:39:15 PM
Thanks for your answer.

I am sorry that I was not clear with the question.

The file is a tab-delimited text file with 8 columns, and the 8th column (INFO) contains text like DP=51;VDB=0.0000;AF1=1;AC1=2;DP4=3,0,47,1;MQ=31;FQ=-99;PV4=1,1,0.31,1

I just need to split the text under INFO so that each KEY=VALUE entry becomes its own column, for example:

CHROM POS ID REF ALT QUAL FILTER DP VDB AF1 AC1..................PV4
1 3000012 . A G 126 . 51 0.000 1 2 1,1,0.31,1
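
In awk terms, something like the sketch below is what I am after (a rough sketch, assuming a file called input.txt and that the INFO keys appear in the same order on every line; not tested on the real data):

awk -F'\t' -v OFS='\t' '{
    n = split($8, kv, ";")               # break the INFO column on semicolons
    out = $1 OFS $2 OFS $3 OFS $4 OFS $5 OFS $6 OFS $7
    for (i = 1; i <= n; i++) {
        split(kv[i], pair, "=")          # split each KEY=VALUE on the equals sign
        out = out OFS pair[2]            # keep only the VALUE as a new column
    }
    print out
}' input.txt

The key names are available as pair[1], so a header row (DP VDB AF1 ... PV4) could be built the same way.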
 
