Full Discussion: Split Records
Post 302693777 by RudiC on Wednesday 29th of August 2012 05:15:36 PM
Your sample output is incorrect in line 4; it should be 2,pa. This will yield the corrected output from your input:
Code:
awk -F"," -vOFS="," 'NR>1 {n=split($2, co, ":"); for (i=1;i<=n;i++) print $1, co[i]}' infile
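For illustration only, here is a hypothetical input (the thread's original file is not included in this excerpt) and the output the command produces for it. NR>1 skips the header line, split() breaks the second field on ":", and each piece is printed with the original first field:
Code:
$ cat infile
col1,col2
1,ab:cd:ef
2,pa:qr
$ awk -F"," -vOFS="," 'NR>1 {n=split($2, co, ":"); for (i=1;i<=n;i++) print $1, co[i]}' infile
1,ab
1,cd
1,ef
2,pa
2,qr

With this made-up input, line 4 of the output is 2,pa, as stated in the correction above.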

 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

dynamically split the records

Hi, I was wondering whether it would be possible to split the record dynamically based on certain values. For instance, I have a file whose records carry a predefined split value, i.e. 10: col1 col2 col3 col4 ------------------------ aaaa bbbb 2 44aaaabbbb55cccddd1110 mmn xnmn 3... (6 Replies)
Discussion started by: braindrain
6 Replies

2. UNIX for Dummies Questions & Answers

How to split multiple records file in n files

Hello, Each record has a length of 7 characters. I have 2 types of records, 010 and 011, and there is no end-of-line character. For example, my file looks like this: 010hello 010bonjour011both 011sisters I would like to have 2 files: 010.txt (2 records) hello bonjour and ... (1 Reply)
Discussion started by: jeuffeu
1 Replies
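One way to handle a record stream with no end-of-line characters is to re-chunk it to the fixed record width and route each chunk by its type prefix. A minimal sketch, assuming 7-character records whose first 3 characters are the type (the sample strings in the preview are not exactly 7 characters wide, so the width would need adjusting to the real layout; infile is a placeholder name):
Code:
# Re-chunk the stream into fixed-width records, then write each record's
# payload to a file named after its 3-character type prefix (010.txt, 011.txt).
fold -w 7 infile | awk '{ type = substr($0, 1, 3); print substr($0, 4) > (type ".txt") }'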

3. UNIX for Advanced & Expert Users

Split records based on '-'

Hi, I have a pipe-delimited file and have to search the second field for a pattern: if the second field does not contain a '-', I need to start capturing records from that line until I find another second field with a '-' value. Below is the sample data: SOURCE DATA ABC|ABC_702148-PARAM... (3 Replies)
Discussion started by: mora
3 Replies
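A flag-based awk sketch for one reading of that requirement, printing from a record whose second field has no '-' up to and including the next record whose second field does contain one (whether that closing record should be printed is an assumption):
Code:
awk -F"|" '
    $2 !~ /-/ { capture = 1 }   # start capturing at a 2nd field without "-"
    capture                     # print records while the flag is set
    $2 ~  /-/ { capture = 0 }   # stop after the next 2nd field containing "-"
' infile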

4. Shell Programming and Scripting

split records into different files

Hi All, I want my file to be split based on the value of 'N' (passed as an argument). If the value of 'N' is '2', then 4 new files will be generated from the source file below, and the output files should be named File_$num, where num is incremented. Source file: 1 2 3 4 5 O/p Files: ... (6 Replies)
Discussion started by: HemaV
6 Replies
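A sketch of that kind of chunking in awk, using the File_<num> naming from the preview; the chunk size n would come from the argument N (set to 2 here only for illustration), and sourcefile is a placeholder name:
Code:
N=2   # chunk size; in the original request this is passed as an argument
awk -v n="$N" '
    (NR - 1) % n == 0 { if (file != "") close(file); file = "File_" ++num }   # new file every n lines
    { print > file }
' sourcefile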

5. UNIX for Dummies Questions & Answers

Split single record to multiple records

Hi Friends, source .... col1,col2,col3 a,b,1;2;3 Here the column delimiter is a comma (,). We don't know the maximum length of col3: now we have 1;2;3, but next time I may receive 1;2;3;4;5; etc... required output .............. col1,col2,col3 a,b,1 a,b,2 a,b,3 please give me... (5 Replies)
Discussion started by: bab.galary
5 Replies
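The same split-in-a-loop idea as the answer above covers this case; a minimal sketch that passes the header through and splits the third field on ';' no matter how many values it holds:
Code:
awk -F"," -v OFS="," '
    NR == 1 { print; next }                                               # keep the header line as-is
    { n = split($3, v, ";"); for (i = 1; i <= n; i++) print $1, $2, v[i] }
' infile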

6. Shell Programming and Scripting

Split records into multiple records

Hi All, I am trying to split a record into multiple records based on a value. Input.txt "A",1,0,10 "B",2,0,10,15,20 "C",3,11,14,16,19,21,23 "D",1,0,5 My desired output is: "A",1,0,10 "B",2,0,10 "B",2,15,20 "C",3,11,14 "C",3,16,19 "C",3,21,23 (4 Replies)
Discussion started by: kmsekhar
4 Replies
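A sketch for that pairing: keep the first two fields and emit the remaining fields two at a time (this assumes an even number of values after the second field, as in the sample):
Code:
awk -F"," -v OFS="," '{ for (i = 3; i < NF; i += 2) print $1, $2, $i, $(i + 1) }' Input.txt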

7. Shell Programming and Scripting

Split the Master and Child Records

Hi, I receive a file that has a Master record followed by one or more Child records, as shown below and also attached. The key for the child record is from position 4 to position 80 in the parent record, and the requirement is to create two files: 1. Parent file --> has all the parent... (1 Reply)
Discussion started by: KNaveen
1 Replies

8. Shell Programming and Scripting

Split file based on records

I have to split a file based on the number of lines, and the command below works fine: split -l 2 Inputfile -d Outputfile My input file contains header, detail and trailer info as below: H D D D D T My split files for the above command contain: First File: H D Second File: ... (11 Replies)
Discussion started by: Ajay Venkatesan
11 Replies
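If the goal is to chunk only the detail records, one common approach is to strip the header and trailer before splitting. A sketch under that assumption (-d numeric suffixes are a GNU split option, as in the command quoted above):
Code:
# Drop the first (header) and last (trailer) lines, then split the detail
# records into 2-line chunks named Outputfile00, Outputfile01, ...
sed '1d;$d' Inputfile | split -l 2 -d - Outputfile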

9. Shell Programming and Scripting

Split records

Hi, I have a file: $ cat test a,1;2;3 b,4;5;6;7 c,8;9 I want to split each record into multiple records based on the semicolons in the 2nd field, i.e. a,1 a,2 a,3 b,4 b,5 (3 Replies)
Discussion started by: Shivdatta
3 Replies
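This is essentially the problem the answer above solves, with ';' instead of ':' as the sub-separator and no header line to skip; a minimal sketch:
Code:
awk -F"," -v OFS="," '{ n = split($2, v, ";"); for (i = 1; i <= n; i++) print $1, v[i] }' test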

10. Shell Programming and Scripting

How to split one record to multiple records?

Hi, I have a tab-delimited file that has multiple store_ids in the first column, separated by pipes. I want to split the file on the basis of store_id (separating the 1st record into 2 records). I tried several options using split, awk etc., but was not able to get proper output. Can... (1 Reply)
Discussion started by: jaggy
1 Replies
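A sketch along the same lines as the answers above, assuming a tab-delimited file whose first field holds pipe-separated store_ids (infile is a placeholder name): split that field and reprint the record once per id.
Code:
awk -F"\t" -v OFS="\t" '
    { n = split($1, ids, "|"); for (i = 1; i <= n; i++) { $1 = ids[i]; print } }
' infile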