Shell Programming and Scripting: Post 302519912 by mora, 05-05-2011, 11:10 AM
Multiple records based on :

Hi,

I have the below source data:

Code:
1|2|3|:123:abc|4
1|2|a| | 5
1|2|3|4|:a:s:D.....:n|t

The target data should be:

Code:
1|2|3|:123:abc|4
1|2|3|:123:abc|4
1|2|a| | 5
1|2|3|4|:a:s:D.....:n|t
1|2|3|4|:a:s:D.....:n|t
1|2|3|4|:a:s:D.....:n|t
1|2|3|4|:a:s:D.....:n|t


Based on the :'s in the 5th column, I should print the same record multiple times: if I have 2 :'s I have to print the record two times, if I have 5 :'s I have to print it five times, and so on.

I got the below code from this forum

Code:
awk -F"|" '{s=gsub(":",":",$6); for(i=1;i<=s+1;i++) print $0}' OFS="|"  input_file

The code is splitting the records based on the :'s, but it is removing all the rows that have no :. I need to include those rows as well.

Can anyone tweak this code to include all the rows, while still creating multiple records based on the :'s?
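
For reference, a minimal tweak along these lines might work. This is an untested sketch: it assumes the colon-bearing field is always the next-to-last one, $(NF-1), which is how both colon rows in the sample look, and it prints rows with no : exactly once:

Code:
awk -F'|' '{
    n = gsub(/:/, "&", $(NF-1))      # count the :s in the next-to-last field
    if (n == 0) n = 1                # rows with no : still print once
    for (i = 1; i <= n; i++) print   # repeat the whole record n times
}' OFS='|' input_file

Reassigning a field makes awk rebuild $0 using OFS, so OFS is set to | to keep the delimiters intact. Looping n times instead of n+1 matches the sample output: the 2-colon row prints twice and the 4-colon row prints four times.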

--Mora
 
