I have not gotten much of an answer/solution to my posting. Here I break the question down and hope to get some help.
1. How can I use AWK to read two records at a time and keep looping to the next two when the condition is met?
position 1-10 --> unique key that identifies whether there is a secondary record or not
--> if more than one record shares the same value in this portion,
there is a secondary record;
otherwise, there is only a primary record
position 11 ---> 0 means the record is the secondary
1 means the record is the primary
Output file ----> starts at position 12
Segment definition ---> starts at position 36 (TTTT)
XXXX###
XXXX ---> segment ID, 4 bytes, e.g. TTTT or SH01
### ---> total length of the segment; 020 means the segment is 20 bytes long
The string below has two segments:
the first has id TTTT and is 15 bytes long;
the second has id SH01 and is 8 bytes long
TTTT015cvsdbfffSH01008X
ENDS segment format ---> ENDS010###
ENDS010 --> segment id and length
### represents the total number of segments in the current record.
For example, ENDS010004 means there are 4 segments in the record, including the ENDS010 segment itself.
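Counting those segments mechanically is the core of the reformat. A minimal sketch, assuming (as the TTTT015 example suggests) that the 3-digit length includes the 4-byte id and the length field itself:

```shell
# count segments in a record built from 4-byte ids + 3-digit total lengths
echo 'TTTT015cvsdbfffSH01008X' | awk '{
  n = 0; pos = 1
  while (pos + 6 <= length($0)) {        # need at least id(4) + len(3) left
    seglen = substr($0, pos + 4, 3) + 0  # 3-digit length follows the id
    if (seglen < 7) break                # malformed: shorter than its own header
    n++
    pos += seglen                        # length covers the whole segment
  }
  print n " segments"
}'
```

On the example string this reports 2 segments, matching the TTTT/SH01 breakdown above.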
-----------------
rules
1. if the group on positions 1-10 has one record, reformat the string by cutting off the first 11 bytes and output it
2. if the group on positions 1-10 has two records, then
for the record with value 1 in position 11,
reformat the string by
a. cutting off the first 11 bytes
b. recounting the number of segments
c. appending an ENDS010### segment at the end of the string
for the record with value 0 in position 11,
reformat the string by
a. cutting the first two segments from the primary record and appending them at the beginning of the output string
b. recounting the number of segments
c. appending an ENDS010### segment at the end of the string
----------------
Attached are two example files, one for input and one for output
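A minimal sketch of the grouping part, assuming the records are already sorted so that a secondary immediately follows its primary (the keys and payloads here are made up for illustration):

```shell
# pair records that share positions 1-10; position 11 is 1 = primary, 0 = secondary
printf '%s\n' \
  'AAAAAAAAAA1PRIMARYBODY' \
  'AAAAAAAAAA0SECONDBODY' \
  'BBBBBBBBBB1LONEPRIMARY' |
awk '{
  key  = substr($0, 1, 10)    # grouping key, positions 1-10
  flag = substr($0, 11, 1)    # 1 = primary, 0 = secondary
  body = substr($0, 12)       # output payload starts at position 12
  if (flag == "1") { pkey = key; pbody = body; print body }
  else if (key == pkey)
    print pbody " + " body    # secondary paired with its stored primary
}'
```

The real rules (moving the first two segments, recounting, appending ENDS010###) would go where the `print` statements are; this only shows the two-record pairing loop.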
All,
I have a task to search through several hundred files and extract duplicate detail records and keep them grouped with their header record. If no duplicate detail record exists, don't pull the header. For example, an input file could look like this:
input.txt
HA
D1
D2
D2
D3
D4
D4... (17 Replies)
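One way to sketch this, assuming header lines start with H, detail lines repeat verbatim when duplicated, and each header's details follow it:

```shell
# print each header only with its duplicated detail lines
printf '%s\n' HA D1 D2 D2 D3 D4 D4 |
awk '
  function flush(   i, out) {
    for (i = 1; i <= n; i++)
      if (cnt[lines[i]] > 1) out = out lines[i] "\n"   # keep only duplicates
    if (out != "") printf "%s\n%s", hdr, out           # header only if any survived
  }
  /^H/ { flush(); hdr = $0; n = 0; split("", cnt); next }
  { cnt[$0]++; lines[++n] = $0 }
  END { flush() }
'
```

On the sample input this emits HA, then D2 twice and D4 twice, dropping the unique D1 and D3.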
Hi,
I have a file that is one huge record, and I know each record in it is 550 bytes long. How do I parse the individual records out of the single huge record?
Thanks, (4 Replies)
Hi,
I have a file which is one huge record. I know each record should be 550 bytes long. How do I parse the records out of the one huge record? (1 Reply)
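Both of the 550-byte questions above can be handled with `fold -b`, which cuts a byte stream into fixed-width lines. A sketch with 10-byte records so the result is easy to see; the real files would use `fold -b -w 550 < file`:

```shell
# split one long unterminated record into fixed-size lines (10 bytes here, 550 in the real case)
printf 'ABCDEFGHIJ0123456789abcdefghij' | fold -b -w 10
```

This prints the 30-byte stream as three 10-byte lines.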
I have a file temp.dat. The contents of this file is as follows
abcdefgh
abcdefgh
abcdefgh
abcdefgh
abcdefgh
abcdefgh
The multiple records in this file need to be converted into a single record:
abcdefgh abcdefgh abcdefgh abcdefgh abcdefgh abcdefgh (2 Replies)
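`paste -s` joins all lines of its input into one record; with a space as the delimiter it produces exactly the single-record output shown above:

```shell
# collapse every line of temp.dat-style input into one space-separated record
printf '%s\n' abcdefgh abcdefgh abcdefgh abcdefgh abcdefgh abcdefgh | paste -s -d' ' -
```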
Hi All,
I have *.csv files in a dir /pro/lif/dow (pipe-delimited files). These files have 8 columns, and the 6th column (CDR_LOGIC) records are populated as below. I need to incorporate the below logic in all the *.csv files.
11||:ColumnA||:ColumnB
123||:ColumnA
IIF(:ColumnA = :ColumnC then... (6 Replies)
Hi Friends,
source
....
col1,col2,col3
a,b,1;2;3
here the column delimiter is a comma (,).
We don't know the maximum length of col3: now we have 1;2;3, but next time I might receive 1;2;3;4;5; etc...
required output
..............
col1,col2,col3
a,b,1
a,b,2
a,b,3
please give me... (5 Replies)
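A sketch of that split, assuming the third comma field is the only one holding a ;-separated list:

```shell
printf 'col1,col2,col3\na,b,1;2;3\n' |
awk -F, '
  NR == 1 { print; next }            # pass the header through unchanged
  {
    n = split($3, v, ";")            # col3 is a ;-separated list of any length
    for (i = 1; i <= n; i++) print $1 "," $2 "," v[i]
  }'
```

Because `split` returns the count, the same line works whether col3 holds 3 values or 50.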
I need to make one record into multiple records based on the occurrence column in the record, and change the date. For example, the first record below has 5, so I need to create 5 records from the one and spread the date across 5 months. The occurrence can be any number.
I am unable to come up with a script. Can someone help?
... (5 Replies)
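The thread does not show the record layout, so the format here is only a guess: assuming a comma-separated record of id,occurrence,YYYY-MM, one awk pass can fan the record out and advance the month:

```shell
# hypothetical layout: id,occurrence,YYYY-MM; emit one record per occurrence
echo 'A123,3,2015-01' |
awk -F, '{
  split($3, d, "-")                  # d[1] = year, d[2] = month
  for (i = 0; i < $2; i++) {
    m = d[2] + i
    y = d[1] + int((m - 1) / 12)     # carry into the year when the month overflows
    m = (m - 1) % 12 + 1
    printf "%s,1,%04d-%02d\n", $1, y, m
  }
}'
```

The field names and layout are invented for illustration; only the fan-out-and-increment pattern carries over to the real file.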
Hi,
I have one tab-delimited file that has multiple store_ids in the first column, separated by pipes. I want to split the file on the basis of store_id (separating the 1st record into 2 records).
I tried several options like the below using split, awk, etc., but was not able to get proper output. Can... (1 Reply)
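A sketch assuming the first tab-separated column looks like S1|S2 and each id should get its own copy of the rest of the record (the ids and data here are made up):

```shell
# duplicate a row once per pipe-separated store_id in column 1
printf 'S1|S2\tdata1\n' |
awk -F'\t' '{
  n = split($1, ids, "[|]")          # "[|]" matches a literal pipe portably
  for (i = 1; i <= n; i++) print ids[i] "\t" $2
}'
```

The bracket expression avoids `|` being read as a regex alternation by `split`.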
Hi Everyone,
I have the below record set. The file is a fixed-width file.
101newjersyus 20150110
101nboston us 20150103
102boston us 20140106
102boston us 20140103
I need to group records based on the first 3 characters (in our case, 101 and 102),
sort the last 8 digits in ascending order, and print only... (4 Replies)
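A decorate-sort-undecorate sketch for that grouping, assuming the date is always the last whitespace-separated field:

```shell
printf '%s\n' \
  '101newjersyus 20150110' \
  '101nboston us 20150103' \
  '102boston us 20140106' \
  '102boston us 20140103' |
awk '{ print substr($0, 1, 3), $NF, $0 }' |   # prepend group key and date
sort -k1,1 -k2,2n |                           # group first, then date ascending
cut -d' ' -f3-                                # strip the helper columns
```

Prepending the sort keys sidesteps the fact that the number of fields varies between records.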
Hi,
I have a backup report that unfortunately has some kind of hanging-indent thing where the first line contains one more column than the others.
I managed to get the output I wanted using awk, but I'm wondering whether there is a shorter way of doing it with the same awk.
Below is what... (2 Replies)
Discussion started by: newbie_01
LEARN ABOUT DEBIAN
shelr
SHELR(1)
NAME
shelr - screencasting for shell ninjas
DESCRIPTION
Shelr records terminal output and can replay it.
You can also share your records at http://shelr.tv/ or other services.
SYNOPSIS
shelr command [id]
COMMANDS
record records your terminal until you type exit or Ctrl+D, and stores it in $HOME/.local/share/shelr/
list lists all your shellcasts.
play plays a local or remote shellcast.
push publishes your shellcast.
dump dumps a shellcast as json to the current directory.
EXAMPLES
Record your shellcast:
$ shelr record
$ # do something ...
$ exit
List recorded shellcasts:
$ shelr list
Play local shellcast:
$ shelr play 1293702847 # play your own local record
$ shelr play record.json # created with shelr dump
$ shelr play last # will play most recent local record
Play remote shellcast:
$ shelr play http://shelr.tv/records/4d1f7c3890820d6144000002.json
Publish your shellcast:
$ shelr push 1293702847
$ shelr push last # will push most recent local record
Setup recording backend:
$ shelr backend script
$ shelr backend ttyrec
BUGS
Windows heh.
COPYRIGHT
(C) 2010, 2011, 2012 Antono Vasiljev self@antono.info
Licensed under GPLv3+
SEE ALSO script(1), scriptreplay(1), ttyrec(1), ttyplay(1)
April 2012 SHELR(1)