Hi, sorry if my question was unclear or if I misread yours.
Let me restate it. The file looks like the example below. A single file contains almost 1000K records, all separated by SEQUENCE NUMBER, and every SEQUENCE NUMBER is unique within that file. There may be anywhere from 0 to 1000 lines between two SEQUENCE NUMBERs; in other words, one SEQUENCE NUMBER: can contain several lines of sub-records, or none at all.
My requirement is to fetch a range of records from that file by SEQUENCE NUMBER:, e.g. from SEQUENCE NUMBER: 56000000001 to 56000000005 (fetching 5 records, or fetching 100K records, depending on the range of SEQUENCE NUMBERs).
Does your sequence have a start line or end line with a common pattern?
Hi,
Yes, each SEQUENCE starts with a line like REC# 1 carrying SEQUENCE NUMBER: 56000000001 and ends with * **** END OF SEQUENCE REPORT ***, as shown below, but there is no start/end marker of the kind you mentioned.
For example, two records look like this:
******************************************
1 xxxxxxxxxxxxx xxxxxxxx#1000215 xxxxx REC# 1 PAGE: 1
xxxxxxx xxxxxxA SEQUENCE NUMBER: 56000000001 CID NUMBER: 06000000001
******************************************
* **** END OF SEQUENCE REPORT ***
******************************************
1 xxxxxxxxxxxxx xxxxxxxx#1000215 xxxxx REC# 2 PAGE: 1
xxxxxxx xxxxxxA SEQUENCE NUMBER: 56000000002 CID NUMBER: 06000000002
******************************************
* **** END OF SEQUENCE REPORT ***
******************************************
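Given those markers, one pass with awk can do the range fetch. This is a sketch, assuming every record ends with the END OF SEQUENCE REPORT line and contains exactly one SEQUENCE NUMBER: field; audit.txt is a two-record stand-in for the real file:

```shell
# Two-record sample standing in for the real 1000K-record file
cat > audit.txt <<'EOF'
******************************************
1 xxxxxxxxxxxxx xxxxxxxx#1000215 xxxxx REC# 1 PAGE: 1
xxxxxxx xxxxxxA SEQUENCE NUMBER: 56000000001 CID NUMBER: 06000000001
******************************************
* **** END OF SEQUENCE REPORT ***
******************************************
1 xxxxxxxxxxxxx xxxxxxxx#1000215 xxxxx REC# 2 PAGE: 1
xxxxxxx xxxxxxA SEQUENCE NUMBER: 56000000002 CID NUMBER: 06000000002
******************************************
* **** END OF SEQUENCE REPORT ***
EOF

# Buffer each record; print it only if its sequence number is inside [lo,hi]
awk -v lo=56000000001 -v hi=56000000001 '
{ buf = buf $0 ORS }                        # accumulate the current record
/SEQUENCE NUMBER:/ {
    s = $0
    sub(/.*SEQUENCE NUMBER:[ ]*/, "", s)    # strip everything before the number
    sub(/[^0-9].*/, "", s)                  # strip everything after it
    keep = (s + 0 >= lo && s + 0 <= hi)
}
/END OF SEQUENCE REPORT/ {                  # record boundary: flush or discard
    if (keep) printf "%s", buf
    buf = ""; keep = 0
}
' audit.txt > range.txt
```

Because the file is scanned once and only one record is buffered at a time, memory stays flat even when the range covers 100K records.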
I was wondering if anyone could explain to me how to split a variable-length EBCDIC file into separate files based on the record key. I have the COBOL layout, so I need to split the file into 13 different EBCDIC files so that I can run each one through a C++ converter I have, and get the... (11 Replies)
input.csv:
Field1,Field2,Field3,Field4,Field5
abc ,123 ,xyz ,000 ,pqr
mno ,123 ,dfr ,111 ,bbb
output:
Field2,Field4
123 ,000
123 ,111
How do I fetch the values of Field4 where Field2='123'?
I don't want to fetch the values based on column position; instead I want to... (10 Replies)
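A header-driven awk sketch for this: column numbers are looked up from the header line rather than hard-coded, so reordered columns keep working. The sample assumes the duplicated Field4 header in the example was meant to be Field5:

```shell
cat > input.csv <<'EOF'
Field1,Field2,Field3,Field4,Field5
abc ,123 ,xyz ,000 ,pqr
mno ,123 ,dfr ,111 ,bbb
EOF

awk -F',' '
NR == 1 {
    for (i = 1; i <= NF; i++) col[$i] = i   # header name -> column index
    k = col["Field2"]; v = col["Field4"]
    print "Field2,Field4"
    next
}
{
    f = $k; gsub(/ /, "", f)                # trim padding before comparing
    if (f == "123") print $k "," $v
}
' input.csv > output.csv
```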
Hi,
I have a huge 7 GB file which has around 1 million records, and I want to split it into 4 files containing around 250k messages each.
Please help, as the split command cannot work here: it might cut a message in the middle of its tags.
Format of the file is as below
<!--###### ###### START-->... (6 Replies)
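A tag-aware split can be sketched in awk: it only rotates to the next output file on a START line, so no message is ever cut in half. The marker regex and the name big.xml are placeholders for the real ones:

```shell
# Sample with 8 messages; the real input would be the 7 GB file
for i in 1 2 3 4 5 6 7 8; do
    printf '<!--###### ###### START-->\n<msg>%s</msg>\n' "$i"
done > big.xml

total=$(grep -c 'START-->' big.xml)   # number of messages
per=$(( (total + 3) / 4 ))            # messages per output file, rounded up

awk -v per="$per" '
/START-->/ && ++msg > per { msg = 1; part++ }    # rotate only on a boundary
{ f = sprintf("part%d.xml", part + 1); print > f }
' big.xml
```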
Hi,
I have the following command in place
nawk -F, '!a[$1,$2,$3]++' file > file.uniq
It has been working perfectly as per requirements, removing duplicates based on the first 3 fields only. Recently it has started giving the error below:
bash-3.2$ nawk -F, '!a[$1,$2,$3]++'... (17 Replies)
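If the error turns out to be memory-related (the array gains one entry per distinct 3-field key, which can exhaust nawk on a huge file), a sort-based alternative runs in near-constant memory. A sketch; note that sort -u keeps one line per key in sorted order rather than preserving the original first occurrence:

```shell
printf 'a,b,c,1\na,b,c,2\nx,y,z,9\n' > file

# awk version: one array entry per distinct (field1,field2,field3) key
awk -F, '!a[$1,$2,$3]++' file > file.uniq

# sort-based version: same keys, bounded memory, but sorted output
sort -t, -u -k1,1 -k2,2 -k3,3 file > file.uniq.sorted
```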
I was given a data file that I need to split into multiple lines/records based on a key word. The problem is that it is 2.5GB or bigger, and everything I try in perl or sed causes a segmentation fault. Can someone give me some other ideas?
The data is of the form:... (5 Replies)
Hi folks,
Below is the content of a file 'tmp.dat', and I want to keep only one unique record per key (the first column). However, the surviving record should be the last one for that key.
302293022|2|744124889|744124889
302293022|3|744124889|744124889
302293022|4|744124889|744124889
302293022|5|744124889|744124889... (4 Replies)
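One awk pass handles this: the stored line for a key is overwritten every time the key repeats, so the last record wins, while the order[] array preserves the original key order for output:

```shell
cat > tmp.dat <<'EOF'
302293022|2|744124889|744124889
302293022|3|744124889|744124889
302293022|4|744124889|744124889
302293022|5|744124889|744124889
EOF

awk -F'|' '
!($1 in last) { order[++n] = $1 }   # remember first-seen order of keys
{ last[$1] = $0 }                   # later lines overwrite earlier ones
END { for (i = 1; i <= n; i++) print last[order[i]] }
' tmp.dat > keep.txt
```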
Can anyone help filter the unique records for the example below? Thank you very much.
Input file
20090503011111|test|abc
20090503011112|tet1|abc|def
20090503011112|test1|bcd|def
20090503011131|abc|abc
20090503011131|bbc|bcd
20090503011152|bcd|abc
20090503011151|abc|abc... (8 Replies)
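If "unique" here means one line per value of the first field, a one-liner does it. A sketch assuming the first occurrence per key should survive (the exact rule for collisions like the two 20090503011112 lines isn't stated in the question):

```shell
cat > infile <<'EOF'
20090503011111|test|abc
20090503011112|tet1|abc|def
20090503011112|test1|bcd|def
20090503011131|abc|abc
20090503011131|bbc|bcd
20090503011152|bcd|abc
EOF

# Print a line only the first time its field-1 value is seen
awk -F'|' '!seen[$1]++' infile > uniq.out
```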
Dear friends,
I receive the following files into an FTP location on a daily basis:
-rw-r----- 1 guest ftp1 5021 Aug 19 09:03 CHECK_TEST_Extracts_20080818210000.zip
-rw-r----- 1 guest ftp1 2437 Aug 20 05:15 CHECK_TEST_Extracts_20080819210000.zip
-rw-r----- 1 guest ... (2 Replies)
Hi Folks,
I need to compare two very large files (each containing at least 70k records) using awk or sed. The comparison needs to be done with respect to a 'key'. For example:
File1
**********
1234|TONY|Y75634|20/07/2008
1235|TINA|XCVB56|30/07/2009... (13 Replies)
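The standard two-file awk idiom scales fine to 70k records: the first pass loads File1's keys (field 1) into a hash, the second streams File2 and prints lines whose key also appears in File1. Negate the final test (`!($1 in key)`) to get the mismatches instead:

```shell
cat > File1 <<'EOF'
1234|TONY|Y75634|20/07/2008
1235|TINA|XCVB56|30/07/2009
EOF
cat > File2 <<'EOF'
1234|TONY|Y75634|20/07/2008
1236|MARK|ABCD12|01/01/2009
EOF

# NR==FNR is true only while reading the first file
awk -F'|' 'NR == FNR { key[$1]; next } $1 in key' File1 File2 > matched.txt
```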
Hi,
I have one huge record, and I know that each logical record in the file is 550 bytes long. How do I parse the individual records out of the single huge record?
Thanks, (4 Replies)