EBCDIC File Split Based On Record Key
Post 302963046 by Don Cragun on Tuesday 22nd of December 2015 07:08:54 PM
You can convert the entire EBCDIC file to ASCII just using:
Code:
dd if=EBCDICfile of=ASCIIfile conv=ascii

but, with packed decimal fields in the COBOL input file, you may end up with null bytes in the output file. And, you can't have null bytes in a text file. If you aren't working on text files, many of the standard utilities produce undefined results.
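If you want to see whether the conversion actually left null bytes behind before handing the file to text utilities, a quick count works (a sketch; it assumes your tr implementation passes NUL bytes through, which most do, and prints the number of NUL bytes in the converted file, so anything other than 0 means it is not a text file):
Code:
# count NUL bytes in the converted file; non-zero output means it is not a text file
tr -d -c '\000' < ASCIIfile | wc -c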

But, you can use cut and paste even if the files being processed are not text files. So, after converting your file to ASCII, you could walk through the file in a loop, starting with offset=1:
  • grabbing two bytes (with cut) to determine record type,
  • based on the type, grabbing x more bytes (again with cut) to complete the record you started reading,
  • writing the complete x+2 byte record to the appropriate output file (again, based on record type), and
  • incrementing offset by x+2.
As long as you don't modify the parts of the record that contain packed decimal data, the conversion back from ASCII to EBCDIC should leave the packed decimal data in the resulting EBCDIC file intact.
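Putting those pieces together, here is a minimal sketch of that loop. Everything specific in it is an assumption: the two record type codes (10 and 20), their record lengths, and the output file names are placeholders for whatever your actual record layout defines. It also pulls byte ranges out with dd skip/count rather than cut, because cut appends a newline to everything it writes and that would change the byte layout you want to preserve for the conversion back to EBCDIC:
Code:
#!/bin/sh
# Sketch only: the record type codes (10, 20), their lengths, and the output
# file names are invented placeholders -- substitute your actual record layout.

dd if=EBCDICfile of=ASCIIfile conv=ascii

size=$(( $(wc -c < ASCIIfile) ))        # total number of bytes to walk through
offset=1                                # 1-based offset, as described above

while [ "$offset" -le "$size" ]
do
        # grab two bytes to determine the record type
        type=$(dd if=ASCIIfile bs=1 skip=$((offset - 1)) count=2 2>/dev/null)

        case $type in                   # x = bytes remaining after the type
        10)     x=98  out=type10.ascii ;;       # hypothetical 100-byte record
        20)     x=48  out=type20.ascii ;;       # hypothetical  50-byte record
        *)      echo "unknown record type '$type' at offset $offset" >&2
                exit 1 ;;
        esac

        # append the complete x+2 byte record to the output file for its type
        dd if=ASCIIfile bs=1 skip=$((offset - 1)) count=$((x + 2)) \
                >> "$out" 2>/dev/null

        offset=$((offset + x + 2))
done

# convert each piece back; the untouched packed decimal bytes round-trip intact
for f in type10.ascii type20.ascii
do
        dd if="$f" of="${f%.ascii}.ebcdic" conv=ebcdic
done

With bs=1, dd reads one byte at a time, so this will be slow on a large file; it is only meant to show the flow of the loop, not to be the fastest way to do it.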
 
