Splitting a text file into smaller files with awk, how to create a different name for each new file
Hello,
I have some large text files that look like,
There can be thousands of records, and there is no fixed length for each record in terms of the number of lines or tag fields between MEND and $$$$. Each record ends with the $$$$ terminator. I am trying to divide the large files into a number of smaller files, each containing the same number of records.
This code attempts to do this,
by storing rows in OUT[] until a counter is reached (the desired number of records in each subfile) and then printing the rows, clearing the array, and resetting the counter. This also attempts to trap if EOF is reached before the counter reaches the set number.
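The original code block did not survive in the thread; here is a minimal reconstruction of the approach described above. The names cpf, OUT, and output_file, and the sample input, are assumptions made for the sketch (cpf is set to 2 just for the demo):

```shell
# Sample input: three records, each ending with the $$$$ terminator
printf '%s\n' 'a 1' 'MEND' '$$$$' 'b 2' 'MEND' '$$$$' 'c 3' 'MEND' '$$$$' > input.txt

# Buffer rows in OUT[] until cpf records have been seen, then flush
awk -v cpf=2 -v output_file=subfile.txt '
{ OUT[++n] = $0 }                      # store each row
/^\$\$\$\$$/ { recs++ }                # count completed records
recs == cpf {                          # counter reached: print and reset
    for (i = 1; i <= n; i++) print OUT[i] > output_file
    n = 0; recs = 0
}
END {                                  # trap rows left over if EOF comes first
    for (i = 1; i <= n; i++) print OUT[i] > output_file
}' input.txt
```

As the post goes on to note, every flush goes to the same output_file, which is exactly the limitation being asked about.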
The obvious problem is that there is no way to change the output file name for each subsequent write, so I will only end up with one file. I think I could change the value of $output_file from within the awk code, but I think the awk here runs in a different subshell than bash, so I don't think that will work.
If I could run the awk only on specific lines of the file, I think I could call awk from a bash loop and make that work, but I am guessing there is an easier way. I am running this in 32-bit cygwin, so I have everything available from that kit.
Suggestions would be appreciated.
LMHmedchem
Note: the csplit syntax shown was incorrect. Ignore the example and see Don Cragun's post below.
Have you looked at the csplit command? It works by context (context split), and the split is based on a string or a pattern, not length of records or block sizes. You can make it use a fixed number of records per output file as well. Your requirement is for a pattern, I think.
e.g.,
You get to specify the output filenames, so a quick read of the man page is in order, but they are generally something like xx01, xx02 by default.
Change the prefix, and if there are literally thousands of possible output files, use the -n option to ask for 4 or 5 digits in the output-file numbers.
FWIW, it sounds like you need an sqlite db or something similar; maintaining thousands of files is a nightmare waiting to happen.
Last edited by jim mcnamara; 12-10-2018 at 02:49 PM..
Reason: Error.
Hi Jim,
The standard csplit synopsis is more like: csplit [-ks] [-f prefix] [-n number] file arg1 ... argn
Note that the patterns with optional offsets (i.e., /BRE/[offset]) come after the file operand; not before it. And, since the pattern is a basic regular expression, the dollar-sign is a special character and needs to be escaped to be taken literally (instead of as a match for the end of the line). And, the offset is needed in this case because without one, the operand /\$\$\$\$/ will start the next record with the line that matches that BRE; instead of ending the current record with that line.
Then note that each time the pattern is matched, a new output file is created. So getting each output file to contain six records is going to require an iterative process where each pass produces seven output files (the first six with one record each and the seventh with any remaining records). Then the first six will need to be combined into a real output file and the loop will then need to be repeated if there was a seventh output file.
Without specifying options for output filenames and the number of digits in the output filenames, the command:
would produce the files xx00, xx01, xx02, xx03, xx04, and xx05, containing the first six records, respectively, from the file named file, and produce a file named xx06 containing any remaining records. But this will only work if there are at least seven records in your input file. When this finally produces an error, the last input file will contain no more than six input records, but you will need further processing to find out exactly how many, if that is important to you.
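A runnable reconstruction of that command, with sample data; the seven-record input, the -s (silent) flag, and the record contents are additions for the demo:

```shell
# Build seven sample records, each ending with the $$$$ terminator
awk 'BEGIN { for (i = 1; i <= 7; i++) printf "record %d\nMEND\n$$$$\n", i }' > sample.txt

# Split after each terminator line; {5} repeats the pattern five more times,
# so the first six records land in xx00..xx05 and the rest go to xx06
csplit -s sample.txt '/\$\$\$\$/+1' '{5}'
```

Note the +1 offset: without it, each matching $$$$ line would start the next file instead of ending the current one.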
Hi LMHmedchem,
I would tend to just use awk for this. It is perfectly capable of creating output filenames for each record (or for each set of records in this case), counting records and grouping them together in the output, and it is also capable of reading an input file without creating a pipeline and wasting time reading and writing the file unnecessarily with cat:
Note that I couldn't use split as a variable name in awk because split is the name of a standard awk function. (Some versions of awk might allow you to have a variable and a function with the same name, but that is not required by the standards.)
The function used here that creates the output filenames makes the assumption that the filename you want should contain digits digits (i.e., the number of digits given by the digits variable) immediately before the last four characters of output_file (i.e., before the .txt that is assumed to be at the end of the output filename). If you want to use a filename extension that is a different length, you'll have to adjust the fname() function. Since you said there can be thousands of records in an input file, I set the default for digits at 4 (which will work for up to 9,999 output files even if cpf is set to 1).
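The code block from this post was lost in the thread; the following is a sketch along the lines described above, not the original. The fname() helper, the cpf, digits, and output_file variables, and the sample data follow the description, but their exact form here is an assumption (cpf=2 and five records, just for the demo):

```shell
# Sample input: five records ending with the $$$$ terminator
awk 'BEGIN { for (i = 1; i <= 5; i++) printf "data %d\nMEND\n$$$$\n", i }' > input.txt

# Group cpf records per output file, numbering the files out0001.txt, out0002.txt, ...
awk -v cpf=2 -v digits=4 -v output_file=out.txt '
function fname(n,    pfx, sfx) {
    # insert a zero-padded sequence number before the assumed 4-character ".txt"
    pfx = substr(output_file, 1, length(output_file) - 4)
    sfx = substr(output_file, length(output_file) - 3)
    return pfx sprintf("%0" digits "d", n) sfx
}
{ rec = rec $0 "\n" }                    # accumulate the current group of records
/^\$\$\$\$$/ && ++cnt == cpf {           # cpf complete records gathered: write them
    printf "%s", rec > fname(++files)
    close(fname(files))
    rec = ""; cnt = 0
}
END {
    if (rec != "") {                     # short final group: EOF before cnt hit cpf
        printf "%s", rec > fname(++files)
        close(fname(files))
    }
}' input.txt
```

No cat pipeline is needed; awk reads input.txt directly, and close() keeps the number of simultaneously open files down when there are many output files.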