For this instance, I too think era had the better solution.
I had started working on my own approach before I thought of trying what era did. I still wanted to finish mine, since grabbing the line number from the output of cat -n and using it to decide the next step seemed like an interesting exercise.
Also, era's solution would probably run a lot faster than mine!
Just for completeness, I should note that the modulo arithmetic in the Perl script I posted was a major brain fart. Here's a hopefully corrected version, with an explanation.
I threw in the mapping of arbitrary file names in the array @n for show.
The BEGIN block creates an array @file of four file handles (indexed 0 through 3 -- Perl arrays start at zero) and a mapping @m of which line number to print to which handle. Somewhat confusingly, the first entry in the mapping (array index zero) is for line numbers 10, 20, 30, ..., while the second is for line numbers 1, 11, 21, etc.
In the main loop (outside the BEGIN block) we simply calculate the remainder (modulo) of the line number $. divided by 10 (not 9!) and use that as an index into @m to get the handle index, and then, through another level of indexing, print to the handle it points to.
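To make the indexing concrete, here is a minimal sketch of just the modulo arithmetic described above; the values in @m are invented for illustration, and I use plain arrays as stand-in "buckets" instead of real file handles:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical mapping: entry 0 routes lines 10, 20, 30, ...;
# entry 1 routes lines 1, 11, 21, ...  The values themselves are invented.
my @m = (3, 0, 0, 0, 1, 1, 1, 2, 2, 2);

# Return the output-handle index for a given input line number.
sub handle_index {
    my ($lineno) = @_;
    return $m[ $lineno % 10 ];   # modulo 10, not 9!
}

# Demo: route line numbers 1..10 into four buckets instead of file handles.
my @bucket;
for my $lineno (1 .. 10) {
    push @{ $bucket[ handle_index($lineno) ] }, $lineno;
}
```

In the real script the buckets would be IO::File handles opened in the BEGIN block, and the loop body would be something like `print { $file[ $m[ $. % 10 ] ] } $_;` with `$.` supplying the current line number.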
Also for the record, the shell version will have a problem with input that contains backslashes. Change read to read -r, or if your shell doesn't support that, see whether you have the line command instead. Also, for maintainability it would be better to use higher-numbered file descriptors: file descriptors 1 and 2 are reserved for standard output and standard error, as you probably know. (I wanted to keep them in sync to make the script easier to follow, but it's no fun if you try to debug it and lose all your error messages into a file someplace.)
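To illustrate both fixes at once, here is a small sketch (not the original script -- the file names, the round-robin split, and the sample input are all invented) using read -r and descriptors 3 through 6 so that 1 and 2 stay free:

```shell
#!/bin/sh
# Hypothetical sketch: round-robin lines into four files on descriptors
# 3-6, leaving 1 and 2 for normal output and errors.
exec 3> part0.txt 4> part1.txt 5> part2.txt 6> part3.txt

n=0
while IFS= read -r line; do            # -r: backslashes stay literal
    case $((n % 4)) in
        0) printf '%s\n' "$line" >&3 ;;
        1) printf '%s\n' "$line" >&4 ;;
        2) printf '%s\n' "$line" >&5 ;;
        3) printf '%s\n' "$line" >&6 ;;
    esac
    n=$((n + 1))
done <<'EOF'
alpha\beta
bravo
charlie
delta
echo
EOF

exec 3>&- 4>&- 5>&- 6>&-
```

With plain read (no -r) the first sample line would come out mangled, since the backslash would be treated as an escape character.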
As usual, Radoulov's solution is impressive, though a bit hard to follow. Apparently the names of the output files will be the input file name with a number suffix added.
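I don't have that script in front of me, but for readers who want to see the naming scheme in action, one plausible awk sketch (the two-lines-per-file split and the sample input are my own invention) would be:

```shell
# Hypothetical sketch: output files named after the input file plus a
# number suffix, starting a new file every two lines.
printf 'a\nb\nc\nd\n' > input.txt
awk '{ print > (FILENAME "." (1 + int((NR - 1) / 2))) }' input.txt
```

This produces input.txt.1 and input.txt.2, each holding two lines of the original.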
I speculate that mawk keeps the file handles open just in case, i.e. secretly does the file handle juggling that I did explicitly in the Perl script. (Incidentally, you don't really need IO::File for that, but it makes it a lot more readable -- the stuff you have to do to manipulate bare file handles in bare Perl is arcane even by Perl standards.)
Last edited by era; 10-01-2008 at 04:04 AM..
Reason: Note on file descriptor numbering in sh implementation
As far as I know, [ngm]awk should keep the files open until the end of the program or until an explicit close(filename) call:
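As a quick illustration of that behaviour (the file names and the alternating split here are invented), awk reuses the same handle every time the same redirection target is seen, and close() releases it explicitly:

```shell
# Sketch: awk opens "chunk0" and "chunk1" once each, keeps both handles
# open across all input lines, and closes them explicitly at the end.
seq 1 10 | awk '{
    out = "chunk" (NR % 2)   # alternate between two output files
    print > out              # the same open handle is reused each visit
}
END { close("chunk0"); close("chunk1") }'
```

Without the implicit handle caching, appending to the same file name on every line would require reopening it each time, which is exactly the juggling the Perl version does by hand.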
Reading the strace output, I notice some differences in the timing of the read/write calls.
I'm quite sure that the output below does not show all of the time-consuming events.