Can an expert kindly write an efficient Linux ksh script that will split a large 2 GB text file into two files?
Here are a couple of sample records from that text file:
"field1","field2","field3",11,22,33,44
"TG","field2b","field3b",1,2,3,4
The fields in each row are delimited by commas.
The script should check the first field of each row for the value "TG". If the first field is "TG", that row should be written to a TG.txt file; if the first field is not "TG", the row should be written to a NoTG.txt file.
So the result is a new TG.txt with the following row:
"TG","field2b","field3b",1,2,3,4
and a new NoTG.txt with the following row:
"field1","field2","field3",11,22,33,44
Thanks in advance. This forum rocks - with lots of helpful heroes!!