Quote:
Originally Posted by
Scrutinizer
It really depends on your input files and what you do inside the loop. The only thing that will not always work correctly with this loop skeleton is that there should be double quotes around "$FILENAME" .
The input files are fixed-length files whose fields are delimited by double quotes.
I have used double quotes in the form "${FILENAME}".dat, since the filename with its extension isn't stored in any variable.
Inside the loop, the code performs basic extraction from each field using if conditions, cut, and sed, which does not affect the while-read loop at all.
My problem is that it works perfectly for small files, i.e. it reads every line and produces the expected output. The error occurs only for very large files of roughly more than 7,000 lines. So I just wanted to know why it works for small files but not for large ones.
So if you could give me suggestions on what can go wrong, I would check for that.
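For reference, here is a minimal, self-contained sketch of the kind of loop skeleton under discussion. The file name, sample data, and field positions are hypothetical stand-ins, not taken from the actual script; the sketch just shows reading a quote-delimited fixed-length file line by line and extracting fields with sed and cut:

```shell
#!/bin/sh
# Hypothetical sample data; the real files are far larger.
FILENAME="sample"
printf '%s\n' '"ABC0000001PAYLOAD  "' '"DEF0000002PAYLOAD  "' > "${FILENAME}".dat

# IFS= and -r keep leading blanks and backslashes intact; the
# || [ -n "$line" ] clause also processes a final line that has
# no trailing newline.
while IFS= read -r line || [ -n "$line" ]; do
    clean=$(printf '%s\n' "$line" | sed 's/^"//; s/"$//')   # strip enclosing quotes
    key=$(printf '%s\n' "$clean" | cut -c1-3)               # fixed-width field 1
    seq=$(printf '%s\n' "$clean" | cut -c4-10)              # fixed-width field 2
    printf '%s %s\n' "$key" "$seq"
done < "${FILENAME}".dat
rm -f "${FILENAME}".dat
```

Note that the redirection is on the `done` line, so in a POSIX shell the loop runs in the current shell and any variables set inside it remain visible afterwards.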
---------- Post updated at 01:23 PM ---------- Previous update was at 01:20 PM ----------
Quote:
Originally Posted by
methyl
Providing that these are correctly-formatted unix text files the only major stopper might be if individual records are too long or if you are using arrays (whether it be Shell or awk).
What do you mean by correctly-formatted unix text files?
Yes, the individual records contain around 10,000 to 30,000 lines. For records with fewer than 5,000 lines, the code works perfectly, and the output file obtained after consolidating will be around 100,000 to 200,000 lines.
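To check methyl's point about individual records being too long, one quick hypothetical diagnostic (the file name here is made up) is to report the longest line in a data file with awk:

```shell
#!/bin/sh
# Hypothetical sample file to illustrate the check.
printf '%s\n' 'short' 'a much longer record line' > check_sample.dat

# Print the byte length of the longest line; a file whose records are
# far longer than expected would stand out here.
maxlen=$(awk 'length > max { max = length } END { print max }' check_sample.dat)
echo "longest record: $maxlen bytes"
rm -f check_sample.dat
```

Running this over each input file would show whether the large files that fail also contain unusually long records.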