I have written quite a few shell scripts for working with files.
But recently a job requirement came up to write a script that converts a report into a pipe-delimited feed file, and the volume of data in it is going to be around a few million records.
1) Can someone help me understand how performance will hold up if I write a shell script that reads the records from the report line by line and creates an output feed as the volume grows?
2) If performance degrades (as it has in my past experience), what are the ways to bring it back?
3) Or, for this kind of job, should we look at more full-fledged languages like Perl or Python instead of shell scripting, and if so, what is the performance difference?
FYI ...
EXAMPLE
The report would look like this -->
-------------------
EMP = 1234
NAME = UNIXNEWBEE
DOB = 12/1984
EMP = 5341
NAME = UNIXGURU
DOB = 12/1964
... so on
The output feed should look like this
-------------------------
1234|UNIXNEWBEE|12/1984
5341|UNIXGURU|12/1964
... so on
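For a report shaped exactly like the sample above, one common approach is a single awk pass rather than a shell while-read loop, since awk processes the whole file in one process with no per-line fork overhead. This is only a sketch under that assumption; the file names report.txt and feed.txt are placeholders, and it assumes each record is always EMP, NAME, DOB in that order:

```shell
# Build a small sample report (same shape as the one above) -- placeholder data.
printf '%s\n' 'EMP = 1234' 'NAME = UNIXNEWBEE' 'DOB = 12/1984' \
              'EMP = 5341' 'NAME = UNIXGURU'   'DOB = 12/1964' > report.txt

# Split each line on " = "; emit one pipe-delimited record per DOB line.
awk -F' = ' '
    $1 == "EMP"  { emp  = $2 }                   # remember employee id
    $1 == "NAME" { name = $2 }                   # remember name
    $1 == "DOB"  { print emp "|" name "|" $2 }   # DOB closes a record: emit it
' report.txt > feed.txt

cat feed.txt
```

If the fields can appear in a different order or some may be missing, the awk script would need to track which fields it has seen and flush on a record boundary instead, but the one-pass idea stays the same.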
NOTE: I am working on a multi-user UNIX server remotely through telnet.
Thanks in advance for the help
Nageswara