Script Optimization - large delimited file, for loop with many greps
Since there are approximately 75K gsfiles and hundreds of stfiles per gsfile, this script can take hours. How can I rewrite this script so that it's much faster? I'm not as familiar with Perl, but I'm open to all suggestions.
Here's an example of an input file. The delimiters can be any non-word character.
The output should look like this:
TPU,TPUCD21,789,PONUM342342,32424214
TPU,TPUCD43,543,PONUM1253123,989248095
TPT,TPUCD87,23234,PONUM99484,425961150
I hope this isn't too long! I'm new and not yet familiar with the forum posting style. Thanks so much for your help.
foo | grep | cut | sed | really | long | pipe | chain is never efficient, and you're doing this on almost every line. You've also got a lot of useless use of backticks, and useless use of cat. Whenever you have for file in `cat foo`, you could have done the same thing much more efficiently.
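For instance, with a while read loop (dosomething is just a stand-in for whatever you run on each file):

    # read foo line by line; no cat, no backticks, no word-splitting
    while read file
    do
            dosomething "$file"
    done < foo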
You can also redirect stderr once for the whole loop, instead of doing a special redirection for each and every individual command.
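For example (the commands inside the loop and the errors.log name are placeholders):

    while read file
    do
            grep 'ST' "$file"
            sed 's/old/new/' "$file"
    done < foo 2> errors.log    # one redirection covers every command in the loop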
You can also set LC_ALL once instead of doing so for each and every individual command.
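For example, once at the top of the script (C is just one locale choice):

    # every command the script runs inherits this setting
    export LC_ALL=C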
In your defense, you've been forced to deal with input data that looks like line noise! I don't entirely understand what you're doing. Why are you csplitting on 10000 and /^ST/? Are two non-word characters in a row, **, supposed to imply a blank record between them? Finally, what is your system, and what is your shell? That will have a big effect on the tools available to you.
I've started writing a solution in awk.
I'm using Korn Shell on Microsoft Windows Services for UNIX 3.5, which supports:
Sun Microsystems Solaris versions 7 and 8
Red Hat Linux version 8.0
IBM AIX version 5L 5.2
Hewlett-Packard HP-UX version 11i
Thanks for the tip about the backticks, the stderr redirect, and the while read ... I'll change that.
Yeah, the file is cumbersome. As for the splitting, each /^ST/ is a new group; I had taken the {100000} to be the max number of times to execute the csplit.
Yes, two non-word characters in a row is a blank field.
Hope this clarifies the structure of the file ... the initial file is approx 3 million lines.
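For reference, the kind of csplit call being discussed looks like this (a sketch, with bigfile as a placeholder name):

    # split at every line starting with ST, repeating up to 100000 times;
    # -k keeps the pieces even if the input ends before the count does
    csplit -k bigfile '/^ST/' '{100000}'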
How about this:
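A minimal sketch of the idea, assuming any single non-word character delimits a field and two in a row mean an empty field:

    # split each line on single non-word characters, keep empty fields,
    # and print the pieces as comma-separated output
    awk 'BEGIN { OFS = "," }
    {
            n = split($0, f, /[^A-Za-z0-9_]/)
            line = f[1]
            for (i = 2; i <= n; i++)
                    line = line OFS f[i]
            print line
    }' inputfile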
Not complete, since your example isn't either, but it's much more efficient than grep | cut for every line, and might be enough to get you started.
Quote:
I'm using Korn Shell on Microsoft Windows Services for UNIX 3.5
Blech. Poor imitation of a korn shell.
And since you're not actually running UNIX, my awk script of course can't run as a script like I intended. Small difference, though. Just run it like awk -f script.awk inputfile
Whoa, is your data actually indented like that? That changes things.
Would I call this awk script from within my ksh script?
Yes. You could dump everything I wrote into a text file named script.awk (name unimportant), then run awk on that file in your ksh script with awk -f script.awk datafile
Or you could embed the entire thing into your ksh script, like this:
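(A sketch with a placeholder awk body, just to show the embedding.)

    #!/usr/bin/ksh
    # the entire awk program sits inside one single-quoted string
    awk '
    BEGIN { OFS = "," }
    {
            # ...awk processing goes here...
            print $1, $2
    }
    ' datafile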
If your shell supports multi-line strings, that is.
I'll be happy to help with any troubles you have improving it, but it's probably best for you to match it to your needs. I'm not as likely to notice if things go just slightly wrong.
Thanks a lot Corona, I really appreciate your help ... I have a few other parsing issues, but solving this piece helps me a great deal ... I knew there was a better way than grep|cut etc.
I just started scripting by stringing commands together, and I'm noticing more and more that that's the wrong approach.