Post 302516119 by verge, 21 April 2011 (Shell Programming and Scripting)
Script Optimization - large delimited file, for loop with many greps

Since there are approximately 75K gsfiles and hundreds of stfiles per gsfile, this script can take hours to run. How can I rewrite it so that it's much faster? I'm not as familiar with Perl, but I'm open to all suggestions.

Code:
ls file.list > $split
for gsfile in `cat $split`; do
  csplit -ks -n 6 -f "$gsfile.ST" "$gsfile" '/^ST/' '{100000}' 2>>$diagnostic
  # delim and gscode depend only on the gsfile, so compute them once per gsfile.
  # Note: the stderr redirection must go inside the backquotes; a trailing
  # 2>>$diagnostic after the closing backquote applies to the assignment, not grep.
  delim=`LC_ALL=C grep "^GS" "$gsfile" 2>>$diagnostic | cut -c3`
  gscode=`LC_ALL=C grep "^GS" "$gsfile" 2>>$diagnostic | cut -d "$delim" -f3`
  for stfile in `ls "$gsfile".ST* | sort -n`; do
    supcd=`LC_ALL=C grep "^N1.SU" "$stfile" 2>>$diagnostic | cut -d "$delim" -f5 | head -1`
    sellcd=`LC_ALL=C grep "^N1.SE" "$stfile" 2>>$diagnostic | cut -d "$delim" -f5 | head -1`
    firponum=`LC_ALL=C grep "^IT1" "$stfile" 2>>$diagnostic | cut -d "$delim" -f10 | head -1`
    invtl=`LC_ALL=C grep "^TDS" "$stfile" 2>>$diagnostic | cut -d "$delim" -f2 | tr -cd '[:digit:]'`
    # I have about ten more greps here
    echo "$gscode,$supcd,$sellcd,$firponum,$invtl" >> $detail_file
    rm -f "$stfile" 2>>$diagnostic
  done
done
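For comparison, the per-record greps above can usually collapse into a single awk pass over each gsfile: awk can pick the delimiter off the GS line itself and reset its variables at every ST segment, so csplit and the temporary stfiles go away entirely. This is only a sketch against a hypothetical demo file (`/tmp/gsfile_demo`) covering the five fields shown, not the full ten-plus greps:

```shell
# Build a small demo gsfile (hypothetical path) matching the sample input below.
cat > /tmp/gsfile_demo <<'EOF'
GS*IN*TPU*TPM*110303*0634*65433*X*002000
ST*810*0001
N1*SU*TPUNAME*92*TPUCD21
N1*SE*SELNAME*92*789
IT1*1*8*EA*909234.12**BP*PARTNUM123*PO*PONUM342342*PL*526
TDS*32424214
SE*7*0001
EOF

# One awk pass replaces csplit plus all the per-stfile greps.
awk '
NR == 1 && /^GS/ {
    FS = substr($0, 3, 1)   # delimiter is the character right after "GS"
    $0 = $0                 # force a re-split of this line with the new FS
    gscode = $3
}
/^ST/  { supcd = sellcd = firponum = invtl = "" }   # reset per ST record
/^N1/  && $2 == "SU" && supcd  == "" { supcd  = $5 }
/^N1/  && $2 == "SE" && sellcd == "" { sellcd = $5 }
/^IT1/ && firponum == ""             { firponum = $10 }
/^TDS/ { invtl = $2; gsub(/[^0-9]/, "", invtl) }
/^SE/  { print gscode "," supcd "," sellcd "," firponum "," invtl }
' /tmp/gsfile_demo | tee /tmp/gsfile_demo.out
```

Looping over the 75K gsfiles would stay in the shell, but each file is then read once in total instead of once per grep per stfile.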

Here are two example input files. The delimiter can be any non-word character.
Code:
 
gsfile_1
GS*IN*TPU*TPM*110303*0634*65433*X*002000 
ST*810*0001  
N1*SU*TPUNAME*92*TPUCD21 
N1*SE*SELNAME*92*789 
IT1*1*8*EA*909234.12**BP*PARTNUM123*PO*PONUM342342*PL*526 
IT1*2*3*EA*53342.65**BP*PARTNUM456*PO*PONUM31131*PL*528 
TDS*32424214  
SE*7*0001
ST*810*0002  
N1*SU*TPUNAME*92*TPUCD43 
N1*SE*SELNAME*92*543 
DTM*011*110302 
IT1*1*10*EA*909234.12**BP*PARTNUM575*PO*PONUM1253123*PL*001  
IT1*2*15*EA*53342.65**BP*PARTNUM483*PO*PONUM646456*PL*002 
TDS*989248095 
SE*8*0002 
GE*2*65433
gs_file2
GS~IN~TPT~TPM~110302~2055~2321123~X~003010~
ST~810~000027324~
N1~SU~TPMNAME~92~TPUCD87
N1~SE~SELMNAME~92~23234
IT1~001~3450~EA~1234.67~~BP~PARTNUM6546-048~PO~PONUM99484~PL~235~
TDS~425961150~
SE~6~2321123~
GE~1~3201~

The output should look like this:
TPU,TPUCD21,789,PONUM342342,32424214
TPU,TPUCD43,543,PONUM1253123,989248095
TPT,TPUCD87,23234,PONUM99484,425961150
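Incidentally, since the delimiter is always the third character of the GS segment, it only needs to be read once per gsfile rather than inside the inner loop. A minimal sketch of just that step (demo path hypothetical):

```shell
# The GS segment's third character is the element separator, whatever
# non-word character the trading partner chose.
printf '%s\n' 'GS~IN~TPT~TPM~110302~2055~2321123~X~003010~' > /tmp/gs_head_demo
delim=$(head -1 /tmp/gs_head_demo | cut -c3)
printf '%s' "$delim" > /tmp/gs_head_demo.delim
echo "delimiter is: $delim"
```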

I hope this isn't too long! I'm new and not yet familiar with the forum posting style. Thanks so much for your help.
 
