Help to improve speed of text processing script


 
# 8  
Old 07-27-2009
Quote:
You know the movie matrix?
Since you have reminded me of the movie 'Matrix', I'm all charged up to answer your question, at least to an extent :)

Code:
${#lines[@]}

If the array is not modified inside the loop, assign its length to a variable once and reuse that, instead of computing it on every iteration.
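
For example (a minimal sketch; the array name lines and the counter i are taken from the script under discussion):

Code:
lines_tot=${#lines[@]}   # computed once, before the loop
i=0
while [ $i -lt $lines_tot ]; do
    # ... process "${lines[$i]}" ...
    i=$((i+1))
done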

Code:
echo "${lines[${i}]}" >> test_$count.txt
done
echo "" >> test_$count.txt

Writing to a file from inside the loop body will greatly reduce the performance of the script; what happens on every write call is ...
Code:
open file
write data
close file

Ideally what should be done is

Code:
open file
write data
write data
.
.
.
close file

Instead, redirect the output at the outer block, something like:
Code:
while [ condition ]
do
# check and do some processing
done > $output_file

With this method there will be roughly n + 2 calls into the file routines for 'n' units of write (one open, n writes, one close),

instead of

Code:
while [ condition ]
do
# check and do some processing
# write to the file:  echo ... >> $output_file
done

With this method there will be 3n calls for 'n' units of write (an open, a write, and a close on every iteration), which scales badly as 'n' grows ...
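
A quick way to see the difference (a rough benchmark sketch; the line count and file names are just examples, not from the thread):

Code:
# 3n file calls: the file is opened and closed on every iteration
time for i in $(seq 1 100000); do echo "$i" >> per_write.txt; done

# ~n + 2 file calls: one open, n writes, one close
time for i in $(seq 1 100000); do echo "$i"; done > one_open.txt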
# 9  
Old 07-27-2009
lorus,

why are we trying to write a split script?! Let the split command do the job...

Code:
split -db 1m InFile OutFile

Output: it creates files like
OutFile00
OutFile01
...
# 10  
Old 07-27-2009
Quote:
Originally Posted by matrixmadhan
Since you have reminded me of the movie 'Matrix', I'm all charged up to answer your question, at least to an extent :)
Hehe, that's good to know. I just have to bring up something about the Matrix in every post to get your help :-D

Your suggestions make absolute sense, so I rewrote it to this:

Code:
#!/bin/bash

declare -a lines
OIFS="$IFS"
IFS=$'\n'
set -f   # disable filename globbing (cf. help set)
lines=($(< "input.txt"))
set +f
IFS="$OIFS"

splitsize=102400
i=0
count=0
lines_tot=${#lines[@]}

while [ $i -le $lines_tot ]; do
    count=$((count+1))
    touch output/output_$count.txt

    # fill the current output file until it reaches the split size
    while [ `ls -al output/output_$count.txt | awk '{print $5}'` -le $splitsize -a $i -le $lines_tot ]; do
        i=$((i+1))
        if [ `expr "${lines[${i}]}" : '#Game No.*'` != 0 ]; then
            while [ `expr "${lines[${i}]}" : '.*wins.*'` = 0 ]; do
                i=$((i+1))
                echo "${lines[${i}]}"
            done
            echo ""
        fi
    done >> output/output_$count.txt
    
done
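
As an aside, the `ls -al | awk` pipeline in the inner condition forks two processes per size check. If GNU stat is available (an assumption, not something confirmed in the thread), a single call does the same job:

Code:
# same size check with one fork per test instead of an ls | awk pipeline
while [ `stat -c %s output/output_$count.txt` -le $splitsize -a $i -le $lines_tot ]; do
    ...
done >> output/output_$count.txt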

But the file open/close overhead doesn't seem to be the time thief.

A simple

Code:
while [ $i -le $lines_tot ]; do

    echo "${lines[${i}]}" >> output_test.txt
    i=$((i+1))

done

processes the whole file in just a few seconds.

So the time thief must be the "expr" command inside the inner loop. Is there an equivalent command that is faster?

Quote:
lorus,

why are we trying to write a split script..!? let split command do the job...
Because "split" cuts at static points and that would destroy the structure of my file, doesn't it?
# 11  
Old 07-27-2009
Quote:
Hehe, that's good to know. I just have to bring up something about the Matrix in every post to get your help :-D
Good one! :)

Quote:
But the file open/close overhead doesn't seem to be the time thief.
This definitely has an impact, and the cost scales up on larger files.
Quote:
So the time thief must be the "expr" command inside the inner loop. Is there an equivalent command that is faster?
What exactly is the operation performed? Can you please give an example?
# 12  
Old 07-27-2009
My input file consists of blocks like the following

Code:
#Game No : 8273167998 
***** Hand History for Game 8273167998 *****
$100 USD NL Texas Hold'em - Saturday, July 25, 11:34:58 EDT 2009
Table Deep Stack #1459548 (No DP) (Real Money)
Seat 6 is the button
Total number of players : 6 
Seat 5: Ducilator ( $128.60 USD )
Seat 4: EvilAdj ( $145.66 USD )
Seat 3: Ice81111 ( $78.60 USD )
Seat 6: RicsterM ( $292.48 USD )
Seat 1: Techno1990 ( $141.06 USD )
Seat 2: pdiloop ( $100 USD )
Techno1990 posts small blind [$0.50 USD].
pdiloop posts big blind [$1 USD].
** Dealing down cards **
Ice81111 folds
EvilAdj folds
Ducilator raises [$4 USD]
RicsterM folds
Techno1990 folds
pdiloop folds
Ducilator does not show cards.
Ducilator wins $5.50 USD

First I search for the start of a block with this expression: '#Game No.*'

Code:
if [ `expr "${lines[${i}]}" : '#Game No.*'` != 0 ]; then


Then I print all following lines until this expression matches: '.*wins.*'


Code:
while [ `expr "${lines[${i}]}" : '.*wins.*'` = 0 ]; do
    i=$((i+1))
    echo "${lines[${i}]}"
done

The loop around this checks whether the current output file has reached the split limit:

Code:
while [ `ls -al output/output_$count.txt | awk '{print $5}'` -le $splitsize -a $i -le $lines_tot ]; do
       ...
done >> output/output_$count.txt

So the `expr "${lines[${i}]}" : '.*wins.*'` command is executed for every single line of the input file. That is ~800,000 times;
at 0.03 sec per iteration, that means ~7 hours for the whole process.
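
For what it's worth, both of those tests can be done with bash's built-in pattern matching, which avoids forking an external expr for every line. A sketch of the same two checks (variable names taken from the script above):

Code:
# built-in equivalent of: expr "${lines[${i}]}" : '#Game No.*'  (anchored at the start)
if [[ "${lines[$i]}" == '#Game No'* ]]; then
    # built-in equivalent of: expr "${lines[${i}]}" : '.*wins.*' = 0  (no match anywhere)
    while [[ "${lines[$i]}" != *wins* ]]; do
        i=$((i+1))
        echo "${lines[$i]}"
    done
    echo ""
fi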
# 13  
Old 07-27-2009
Have you considered csplit? Assuming the average size of your blocks is 300 bytes, 1024000/300 ≈ 3413 blocks per 1 MB:
Code:
csplit -k myinputfilename  '/^#Game/-1{3413}'

# 14  
Old 07-27-2009
Quote:
Originally Posted by jim mcnamara
Have you considered csplit? Assuming the average size of your blocks is 300 bytes, 1024000/300 ≈ 3413 blocks per 1 MB:
Code:
csplit -k myinputfilename  '/^#Game/-1{3413}'

I forgot to say that the length of each block is different.

The posted one is just an example.
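
If the goal is size-limited output files that never break a game block, a single awk pass can handle variable-length blocks in one go. A sketch under stated assumptions (the ~100 KB threshold and output names are illustrative, and it copies whole blocks rather than filtering lines the way the bash script does):

Code:
awk 'BEGIN { n = 1; limit = 102400 }
     # start a new output file only at a block boundary, once the limit is hit
     /^#Game No/ && size > limit { close("output_" n ".txt"); n++; size = 0 }
     { print > ("output_" n ".txt"); size += length($0) + 1 }
' input.txt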