Help to improve speed of text processing script


 
# 1  
Old 07-26-2009

Hey everyone,

You should know that I'm relatively new to shell scripting, so my solution is probably a little awkward.

Here is the script:

Code:
#!/bin/bash

live_dir=/var/lib/pokerhands/live

# For every limit subdirectory (sed strips the leading path),
# merge all hand history files of that limit into one working file.
for limit in `find $live_dir/ -type d | sed -e s#$live_dir/##`; do
    cat $live_dir/$limit/* > $limit

    # Read the working file into an array, one line per element.
    declare -a lines
    OIFS="$IFS"
    IFS=$'\n'
    set -f   # disable globbing, cf. help set
    lines=($(< "$limit"))
    set +f
    IFS="$OIFS"

    i=0
    count=0

    while [ $i -lt ${#lines[@]} ]; do

        count=$((count+1))
        touch test_$count.txt

        # Fill the current output file up to ~1 MB, but never cut a block.
        while [ `ls -al test_$count.txt | awk '{print $5}'` -le 1048576 -a $i -lt ${#lines[@]} ]; do
            i=$((i+1))
            # A block is announced by a "#Game No ..." line ...
            if [ `expr "${lines[${i}]}" : '#Game No.*'` != 0 ]; then
                # ... and copied line by line until the "... wins ..." line.
                while [ `expr "${lines[${i}]}" : '.*wins.*'` = 0 ]; do
                    i=$((i+1))
                    echo "${lines[${i}]}" >> test_$count.txt
                done
                echo "" >> test_$count.txt
            fi
        done

    done
done

This script splits an input file into ~1 MB parts without destroying the data blocks.

The data blocks of the input file look something like this:

Code:
#Game No : 8273167998 
***** Hand History for Game 8273167998 *****
$100 USD NL Texas Hold'em - Saturday, July 25, 11:34:58 EDT 2009
Table Deep Stack #1459548 (No DP) (Real Money)
Seat 6 is the button
Total number of players : 6 
Seat 5: Ducilator ( $128.60 USD )
Seat 4: EvilAdj ( $145.66 USD )
Seat 3: Ice81111 ( $78.60 USD )
Seat 6: RicsterM ( $292.48 USD )
Seat 1: Techno1990 ( $141.06 USD )
Seat 2: pdiloop ( $100 USD )
Techno1990 posts small blind [$0.50 USD].
pdiloop posts big blind [$1 USD].
** Dealing down cards **
Ice81111 folds
EvilAdj folds
Ducilator raises [$4 USD]
RicsterM folds
Techno1990 folds
pdiloop folds
Ducilator does not show cards.
Ducilator wins $5.50 USD

It works so far, but the problem is the speed: for a ~20 MB input file it runs for several hours.

What makes it so slow?

Can anyone help me to improve the speed?
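
One way to get a feeling for the cost of the per-line externals is a micro-benchmark like the following sketch (untested; numbers depend on the machine). Every expr call forks a new process, while bash's built-in [[ ]] pattern match does not:

Code:
#!/bin/bash
# Hypothetical micro-benchmark: external expr vs the bash built-in
# [[ ]] pattern match, 10000 calls each. Numbers vary by machine.
n=10000
line='#Game No : 8273167998'

time for ((j = 0; j < n; j++)); do
    expr "$line" : '#Game No.*' > /dev/null
done

time for ((j = 0; j < n; j++)); do
    [[ $line == '#Game No'* ]]
done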
# 2  
Old 07-26-2009
From the code it looks like you ARE an experienced scripter.
But did you try the basics?
Run your script with tracing enabled:
ksh -x
Any of those commands could be the slow one.
See which line takes the most time. Simple.

Also, why do you need the "sed" in the first loop line?
At first glance it looks like you are removing the directory prefix, only to add it back again at the "cat".
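
If the plain trace scrolls by too fast, one trick (a sketch for bash; ksh uses the same PS4 mechanism) is to put a timestamp into PS4 so every traced line shows the elapsed seconds:

Code:
# PS4 is expanded before each traced command, so embedding $SECONDS
# stamps every trace line with the elapsed time; look for big jumps.
PS4='+ ${SECONDS}s: ' bash -x ./split.sh 2> trace.log
less trace.log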
# 3  
Old 07-26-2009
What are the conditions to split the file? There are probably other approaches to speed up the process.

Regards
# 4  
Old 07-26-2009
Quote:
Originally Posted by lorus
Can anyone help me to improve the speed?
This is the price you pay when you don't follow the rules and use temp files and useless commands ... time.

Post a sample data file and the required output, and we can suggest a different approach.
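
To sketch what a different approach could look like (untested, and assuming every block begins with a line starting with "#Game No"): awk can do the whole split in a single pass, letting each part overshoot the 1 MB limit by at most one block.

Code:
awk '
    /^#Game No/ {                  # a new block starts here
        if (size > 1048576) {      # current part is full: rotate
            close(out)
            part++
            size = 0
        }
    }
    {
        out = "test_" (part + 1) ".txt"
        print > out
        size += length($0) + 1     # +1 for the newline
    }
' input.txt

Because awk reads the file once and starts no external process per line, this should finish a 20 MB file in seconds rather than hours.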
# 5  
Old 07-26-2009
Quote:
Originally Posted by danmero
This is the price you pay when you don't follow the rules and use temp files and useless commands ... time.
Yeah, I realized that, and so I'm asking for hints toward a better, faster solution.

Please note that I'm relatively new to shell scripting. In fact, this is only my second attempt at a script.

Quote:
Originally Posted by Franklin52
What are the conditions to split the file? There are probably other approaches to speed up the process.

Regards
I have a large input text file (approx. 20 MB) and want to split it into separate 1 MB output files. The input file consists of data blocks that must not be destroyed during the split (I posted a short sample of such a block in my first post).

Quote:
Originally Posted by danmero
Post a sample data file and the required output, and we can suggest a different approach.
Yeah, that's a good idea. I attached my test environment to this post.
It has the following structure:

Code:
testenv/
|-- output        <-output folder
|   |-- output_1.txt    
|   |-- output_2.txt    <-output files
|   |-- output_3.txt
|   `-- output_4.txt
|-- input.txt        <-input file
`-- split.sh        <-script file

I reduced the script to the essentials and set the output file size to 100 KB to demonstrate the principle and make it easier to see what I want to do.

In this example I let the script run for ~5 minutes; in that time it processed ~400 KB of the 25 MB and produced these 4 files.

Thanks in advance for your help!
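
As a quick sanity check that no block gets cut in the middle, one could (hypothetically) compare block starts and block ends per output file; in the output produced by the script, 'Hand History' marks the first kept line of a block and a line containing ' wins ' its last:

Code:
# each output file should finish every block it starts
for f in output/output_*.txt; do
    starts=$(grep -c 'Hand History' "$f")
    ends=$(grep -c ' wins ' "$f")
    echo "$f: $starts blocks started, $ends blocks finished"
done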
# 6  
Old 07-27-2009
Just run your script as requested earlier.
Code:
ksh -x your.sh

You can find out for yourself which line is taking the most time.
# 7  
Old 07-27-2009
Quote:
Originally Posted by edidataguy
Just run your script as requested earlier.
Code:
ksh -x your.sh

You can find out for yourself which line is taking the most time.
You know the movie The Matrix? That's the speed at which the characters fly across my screen when I use "ksh -x". That really doesn't help me much.

I think the problem is that I run the "expr" command on every single line of the input file.
Each iteration takes ~0.03 s, so on an input file with 800,000 lines the whole process takes ~7 hours.
Each iteration would have to take ~0.0003 s to get an acceptable result.

Is there maybe a faster command than "expr" that can do the same (regexp matching)?
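
For illustration, bash's built-in [[ ]] can do both tests without forking anything; here is a standalone sketch of how the two expr calls could be replaced (untested inside the full script):

Code:
#!/bin/bash
# The same two tests done with the [[ ]] builtin: no external process.

line='#Game No : 8273167998'
if [[ $line == '#Game No'* ]]; then   # replaces: expr "$line" : '#Game No.*'
    echo "block start"
fi

line='Ducilator wins $5.50 USD'
if [[ $line == *wins* ]]; then        # replaces: expr "$line" : '.*wins.*'
    echo "block end"
fi

Together with tracking the output size in a variable instead of calling ls/awk for every line, that should get each iteration well below the 0.0003 s target.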

Last edited by lorus; 07-27-2009 at 05:58 AM.