Shell Programming and Scripting: Help to improve speed of text processing script
Post 302338102 by lorus, Monday 27 July 2009, 05:37 AM
Quote:
Originally Posted by matrixmadhan
Since you have reminded me of the movie 'Matrix', I'm all charged up to answer your question, to an extent at least.
Hehe, that's good to know. I just have to bring up something about the Matrix in every post to get your help :-D

Your suggestion makes absolute sense, so I rewrote it like this:

Code:
#!/bin/bash

# read the whole input file into an array, one element per line
declare -a lines
OIFS="$IFS"
IFS=$'\n'
set -f                           # disable globbing while the lines are split (cf. help set)
lines=($(< "input.txt"))
set +f
IFS="$OIFS"

splitsize=102400                 # target size of each output file, in bytes
i=-1                             # incremented before every read below, so the first line read is lines[0]
count=0
lines_tot=${#lines[@]}

while [ $i -lt $((lines_tot-1)) ]; do
    count=$((count+1))
    touch "output/output_$count.txt"

    # keep appending complete game blocks until the current file exceeds $splitsize
    while [ $(ls -al "output/output_$count.txt" | awk '{print $5}') -le $splitsize -a $i -lt $((lines_tot-1)) ]; do
        i=$((i+1))
        # a block starts at a "#Game No ..." header ...
        if [ $(expr "${lines[$i]}" : '#Game No.*') != 0 ]; then
            # ... and ends with the first following line that contains "wins"
            while [ $(expr "${lines[$i]}" : '.*wins.*') = 0 ]; do
                i=$((i+1))
                echo "${lines[$i]}"
            done
            echo ""
        fi
    done >> "output/output_$count.txt"

done

But that file open/close process doesn't seem to be the time thief.

A simple

Code:
i=0
while [ $i -lt $lines_tot ]; do
    echo "${lines[$i]}" >> output_test.txt
    i=$((i+1))
done

processes the whole file in just a few seconds.

So the time thief must be the "expr" command inside the inner loop. Is there an equivalent command that is faster?
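For reference, bash's built-in [[ ... ]] pattern matching can do both of those tests without forking an expr process for every line. A minimal sketch of the inner loop rewritten that way (same input.txt and patterns as above; not benchmarked):

Code:
#!/bin/bash
# Sketch only: the game-extraction loop with the two expr tests replaced by
# bash's built-in [[ ... == pattern ]] matching, so no external process is
# forked for every line.
mapfile -t lines < "input.txt"        # bash 4; or fill the array as in the script above
i=-1
lines_tot=${#lines[@]}

while [ $i -lt $((lines_tot-1)) ]; do
    i=$((i+1))
    # was: [ `expr "${lines[$i]}" : '#Game No.*'` != 0 ]
    if [[ ${lines[$i]} == "#Game No"* ]]; then
        # was: [ `expr "${lines[$i]}" : '.*wins.*'` = 0 ]
        while [[ ${lines[$i]} != *wins* ]]; do
            i=$((i+1))
            echo "${lines[$i]}"
        done
        echo ""
    fi
done

A POSIX case statement (case "${lines[$i]}" in '#Game No'*) ... esac) would avoid the fork in the same way if bash-specific syntax is unwanted.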

Quote:
lorus,

Why are we trying to write a split script? Let the split command do the job...
Because "split" cuts at static points and that would destroy the structure of my file, doesn't it?
 
