Help with File Slow Processing (Shell Programming and Scripting forum)
Post by methyl on Tuesday 28th of June 2011, 10:36:48 AM
(Late post - lost connection, may be out of context)
Quote:
What Operating System and version are you running? It is Sun Solaris.
The version is in the output from the "uname -a" command. It should then be possible to look up whether your Solaris release is an old one with the old Bourne Shell as /bin/sh, or a newer one with the more modern POSIX shell.
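For example (the output below is illustrative only, not taken from your machine), the release field of "uname -a" gives the Solaris version, and listing the shells shows what is installed:

Code:
$ uname -a
SunOS myhost 5.10 Generic_147148-26 sun4u sparc SUNW,Sun-Fire-V240
$ ls -l /bin/sh /usr/xpg4/bin/sh

On Solaris releases up to and including 10, /bin/sh is the old Bourne Shell and /usr/xpg4/bin/sh is the POSIX shell.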

1) The big inefficiency is using a Shell "read" to take records from the data file line by line and then running "awk" several times on each line just to separate the fields.
I see now why you reassigned the channels: you are already using the Shell's standard input to read the list of files.
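By way of illustration (the variable names and field numbers below are invented, not taken from your script), the costly pattern in 1) is one "read" plus several "awk" forks for every input line:

Code:
# Inefficient: several awk processes are started for every line read
while read line
do
        field1=`echo "$line" | awk -F, '{print $1}'`
        field2=`echo "$line" | awk -F, '{print $2}'`
        field3=`echo "$line" | awk -F, '{print $3}'`
        # ... work with $field1 $field2 $field3 ...
done < datafile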

I agree with the ideas behind "agiles" modifications.

2) As you have a list of the required files, use that list.
I would add a test to the script to check whether each file exists.
I see that "agiles" modification is ingenious because it allows for missing files by sending the errors to /dev/null:
Quote:
for FILE in `cd localcurves; ls $ZEROCURVEFILES 2>/dev/null`
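If you prefer an explicit test rather than relying on the redirect to /dev/null, something like this sketch would do (only $ZEROCURVEFILES and the localcurves directory come from your script; the rest is mine):

Code:
for FILE in $ZEROCURVEFILES
do
        if [ ! -f "localcurves/$FILE" ]
        then
                echo "Missing file: localcurves/$FILE" >&2
                continue
        fi
        # ... process localcurves/$FILE ...
done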
3) Invoke awk only once and let it read the data from the files itself.
A lot of the inefficiency comes from the number of times the original script starts "awk" to process the same $line.
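As a rough sketch only (the comma separator and field numbers are assumptions, since I have not seen your record layout), let one awk process read the whole file and do the splitting itself:

Code:
# One awk process reads every record; no per-line forks from the Shell
awk -F, '
{
        field1 = $1
        field2 = $2
        # do the per-record work here, or print what the Shell needs
        print field1, field2
}' "localcurves/$FILE"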

4) Hold the list of 133 files in a real file, not an environment variable, and use "while" rather than "for". Some Bourne shells will not let you have an environment variable that big.
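Something along these lines (the name filelist.txt is invented):

Code:
# filelist.txt holds one filename per line
while read FILE
do
        [ -f "localcurves/$FILE" ] || continue
        # ... process localcurves/$FILE ...
done < filelist.txt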

5) Consider making a version of K2test.sh which only generates the relevant 133 files in /home/sratta/feds/localCurves/curves .

6) I noticed that the variable $LOWERCASEFILE is not set anywhere.

7) If you have a journalling filesystem, it is inefficient to repeatedly create a batch of files and then overwrite them from the Shell. Whether this matters depends on whether K2test.sh removes the old files before generating new ones.
 
