Post 303025223 by baris35, Sunday 28th of October 2018, 12:23:15 PM
How to make faster loop in multiple directories?

Hello,
I am on Ubuntu 18.04 Bionic.
I have one shell script, run.sh (which is outside the scope of this topic), that runs files under multiple directories, and one script, control.sh, that controls all the processes running in those directories.
I set up a cron job to check each of them at two-minute intervals. When a process is dead, or when all processes are dead during ramp-up, each process waits for the previous one to complete. That is fine, but because run.sh is time-consuming it keeps me waiting. Writing a separate script dedicated to each folder does not sound logical.
Is there a modern way to do this without listing all the directory names, for example by looping over the subdirectories themselves, as sketched after the control.sh code below?
I just wish I could tell the script: "apply your code to all subfolders".
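
For reference, the two-minute cron schedule mentioned above would look something like the entry below. The path to control.sh is an assumption, not taken from the post; adjust it to wherever the script actually lives.

Code:
# crontab -e entry: run control.sh every two minutes
# (the path is an assumption; point it at the real location of control.sh)
*/2 * * * * /home/boris/public_html/control.sh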

control.sh
Code:
for A in folder1 folder2 folder3 folder4 folder5 folder6 folder7 folder8 \
         folder9 folder10 folder11 folder12 folder13 folder14 folder15 ; do
    cd "$A" || continue                    # skip folders that cannot be entered
    PID=$(cat ./*.pid 2>/dev/null)         # empty if no .pid file is present
    # start run.sh only when no PID was recorded or that process is gone
    if [ -z "$PID" ] || [ ! -e "/proc/$PID" ]; then
        /home/boris/public_html/run.sh "$A"
    else
        echo "process exists, exit now!"
        sleep 2
    fi
    cd ..
done
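
One possible way to avoid hard-coding the folder names is to glob the subdirectories and, optionally, start run.sh in the background so a slow run in one folder does not delay the check of the next. This is only a minimal sketch under the assumptions that every project folder sits directly below the directory control.sh runs from and that each one holds a single *.pid file; the run.sh path is reused from the script above.

Code:
#!/bin/bash
# Sketch: loop over every subdirectory instead of a fixed list.
for A in */ ; do
    A=${A%/}                               # strip the trailing slash left by the glob
    PID=$(cat "$A"/*.pid 2>/dev/null)      # empty if no .pid file is present
    if [ -z "$PID" ] || [ ! -e "/proc/$PID" ]; then
        # start run.sh in the background so the folders are handled in parallel
        /home/boris/public_html/run.sh "$A" &
    else
        echo "$A: process $PID is still running, skipping"
    fi
done
wait    # block until all background run.sh instances have finished

Whether the background "&" is appropriate depends on how much load the machine can handle; dropping it keeps the folders sequential but still removes the hard-coded directory list.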


Thanks in advance
Boris

Last edited by baris35; 10-28-2018 at 02:11 PM.. Reason: typo error fix
 
