I am new to submitting jobs. I am trying to submit my perl file to the cluster.
This is what my shell file looks like (shell1.sh):
Code:
#!/bin/sh
#$ -S /bin/sh
cd data/projects/mydir/abbc
perl autocorro.pl
followed by qsub shell1.sh
qsub accepts the job, and qstat shows it running, but it doesn't do anything. However, perl autocorro.pl works at the command line; it just doesn't work in 'batch' form. Thanks
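One common cause is the job's working directory and output location: under an SGE-style scheduler the batch job may not start where qsub was run, and anything the script prints goes to shell1.sh.o&lt;jobid&gt; / shell1.sh.e&lt;jobid&gt; files rather than the terminal. A minimal sketch, keeping the poster's relative path and names, that makes a failed cd visible:

```shell
# write the job script; `|| exit 1` makes a failed cd show up in the .e file
cat > shell1.sh <<'EOF'
#!/bin/sh
#$ -S /bin/sh
cd data/projects/mydir/abbc || exit 1
perl autocorro.pl
EOF
# submit with: qsub shell1.sh
# then check shell1.sh.o* and shell1.sh.e* for output and errors
```

If the cd fails, the .e file will say so, which narrows the problem down quickly.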
---------- Post updated at 10:06 PM ---------- Previous update was at 09:39 PM ----------
I've tried making a shell script:
Code:
#
perl autocorro.pl
qstat shows the job just keeps running and running, even though perl autocorro.pl at the command line takes 5 seconds.
---------- Post updated at 10:18 PM ---------- Previous update was at 10:06 PM ----------
Is there any way I can submit a job to a remote machine and return immediately, without waiting for the job to finish?
What I mean is this...using rsh I can submit a job to a remote machine like this:
rsh remotemac1 job.sh
But this doesn't return until the job has finished and as a... (3 Replies)
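rsh waits because it holds the connection open until the remote command's stdout/stderr close. Backgrounding the command on the remote side with nohup and detached stdio lets rsh return at once. A sketch using the poster's names (remotemac1, job.sh), wrapped in a hypothetical helper:

```shell
# run a command on a remote host and return immediately:
# nohup + background + redirected stdio let rsh disconnect right away
submit_remote() {
    rsh "$1" "nohup $2 >/dev/null 2>&1 </dev/null &"
}
# usage: submit_remote remotemac1 ./job.sh
```

Redirecting stdin as well as stdout/stderr matters; if any of the three stays attached, rsh keeps waiting.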
I need to execute 5 jobs at a time in the background and get the exit status of all of them. I wrote the small script below, but I'm not sure this is the right way to do it. Any ideas? Please help.
$cat run_job.ksh
#!/usr/bin/ksh
####################################
typeset -u SCHEMA_NAME=$1
... (1 Reply)
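One portable pattern is to record each background job's PID and then wait on each PID individually, since `wait <pid>` returns that job's exit status. A minimal sketch (the job commands are placeholders for the real scripts):

```shell
#!/usr/bin/ksh
# run all given commands in the background, then collect each exit status;
# returns non-zero if any job failed
run_jobs() {
    pids=""
    for cmd in "$@"; do
        sh -c "$cmd" &
        pids="$pids $!"
    done
    fail=0
    for pid in $pids; do
        wait "$pid" || fail=1   # wait <pid> reports that job's exit status
    done
    return "$fail"
}
# usage: run_jobs ./job1.sh ./job2.sh ./job3.sh ./job4.sh ./job5.sh
```

A bare `wait` with no arguments would also block for all jobs, but it always returns 0, which is why the per-PID loop is needed to catch failures.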
I'm new to this and didn't know what my problem is called but here it is:
A program called "prepdata" is run, which asks the user to enter an <input> file for the data to be taken from. However, it won't accept that input filename as an argument:
$ prepdata <input> will NOT work
... (2 Replies)
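If prepdata prompts for a file when run alone, the `<input>` in its usage message is likely a placeholder for a filename fed to standard input, not literal syntax; typed literally, the angle brackets are shell redirection operators and cause an error. The shell's `<` feeds a file to a program's stdin. A demo using cat to stand in for prepdata (input.dat is a made-up file name):

```shell
printf 'some data\n' > input.dat   # a stand-in data file
cat < input.dat                    # the program reads input.dat on stdin
```

So the intended invocation may simply be: prepdata < input.dat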
I have several HP/UX nodes running Sendmail 8.13 ...some work fine, some don't.
When an email is coming in, the 'DATA' command never ends. The other side of the connection gets to the point where it enters the '.' on a line by itself, but sendmail doesn't accept it... in fact it keeps on holding... (8 Replies)
Hello,
I am running GNU bash, version 3.2.39(1)-release (x86_64-pc-linux-gnu). I have a specific question pertaining to waiting on jobs run in sub-shells, based on the max number of parallel processes I want to allow, and then wait... (1 Reply)
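Since bash 3.2 has no `wait -n` (that builtin option arrived in bash 4.3), a common fallback is to launch sub-shells in batches of N and wait for the whole batch before starting the next. A minimal sketch (the sub-shell body is a placeholder for the real work; results.txt is a made-up file):

```shell
#!/bin/bash
MAX=4          # maximum parallel sub-shells per batch
count=0
: > results.txt
for i in 1 2 3 4 5 6 7 8 9 10; do
    ( echo "job $i done" >> results.txt ) &   # the real work goes here
    count=$((count + 1))
    if [ "$count" -ge "$MAX" ]; then
        wait                 # block until the current batch finishes
        count=0
    fi
done
wait                         # pick up the final partial batch
```

This keeps at most MAX jobs running, at the cost of some idle time at the end of each batch; on bash 4.3+ `wait -n` would refill slots as soon as any one job finishes.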
Hi,
I am trying to submit a job to a queue on a cluster. When I run the job (a Python script) from the command line it runs without putting python at the start. The script imports everything from another configuration file (.config), but when I submit it to the queue it tells me there is no module... (0 Replies)
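A frequent cause is that the batch job's environment (PATH, PYTHONPATH, which python binary runs) differs from the login shell's, so a module found interactively isn't found in batch. One way to confirm is to have the job record its environment for comparison (batch_env.txt is a made-up file name):

```shell
# inside the job script: dump the batch environment before running the script
{
    echo "PATH=$PATH"
    echo "PYTHONPATH=$PYTHONPATH"
    command -v python || echo "python not on PATH"
} > batch_env.txt 2>&1
# then compare batch_env.txt against the same commands run interactively
```

If the two differ, exporting the missing variables at the top of the job script is the usual fix.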
Hello,
I want to submit my awk script to a cluster queue, as my job takes about forty minutes to finish and I cannot run it on the main node.
My awk script is like the following, and I have three files, so I write:
qsub -q short.q Myscript.awk file1 file2 file3
It submits the work into... (1 Reply)
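qsub generally expects a shell job script, not an awk program followed by its data files, so passing Myscript.awk with arguments directly is likely being misinterpreted by the scheduler. A common workaround is a small wrapper script (names taken from the post; the #$ directives assume an SGE-style scheduler):

```shell
# wrap the awk invocation in a shell script the scheduler understands
cat > run_awk.sh <<'EOF'
#!/bin/sh
#$ -S /bin/sh
#$ -cwd
awk -f Myscript.awk file1 file2 file3
EOF
# submit with: qsub -q short.q run_awk.sh
```

The -cwd directive makes the job run in the submission directory, so the relative file names still resolve.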
Hi all,
Today, I want to ask how to submit multiple qsub jobs.
I want to submit 100 .sh files for the simulations.
The names of the files are
run_001.sh,
run_002.sh,
run_003.sh,
.....
.....
run_100.sh
Submitting each file manually is time-consuming; hence, I want to make another .sh file... (11 Replies)
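The zero-padded names can be generated with printf in a loop, so a single script submits all 100 jobs. Sketched as a function here so the submission command sits in one place:

```shell
# submit run_001.sh .. run_100.sh in one pass
submit_all() {
    i=1
    while [ "$i" -le 100 ]; do
        qsub "$(printf 'run_%03d.sh' "$i")"
        i=$((i + 1))
    done
}
# usage: submit_all
```

printf's %03d produces the 001..100 padding, which avoids hard-coding a hundred file names.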
I have multiple jobs, and each job depends on another job.
Each job generates a log. If a job completed successfully, its log file ends with a JOB ENDED SUCCESSFULLY message; if it failed, it ends with JOB ENDED with FAILURE.
I need some help on how to start.
Attaching the JOB dependency... (3 Replies)
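One way to start: a small helper that inspects a job's log for the success marker, so each dependent job runs only after its predecessor's log ends well. The marker strings are taken from the post; the log file names are made up:

```shell
# return 0 if the given log ends with the success message
job_succeeded() {
    tail -1 "$1" | grep -q 'JOB ENDED SUCCESSFULLY'
}
# example chain: run job2 only if job1 succeeded
# job_succeeded job1.log && ./job2.sh
```

Chaining these checks according to the dependency graph (run B when all of B's prerequisites pass the check) gives a simple driver script.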
Discussion started by: santoshkumarkal
condor_continue(1) General Commands Manual condor_continue(1)
Name
condor_continue - continue suspended jobs from the Condor queue
Synopsis
condor_continue [-help | -version]
condor_continue [-debug] [-pool centralmanagerhostname[:portnumber] | -name scheddname] [-addr "<a.b.c.d:port>"] cluster | cluster.process | user | -constraint expression | -all
Description
condor_continue continues one or more suspended jobs from the Condor job queue. If the -name option is specified, the named condor_schedd is
targeted for processing. Otherwise, the local condor_schedd is targeted. The job(s) to be continued are identified by one of the job
identifiers, as described below. For any given job, only the owner of the job or one of the queue super users (defined by the QUEUE_SUPER_USERS
macro) can continue the job.
Options
-help
Display usage information
-version
Display version information
-pool centralmanagerhostname[:portnumber]
Specify a pool by giving the central manager's host name and an optional port number
-name scheddname
Send the command to a machine identified by scheddname
-addr <a.b.c.d:port>
Send the command to a machine located at "<a.b.c.d:port>"
-debug
Causes debugging information to be sent to stderr, based on the value of the configuration variable TOOL_DEBUG
cluster
Continue all jobs in the specified cluster
cluster.process
Continue the specific job in the cluster
user
Continue jobs belonging to specified user
-constraint expression
Continue all jobs which match the job ClassAd expression constraint
-all
Continue all the jobs in the queue
Exit Status
condor_continue will exit with a status value of 0 (zero) upon success, and it will exit with the value 1 (one) upon failure.
Examples
To continue all jobs except for a specific user:
% condor_continue -constraint 'Owner =!= "foo"'
Author
Condor Team, University of Wisconsin-Madison
Copyright
Copyright (C) 1990-2012 Condor Team, Computer Sciences Department, University of Wisconsin-Madison, Madison, WI. All Rights Reserved.
Licensed under the Apache License, Version 2.0.
See the Condor Version 7.8.2 Manual or http://www.condorproject.org/license for additional notices. condor-admin@cs.wisc.edu
September 2012 condor_continue(1)