Top Forums > Shell Programming and Scripting > shell script queries: $home; broadcast ping - Post 27381 by bionicfysh on Friday 30th of August 2002, 09:27:17 AM
08-30-2002
shell scripting...

Dear all,

Thanks so much for the feedback.
I have tried the following:

vi new

ls $HOME
ls ~
ls ~toto

they all work as expected

however:

user=toto
ls ~$user

it returns an error: can't ls to ~toto (the tilde is not expanded)


so the script can find ~sos10,
and can treat the variable user as a string...
but it cannot combine the two...

I hear that this is not possible in bash,
and that it may be possible in csh, tcsh, or maybe sh.

But I always thought that running sh script would execute it in pure sh...?
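For anyone trying to reproduce this, here is a minimal sketch of what is going on, plus two common workarounds. This assumes a bash-like shell on Linux; getent may not exist on older Unixes, and the user name is just an example:

```shell
#!/bin/bash
# Tilde expansion runs *before* parameter expansion, so the shell never
# sees ~toto -- it sees the literal word ~$user and leaves it unexpanded.
user=toto
echo ~$user            # prints the literal string: ~toto

# Workaround 1: ask the passwd database for the home directory.
home=$(getent passwd "$user" | cut -d: -f6)
ls "$home"

# Workaround 2: force a second round of expansion with eval
# (only safe when $user is trusted input).
eval home="~$user"
ls "$home"
```

The same expansion order applies in sh, bash and ksh, which is why running the script with sh does not change the result; csh and tcsh substitute variables before doing tilde expansion, which is why ~$user can work there.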


cheers again for any comments

N.B. please try this exactly:
user=toto
echo ~$user


bionic fysh

Last edited by bionicfysh; 08-30-2002 at 11:02 AM..
 

oarsub(1)							   OAR commands 							 oarsub(1)

NAME
       oarsub - OAR batch scheduler job submission command.

SYNOPSIS
       oarsub [OPTIONS] <job executable>
       oarsub [OPTIONS] -I
       oarsub [OPTIONS] -C <JOB ID>

DESCRIPTION
       One uses oarsub to submit a job to the OAR batch scheduler managing the resources of an HPC cluster. A job is defined by the
       description of a set of resources needed to execute a task, and a script or executable to run. A job may also be run
       interactively, and one may also use oarsub to connect to a previously submitted job. The scheduler is in charge of providing
       a set of resources matching the oarsub command request. Once scheduled and then launched, a job consists of one process
       executed on the first node of the resources it was attributed, with a set of environment variables that define the resources
       at the job's disposal. This means that the job's executable is responsible for connecting to those resources and dispatching
       the tasks.

OPTIONS
       -I, --interactive
              Request an interactive job. Open a login shell on the first node of the reservation instead of running a script.

       -C, --connect <JOB ID>
              Connect to a running job.

       -l, --resource <LIST>
              Set the requested resources for the job. The different parameters are resource properties registered in the OAR
              database, plus `walltime', which specifies the duration before the job is automatically terminated if still running.
              The walltime format is [hour:mn:sec|hour:mn|hour]. Ex: nodes=4/cpu=1,walltime=2:00:00

              You can specify multiple -l options on the same line. This tells OAR that this is a moldable job, so it can take
              different shapes. For example, if you have an application that is very scalable:

                  oarsub -l cpu=2,walltime=20:00:00 -l cpu=4,walltime=10:00:00 -l cpu=8,walltime=5:00:00 ./script.sh

              OAR will schedule one of these three resource definitions (depending on the current load of the cluster).

       --array <NUMBER>
              Submit an array job containing NUMBER subjobs. All the subjobs share the same array_id, but each subjob is
              independent and has its own job_id. All the subjobs have the same characteristics (script, requirements) and can be
              identified by the environment variable $OAR_ARRAY_INDEX. Array jobs can be neither interactive (-I) nor a
              reservation (-r).

       --array-param-file <FILE>
              Submit a parametric array job. Each non-empty line of FILE defines the parameters for the submission of a new
              subjob. All the subjobs have the same characteristics (script, requirements) and can be identified by the
              environment variable $OAR_ARRAY_INDEX. '#' is the comment sign. Parametric array jobs can be neither interactive
              (-I) nor a reservation (-r).

       -S, --scanscript
              Batch mode only: ask oarsub to scan the given script for OAR directives (#OAR -l ...).

       -q, --queue <QUEUE>
              Set the queue to submit the job to.

       -p, --property "<LIST>"
              Add a list of constraints on properties for the job. The format of a constraint is that of a WHERE clause using SQL
              syntax.
       -r, --reservation <DATE>
              Request that the job start at a specified time. A job created with this option is called a reservation, as opposed
              to a submission.

       --checkpoint <DELAY>
              Enable the checkpointing mechanism for the job. A signal will be sent DELAY seconds before the walltime expires to
              the first process of the job (on the first node of the resources).

       --signal <#SIG>
              Specify the signal to use when checkpointing. Use signal numbers (see kill -l); the default is 12 (SIGUSR2).

       -t, --type <TYPE>
              Specify a specific type (besteffort, timesharing, idempotent, cosystem, deploy, container, inner, token:xxx=yy).
              Note: a job of type idempotent will be automatically resubmitted if its exit code is 99.

       -d, --directory <DIR>
              Specify the directory in which to launch the command (default is the current directory).

       --project <TXT>
              Specify the name of a project the job belongs to.

       -n, --name <TXT>
              Specify an arbitrary name for the job.

       -a, --anterior <OAR JOB ID>
              Previously submitted job that this new job execution must depend on. The new job will only start upon the end of
              the previous one.

       --notify <TXT>
              Specify a notification method (mail or command to execute). Ex:
                  --notify "mail:name@domain.com"
                  --notify "exec:/path/to/script args"
              args are job_id, job_name, TAG, comment. TAG can be:
                  - RUNNING   : when the job is launched
                  - END       : when the job finishes normally
                  - ERROR     : when the job finishes abnormally
                  - INFO      : used when oardel is called on the job
                  - SUSPENDED : when the job is suspended
                  - RESUMING  : when the job is resumed

       --resubmit <OAR JOB ID>
              Resubmit the given job as a new one.

       -k, --use-job-key
              Activate the job-key mechanism. A job key will be generated, allowing one to connect to the job from outside the
              set of resources managed by OAR. The job-key mechanism may be activated by default in your OAR environment; in
              that case this option has no effect.
       -i, --import-job-key-from-file
              Import the job key to use from existing files (public and private key files) instead of generating a new one. One
              may also use the OAR_JOB_KEY_FILE environment variable.

       --import-job-key-inline
              Import the job key to use inline instead of generating a new one.

       -e, --export-job-key-to-file
              Export the job key to a file. Warning: the file will be overwritten if it already exists. (The %jobid% pattern is
              automatically replaced.)

       -O <FILE>
              Specify the file that will store the standard output stream of the job. The %jobid% pattern is automatically
              replaced.

       -E <FILE>
              Specify the file that will store the standard error stream of the job. The %jobid% pattern is automatically
              replaced.

       --hold
              Set the job state to Hold instead of Waiting, so that it is not scheduled (you must run oarresume to turn it back
              into the Waiting state).

       -D, --dumper
              Print the result in Perl Data::Dumper format.

       -X, --xml
              Print the result in XML format.

       -Y, --yaml
              Print the result in YAML format.

       -J, --json
              Print the result in JSON format.

       -h, --help
              Print this help message.

       -V, --version
              Print the version of OAR.

ENVIRONMENT
       OAR_FILE_NODES aka OAR_NODE_FILE aka OAR_NODEFILE
              Pathname of the file containing the list of nodes allocated to the job.

       OAR_JOB_NAME
              Name of the job, as given with the -n option.

       OAR_JOB_ID aka OAR_JOBID
              Id of the job. Each job gets a unique job identifier. This identifier can be used to retrieve information about
              the job using oarstat, or to connect to a running job using oarsub -C or oarsh, for instance.

       OAR_ARRAY_ID aka OAR_ARRAYID
              Array id of the job. Each array job gets a unique array identifier that is shared by all the subjobs of the array
              job. This identifier can be used to identify the different subjobs pertaining to the same array job. It can also
              be used to deal with all the subjobs of a given array at once (by means of the --array option of oarstat, oarhold,
              oarresume and oardel). By definition, single jobs are considered array jobs with only one subjob.

       OAR_JOB_INDEX aka OAR_JOBINDEX
              Array index of the job. Within an array job, each subjob gets a job index that is unique within that array. This
              identifier can be used to distinguish the subjobs of a given array, for instance to give a different behaviour to
              each of them. By definition, single jobs are considered array jobs with only one subjob, having OAR_JOB_INDEX = 0.

       OAR_JOB_WALLTIME resp. OAR_JOB_WALLTIME_SECONDS
              Walltime of the job in hh:mm:ss format, resp. in seconds.

       OAR_RESOURCE_PROPERTIES_FILE
              Pathname of the file containing the list of all resource attributes for the job, and their values.

       OAR_PROJECT_NAME
              Name of the project the job is part of, as given with the --project option.

       OAR_STDOUT and OAR_STDERR
              Pathnames of the files storing the standard output and error of the job's executable when not running in
              interactive mode.

       OAR_WORKING_DIRECTORY aka OAR_WORKDIR aka OAR_O_WORKDIR
              Working directory for the job. The job executable will be executed in this directory on the first node allocated
              to the job.
       OAR_JOB_KEY_FILE
              Key file to use for the submission (or for oarsh) if using a job key (-k or --use-job-key option). One may also
              provide the job key to import using the -i or --import-job-key-from-file option.

SCRIPT
       The script can contain the description of the job. Lines carrying options must begin with the key #OAR and accept the
       same options as described above.

EXAMPLES
       Job submission with arguments:

           oarsub -l /nodes=4 -I
           oarsub -q default -l /nodes=10/cpu=3,walltime=50:30:00 -p "switch = 'sw1'" /home/users/toto/prog
           oarsub -r "2009-04-27 11:00:00" -l /nodes=12/cpu=2
           oarsub -C 154

       Submit an array job with 10 identical subjobs:

           oarsub -l /nodes=4 /home/users/toto/prog --array 10

       Submit a parametric array job (file params.txt):

           oarsub /home/users/toto/prog --array-param-file /home/users/toto/params.txt

       Parameter file example (params.txt):

           # my param file
           #single param
           100
           #a subjob without parameters
           ""
           #a subjob with a string containing spaces as parameter
           "arg1a arg1b arg1c" "arg2a arg2b"

       Script example (file /home/users/toto/script.sh):

           #!/bin/bash
           #OAR -l /nodes=4/cpu=1,walltime=3:15:00
           #OAR -p switch = 'sw3' or switch = 'sw5'
           #OAR -t besteffort
           #OAR -t type2
           #OAR -k
           #OAR -e /path/to/job/key
           #OAR --stdout stdoutfile.log
           /home/users/toto/prog

       Submit the script:

           oarsub -S /home/users/toto/script.sh

SEE ALSO
       oarsh(1), oardel(1), oarstat(1), oarnodes(1), oarhold(1), oarresume(1)

COPYRIGHTS
       Copyright © 2008 Laboratoire d'Informatique de Grenoble (http://www.liglab.fr). This software is licensed under the GNU
       Library General Public License. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

oarsub                                                      2012-05-23                                                     oarsub(1)