08-05-2008
Hi,
I tried running it, and this time I didn't get any error.
I wrote it as
sort /app/chdata/workflow/suppl/esoutput/spd/flatfile/testfile1.txt | awk '{ file=substr($0,1,2)".txt"; print >> file }'
The script ran without throwing any error.
However, the output file is 0KB.
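For reference, here is a minimal self-contained sketch of the same approach with hypothetical sample data (the real input path from the post is assumed unavailable here). It sorts the input and appends each line to a file named after its first two characters:

```shell
# Hypothetical sample input standing in for testfile1.txt
printf '%s\n' '01alpha' '02beta' '01gamma' > testfile1.txt

# Sort, then append each line to a file named after its first two characters
sort testfile1.txt | awk '{ file = substr($0, 1, 2) ".txt"; print >> file }'
```

If the split files come out as 0KB, it is worth confirming that the input file itself is non-empty (`wc -l testfile1.txt`) and that the shell has write permission in the current directory, since awk creates the output files there rather than next to the input.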