04-20-2009
Performance issue in UNIX while generating .dat file from large text file
Hello Gurus,
We are facing a performance issue in UNIX. If anyone has faced this kind of issue before, please share your suggestions.
Problem Definition:
A few of the load processes of our Finance application run into this issue in UNIX when they use a shell script containing the portion of code below. The code reads an input file and writes the records into .dat files. The performance problem arises when the input file holds a huge volume of data.
For example: an input of 200,000 records takes 38 minutes to append/write into the .dat files, which inflates the overall load-process timing. We need to improve the performance of this process by reducing the time it takes to append/write the records.
/****************************************************************
Portion of Code from Shell Script:
****************************************************************/
m_arr_ctr=1
# Redirect into the loop instead of `cat file | while ...` so the arrays
# set inside the loop survive in any shell (a pipeline may run the loop
# in a subshell)
while read d92_line
do
    m_brch_cd=`echo "${d92_line}" | cut -c166-168`
    # When we reach the last line '*/', just skip it
    if [ "${m_brch_cd}" = "" ]
    then
        continue
    fi
    if [ "${m_brch_cd}" = "400" ]
    then
        m_jv_cd=`echo "${d92_line}" | cut -c190-192`
    else
        m_jv_cd=${m_brch_cd}
    fi
    if [ ! -s tmp_d92${m_brch_cd}z${m_jv_cd} ]
    then
        echo "TMP" > tmp_d92${m_brch_cd}z${m_jv_cd}
        m_a_d92_list[$m_arr_ctr]=tmp_d92${m_brch_cd}z${m_jv_cd}
        # Must match the file name written below, or the "*/" terminator
        # loop will touch files that were never written
        m_a_d92_files[$m_arr_ctr]=${m_recv_dir}/gd${m_brch_cd}${m_jv_cd}${m_glb_rate_cd}.dat
        m_arr_ctr=`expr $m_arr_ctr + 1`
        m_touched="N"
    else
        m_touched="Y"
    fi
    # The variable must be expanded here: [ m_touched = "N" ] compares the
    # literal string "m_touched" and is never true
    if [ "${m_touched}" = "N" ]
    then
        echo "${d92_line}" > ${m_recv_dir}/gd${m_brch_cd}${m_jv_cd}${m_glb_rate_cd}.dat
    else
        echo "${d92_line}" >> ${m_recv_dir}/gd${m_brch_cd}${m_jv_cd}${m_glb_rate_cd}.dat
    fi
done < ${m_recv_dir}/${m_glb_d92_nm}${m_glb_file_seq}

for m_file_name in ${m_a_d92_files[*]}
do
    if [[ `grep "*/" ${m_file_name} | wc -l` = 0 ]]
    then
        echo "*/" >> ${m_file_name}
    fi
done

for m_file_name in ${m_a_d92_list[*]}
do
    rm -f $m_file_name
done
/****************************************************************/
Please provide your valuable suggestions. Also, is there any way to make the appending faster, for example using sed?
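One common fix is to replace the whole read/cut loop with a single awk pass: awk extracts the substrings in-process instead of forking `echo | cut` once or twice per record (several hundred thousand process creations for a 200,000-record file, which is where the 38 minutes likely go). The sketch below is only a starting point and assumes the fixed-width offsets from the post (branch code at columns 166-168, JV code at 190-192) and the gd<branch><jv><rate>.dat naming; it builds a tiny sample input so it can run standalone:

```shell
#!/bin/sh
# One-pass replacement for the read/cut loop, sketched with awk.
# Assumptions (taken from the post, adjust to your real layout):
#   - fixed-width records, branch code at columns 166-168
#   - JV code at columns 190-192, used only when branch is "400"
#   - output files named gd<branch><jv><rate>.dat
set -e
dir=`mktemp -d`
rate=01

# Build a two-record sample input plus the trailing "*/" line.
printf '%165s123%21s999\n' '' '' >  "$dir/in.dat"   # branch 123 -> jv 123
printf '%165s400%21s777\n' '' '' >> "$dir/in.dat"   # branch 400 -> jv 777
printf '*/\n'                    >> "$dir/in.dat"

awk -v dir="$dir" -v rate="$rate" '
{
    brch = substr($0, 166, 3)        # columns 166-168
    if (brch == "") next             # short "*/" line yields "", skip it
    jv = (brch == "400") ? substr($0, 190, 3) : brch
    file = dir "/gd" brch jv rate ".dat"
    print $0 >> file                 # append in-process, no subshells
    seen[file] = 1
}
END {
    for (f in seen) {                # add the "*/" terminator once per file
        print "*/" >> f
        close(f)                     # avoid running out of open files
    }
}' "$dir/in.dat"

ls "$dir"/gd*.dat
```

The tmp_d92* marker files and the final grep pass become unnecessary because awk tracks the generated files in the `seen` array and terminates each one exactly once.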
A2P(1) Perl Programmers Reference Guide A2P(1)
NAME
a2p - Awk to Perl translator
SYNOPSIS
a2p [options] [filename]
DESCRIPTION
A2p takes an awk script specified on the command line (or from standard input) and produces a comparable perl script on the standard output.
OPTIONS
Options include:
-D<number>
sets debugging flags.
-F<character>
tells a2p that this awk script is always invoked with this -F switch.
-n<fieldlist>
specifies the names of the input fields if input does not have to be split into an array. If you were translating an awk script that
processes the password file, you might say:
a2p -7 -nlogin.password.uid.gid.gcos.shell.home
Any delimiter can be used to separate the field names.
-<number>
causes a2p to assume that input will always have that many fields.
-o tells a2p to use old awk behavior. The only current differences are:
* Old awk always has a line loop, even if there are no line actions, whereas new awk does not.
* In old awk, sprintf is extremely greedy about its arguments. For example, given the statement
print sprintf(some_args), extra_args;
old awk considers extra_args to be arguments to "sprintf"; new awk considers them arguments to "print".
Considerations
A2p cannot do as good a job translating as a human would, but it usually does pretty well. There are some areas where you may want to
examine the perl script produced and tweak it some. Here are some of them, in no particular order.
There is an awk idiom of putting int() around a string expression to force numeric interpretation, even though the argument is always integer anyway. This is generally unneeded in perl, but a2p can't tell if the argument is always going to be integer, so it leaves it in. You may wish to remove it.
Perl differentiates numeric comparison from string comparison. Awk has one operator for both that decides at run time which comparison to
do. A2p does not try to do a complete job of awk emulation at this point. Instead it guesses which one you want. It's almost always
right, but it can be spoofed. All such guesses are marked with the comment "#???". You should go through and check them. You might
want to run at least once with the -w switch to perl, which will warn you if you use == where you should have used eq.
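The run-time decision described above is easy to see from the shell; this small illustration is not part of the original manual:

```shell
#!/bin/sh
# Awk's single comparison operator picks numeric vs string comparison at
# run time -- exactly the choice a2p must guess at translation time.
# Two numeric-looking input fields compare numerically (10 == 10.0):
a=$(printf '10 10.0\n' | awk '{ print ($1 == $2) ? "same" : "different" }')
# A non-numeric field forces a string comparison instead:
b=$(printf '10 ten\n'  | awk '{ print ($1 == $2) ? "same" : "different" }')
echo "$a $b"    # same different
```

A translated perl script must commit to either `==` (numeric) or `eq` (string) for each comparison, which is why a2p marks its guesses for review.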
Perl does not attempt to emulate the behavior of awk in which nonexistent array elements spring into existence simply by being referenced.
If somehow you are relying on this mechanism to create null entries for a subsequent for...in, they won't be there in perl.
If a2p makes a split line that assigns to a list of variables that looks like (Fld1, Fld2, Fld3...) you may want to rerun a2p using the -n
option mentioned above. This will let you name the fields throughout the script. If it splits to an array instead, the script is probably
referring to the number of fields somewhere.
The exit statement in awk doesn't necessarily exit; it goes to the END block if there is one. Awk scripts that do contortions within the
END block to bypass the block under such circumstances can be simplified by removing the conditional in the END block and just exiting
directly from the perl script.
Perl has two kinds of array, numerically-indexed and associative. Perl associative arrays are called "hashes". Awk arrays are usually
translated to hashes, but if you happen to know that the index is always going to be numeric you could change the {...} to [...]. Iteration over a hash is done using the keys() function, but iteration over an array is NOT. You might need to modify any loop that iterates
over such an array.
Awk starts by assuming OFMT has the value %.6g. Perl starts by assuming its equivalent, $#, to have the value %.20g. You'll want to set
$# explicitly if you use the default value of OFMT.
Near the top of the line loop will be the split operation that is implicit in the awk script. There are times when you can move this down
past some conditionals that test the entire record so that the split is not done as often.
For aesthetic reasons you may wish to change the array base $[ from 1 back to perl's default of 0, but remember to change all array subscripts AND all substr() and index() operations to match.
Cute comments that say "# Here is a workaround because awk is dumb" are passed through unmodified.
Awk scripts are often embedded in a shell script that pipes stuff into and out of awk. Often the shell script wrapper can be incorporated
into the perl script, since perl can start up pipes into and out of itself, and can do other things that awk can't do by itself.
Scripts that refer to the special variables RSTART and RLENGTH can often be simplified by referring to the variables $`, $& and $', as long
as they are within the scope of the pattern match that sets them.
The produced perl script may have subroutines defined to deal with awk's semantics regarding getline and print. Since a2p usually picks correctness over efficiency, it is almost always possible to rewrite such code to be more efficient by discarding the semantic sugar.
For efficiency, you may wish to remove the keyword from any return statement that is the last statement executed in a subroutine. A2p
catches the most common case, but doesn't analyze embedded blocks for subtler cases.
ARGV[0] translates to $ARGV0, but ARGV[n] translates to $ARGV[$n]. A loop that tries to iterate over ARGV[0] won't find it.
ENVIRONMENT
A2p uses no environment variables.
AUTHOR
Larry Wall <larry@wall.org>
FILES
SEE ALSO
perl The perl compiler/interpreter
s2p sed to perl translator
DIAGNOSTICS
BUGS
It would be possible to emulate awk's behavior in selecting string versus numeric operations at run time by inspection of the operands, but
it would be gross and inefficient. Besides, a2p almost always guesses right.
Storage for the awk syntax tree is currently static, and can run out.
perl v5.8.9 2005-03-10 A2P(1)