How to improve the performance of parsers in Perl?
Hi,
I have around one lakh records, and I am generating the data as XML.
I have used these 2 Perl modules.
The data will look like this, and most of it is textual entries.
But I am facing performance issues while creating the XML chunk.
For example: for 9000 entries it takes around 2 minutes.
Is there any option available to reduce the time taken and improve the performance of these modules and the XML generation?
Is there any caching mechanism available for these modules?
How can I improve the performance, reduce the time taken, and get results more quickly?
Any suggestions?
Regards
Archana
FYI: A lakh or lac (English pronunciation: /ˈlæk/ lak or /ˈlɑːk/ lahk) is a unit in the Indian numbering system equal to one hundred thousand (100,000).
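The post does not name the two modules, but a common cause of slow XML generation is building the whole document tree in memory before writing; streaming writers (for example XML::Writer on CPAN) emit each record as soon as it is ready. A rough sketch of the streaming idea in shell, with an invented records.txt and an invented <record> element, since the post does not show the data:

```shell
# Stream the XML out record by record instead of building a full
# in-memory tree first. records.txt, its tab-separated layout, and the
# <record> element name are all stand-ins for the poster's real data.
printf 'id1\tvalue one\nid2\tvalue two\n' > records.txt

{
  echo '<?xml version="1.0"?>'
  echo '<records>'
  awk -F'\t' '{ printf "  <record id=\"%s\">%s</record>\n", $1, $2 }' records.txt
  echo '</records>'
} > records.xml
```

The same shape in Perl (open the output file, print the header, print one record per loop iteration, print the footer) keeps memory flat no matter how many of the one lakh records are processed.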
Hi,
I'm searching for files across many AIX servers over rsh, using this command:
find /dir1 -name '*.' -exec ls {} \;
and then counting them with "wc",
but I would like to speed up this search because it takes too long, and replace find directly with the ls command, but "ls *." doesn't work.
and... (3 Replies)
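One likely slowdown is `-exec ls {} \;`, which forks one ls per matching file; find can print the names itself, and a single wc -l counts them. A sketch using a demo directory and a '*.log' pattern as stand-ins for the poster's real path and pattern:

```shell
# Count matching files in one pass: -type f restricts the match to
# regular files, find prints the paths itself (no per-file ls fork),
# and one wc -l totals them. dir1 and '*.log' are demo stand-ins.
mkdir -p dir1/sub
touch dir1/a.log dir1/sub/b.log dir1/c.txt
find dir1 -type f -name '*.log' | wc -l
```

If the per-file command really is needed, `-exec ls {} +` (plus instead of backslash-semicolon) batches many files into one ls invocation.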
Hi All,
I am using the grep command to find the string "abc" in one file.
The content of the file is
***********
abc = xyz
def= lmn
************
I have given the below command to redirect the output to a tmp file:
grep abc file | sort -u | awk '{print $3}' > out_file
Then i am searching... (2 Replies)
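One note on the pipeline as posted: in awk the third field is $3; a '#' starts a comment, so '{print #3}' just prints the whole line. Recreating the sample file and running the corrected pipeline:

```shell
# Recreate the sample file, then extract the third field of the 'abc'
# line. Fields in awk are $1, $2, $3; '#' begins a comment.
printf 'abc = xyz\ndef= lmn\n' > file
grep abc file | sort -u | awk '{print $3}' > out_file
cat out_file
```

On the sample data the line "abc = xyz" splits into three fields, so out_file contains xyz.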
Hi, can someone tell me ways I can improve disk I/O and system process performance? Kindly refer me to some commands so I can try them on my test machine. Thanks, Mazhar (2 Replies)
I have a data file of 2 GB.
I need to do all of these steps, but it is taking hours. Is there anywhere I can improve performance? Thanks a lot.
#!/usr/bin/ksh
echo TIMESTAMP="$(date +'_%y-%m-%d.%H-%M-%S')"
function showHelp {
cat << EOF >&2
syntax extreme.sh FILENAME
Specify filename to parse
EOF... (3 Replies)
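With a 2 GB input the usual win is to read the file exactly once: every extra grep, cut, or sort pass rereads the whole 2 GB. The script body is cut off here, so this is only the general shape, with made-up data and a made-up key/value layout:

```shell
# Collapse several passes over a large file into one awk pass that
# aggregates as it reads. 'bigfile' and its two-column layout are
# invented stand-ins for the poster's real data.
printf 'a 1\nb 2\na 3\n' > bigfile
awk '{ sum[$1] += $2 } END { for (k in sum) print k, sum[k] }' bigfile | sort > summary
cat summary
```

Each additional statistic (counts, minima, maxima) can be accumulated in the same single pass by adding another array.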
Hi Friends,
I wrote the below shell script to generate a report on the alert messages received in a day, but for processing around 4500 lines (alerts) the script is taking around 30 minutes.
Please help me make it faster and improve the performance of the script. I would be very... (10 Replies)
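The script itself is not shown, but 30 minutes for 4500 lines usually points at per-line forks: a while-read loop that runs grep, date, or awk on every line starts thousands of processes. Moving the per-line work into one awk pass removes almost all of that cost. A sketch with an invented alerts.log and an invented CRIT prefix:

```shell
# Count matching alerts in a single awk pass instead of forking a
# process per input line. alerts.log and the CRIT prefix are demo
# stand-ins for the poster's real alert format.
printf 'CRIT disk full\nINFO ok\nCRIT cpu hot\n' > alerts.log
awk '/^CRIT/ { n++ } END { print n+0 }' alerts.log > crit_count
cat crit_count
```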
Hi All,
I have written a script, as follows, which is taking a lot of time executing/searching only 3500 records, taken as input from one file, against a log file of approximately 12 GB.
The script reads a csv file as input, having 2 fields, transaction_id and mobile_number, and searches... (6 Replies)
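If the script runs grep once per input record, that is 3500 full passes over 12 GB. grep -f reads all the patterns from a file and makes a single pass, and -F treats them as fixed strings, which also skips regex overhead. A sketch with invented file names and an invented csv/log format:

```shell
# Search for every transaction_id in one pass over the log with
# grep -F -f, instead of one grep per record. input.csv, patterns.txt,
# and big.log are invented stand-ins for the poster's files.
printf 'txn1,111\ntxn2,222\n' > input.csv
cut -d, -f1 input.csv > patterns.txt        # one transaction_id per line
printf 'log line txn2 ok\nother line\n' > big.log
grep -F -f patterns.txt big.log > hits
cat hits
```

One pass over 12 GB instead of 3500 passes is where nearly all of the time goes.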
Hi,
I am pretty new to Perl scripting and confused about how to use an XML parser.
I just want to know how to use Perl with an XML parser.
The scenario is: when I execute a command, it generates an XML file.
From that generated XML file I need to grep for a specific term that has an HTML tag, say... (2 Replies)
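For a quick extraction, grep -o can pull a tagged value straight out of the generated file; a real parser (Perl's XML::LibXML, or the xmllint tool) is more robust once the XML gets nested or spans lines. A sketch with an invented <status> tag and an invented sample.xml:

```shell
# Pull one tagged value out of an XML file with grep -o. The <status>
# tag and sample.xml are invented; for anything beyond a flat,
# one-line tag, use a real XML parser instead of grep.
printf '<result><status>PASS</status></result>\n' > sample.xml
grep -o '<status>[^<]*</status>' sample.xml > match
cat match
```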
Hi ,
I wrote a script to convert dates to the format I want. It works fine, but the conversion is taking a lot of time. Can someone help me tweak this script?
#!/bin/bash
file=$1
ofile=$2
cp $file $ofile
mydates=$(grep -Po '\d+/\d+/\d+' $ofile) # gets 8/1/13
mydates=$(echo "$mydates" | sort |... (5 Replies)
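If the slow part is calling date (or a pipeline) once per matched date, converting every date in a single awk pass avoids all of those forks. A sketch that assumes the dates are month/day/two-digit-year, guessed from the sample 8/1/13; the field order would need swapping if the input is actually day/month:

```shell
# Rewrite every x/y/z token in one awk pass instead of forking 'date'
# per line. The m/d/yy interpretation of 8/1/13 is an assumption.
printf 'start 8/1/13 end\n' > dates.txt
awk '{
  for (i = 1; i <= NF; i++)
    if (split($i, d, "/") == 3)                 # token looks like x/y/z
      $i = sprintf("20%02d-%02d-%02d", d[3], d[1], d[2])
  print
}' dates.txt > converted
cat converted
```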
Hello,
Attached is my very simple C++ code to remove any substrings (DNA sequences) of each other, i.e. any redundant sequence is removed to get unique sequences. This is similar to the sort | uniq commands, except that there is reverse complementarity for DNA sequences. The program runs well with a small dataset, but... (11 Replies)
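The attached C++ code is not shown here, but the substring-removal step itself can be sketched in awk (reverse complements omitted): keep a sequence only if no strictly longer sequence contains it. The sketch also makes the scaling problem visible, since it compares every pair, which is quadratic and will not survive a large dataset without an index or suffix structure:

```shell
# Quadratic substring filter: drop any sequence contained in a strictly
# longer one. seqs.txt is invented demo data; reverse complements are
# deliberately ignored. Every pair is compared, hence O(n^2).
printf 'ACGT\nCG\nTTA\n' > seqs.txt
awk 'NR == FNR { all[NR] = $0; next }
     { for (i in all)
         if (length(all[i]) > length($0) && index(all[i], $0)) next
       print }' seqs.txt seqs.txt > unique.txt
cat unique.txt
```

Here CG is dropped because ACGT contains it; ACGT and TTA survive.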