How to improve the performance of XML parsers in Perl?
Hi,
I have around one lakh records, and I am generating XML from the data.
I have used these 2 Perl modules.
The data will look like this, and most of it is textual entries.
But I am facing performance issues while creating the XML chunk.
For example, 9,000 entries take around 2 minutes.
Is there any option available to reduce the time taken and improve the performance of these modules and the XML generation?
Is there any caching mechanism available for these modules?
How can I improve the performance and return results more quickly?
Any suggestions?
Regards
Archana
FYI: A lakh or lac (English pronunciation: /ˈlæk/ lak or /ˈlɑːk/ lahk) is a unit in the Indian numbering system equal to one hundred thousand (100,000).
Last edited by fpmurphy; 04-21-2011 at 09:55 AM..
Reason: Add definition of lakh
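For the generation side of the question, one way to keep things fast at one-lakh scale is to stream the XML to disk as each record is produced, rather than building the whole document in memory first. A minimal core-Perl sketch (the element names, record loop, and escaping helper are illustrative assumptions, since the original modules and data are not shown):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Minimal escaping for text content; a real script might use a CPAN
# module for this instead (assumption: records are plain text fields).
sub xml_escape {
    my ($s) = @_;
    $s =~ s/&/&amp;/g;
    $s =~ s/</&lt;/g;
    $s =~ s/>/&gt;/g;
    return $s;
}

open my $out, '>', 'records.xml' or die "open: $!";
print {$out} qq{<?xml version="1.0" encoding="UTF-8"?>\n<records>\n};
for my $i (1 .. 9000) {    # stand-in for the real record source
    my $text = "entry number $i";
    printf {$out} qq{  <record id="%d">%s</record>\n}, $i, xml_escape($text);
}
print {$out} "</records>\n";
close $out or die "close: $!";
```

Because each record is written and forgotten, memory use stays flat no matter how many records there are.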
Unless you need the complete document DOM in memory, a SAX (Simple API for XML) parser will nearly always be faster. Whereas the DOM operates on the document as a whole, SAX parsers operate on each piece of the XML document sequentially. Even better would be StAX (Streaming API for XML), which is a newer API for pull-parsing of XML.
Perl has several SAX modules but I do not see any StAX modules.
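The event-driven style described above can be sketched with XML::Parser, one of the expat-based CPAN modules (an assumption here, since no particular SAX module is named): you register handlers that fire as each element streams past, and no tree is ever built.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::Parser;    # expat binding; assumed installed from CPAN

# A small inline document standing in for the real data.
my $xml = '<records>'
        . join('', map { qq{<record id="$_">entry $_</record>} } 1 .. 5)
        . '</records>';

# Count <record> elements without building a document tree: the Start
# handler is called once per opening tag as the parser streams along.
my $count = 0;
my $parser = XML::Parser->new(
    Handlers => {
        Start => sub {
            my ($expat, $elem, %attrs) = @_;
            $count++ if $elem eq 'record';
        },
    },
);
$parser->parse($xml);
print "records seen: $count\n";    # records seen: 5
```

For a file on disk, `$parser->parsefile('data.xml')` works the same way, and memory use stays constant regardless of file size.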
Hi,
I tried looking, but our constraint is that we need to use DOM parsers only.
So is there a caching mechanism available for these parsers?
Is there any method to speed up the DOM parsers in Perl?
There is really no way to speed up a DOM parser except with a faster machine and more memory. As your file gets bigger and bigger, the DOM representation of your document will require more and more memory. That is why SAX and StAX were created.
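That said, if DOM is a hard constraint, not all Perl DOM modules are equally slow: XML::LibXML wraps the C libxml2 library and typically builds its tree far faster than pure-Perl DOM implementations. A minimal sketch, assuming XML::LibXML is installed from CPAN (the document shape is illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use XML::LibXML;    # libxml2 binding; assumed installed from CPAN

# A small inline document standing in for the real data.
my $xml = '<records>'
        . join('', map { qq{<record id="$_">entry $_</record>} } 1 .. 5)
        . '</records>';

# load_xml builds the full DOM in C; the whole tree is still held
# in memory, so the memory-growth caveat above still applies.
my $doc  = XML::LibXML->load_xml(string => $xml);
my @recs = $doc->findnodes('//record');
print scalar(@recs), " records\n";    # 5 records
```

This keeps the DOM programming model while moving the parsing cost out of Perl, which is usually the cheapest speed-up available under a DOM-only constraint.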