Friends,
I have to write a shell script; the description is:
I have to check the uniqueness of the numbers in a file.
The file contains 200,000 tickets, and each ticket has 15 numbers in ascending order. A strip holds 6 tickets, which means 90 numbers. I... (7 Replies)
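A minimal sketch of the duplicate check in Perl, assuming the numbers are whitespace-separated and that duplicates should be flagged across the whole input (reset %seen per strip of six tickets if uniqueness is only required within one strip). The sample data is a placeholder:

```perl
#!/usr/bin/perl
# Sketch: report any number that occurs more than once across the tickets.
use strict;
use warnings;

# placeholder data; in the real script each line would come from the file
my @tickets = ("3 7 12 19 23 31", "4 7 18 22 29 33");

my %seen;
my @dups;
my $lineno = 0;
for my $line (@tickets) {
    $lineno++;
    for my $n (split ' ', $line) {
        push @dups, "$n (line $lineno)" if $seen{$n}++;
    }
}
print "duplicate: $_\n" for @dups;   # → duplicate: 7 (line 2)
```

Because %seen holds one entry per distinct number, 200,000 tickets of 15 numbers stay comfortably in memory.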
Hi All, I need to prepare a Perl script for extracting data from an XML file. The XML data looks like this:
<AC StartTime="1227858839" ID="88" ETime="1227858837" DSTFlag="false" Type="2" Duration="303" />
<AS StartTime="1227858849" SigPairs="119 40 98 15 100 32 128 18 131 23 70 39 123 20 120 27 100 17 136 12... (3 Replies)
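One lightweight way to get at records like the AC/AS lines above, assuming each record sits on one line, is a regex that captures every key="value" pair into a hash (a full XML parser such as XML::Parser is the sturdier choice for real XML). The attribute names below come from the sample:

```perl
#!/usr/bin/perl
# Sketch: pull attribute/value pairs out of a one-line record with a regex.
use strict;
use warnings;

my $record = '<AC StartTime="1227858839" ID="88" ETime="1227858837" '
           . 'DSTFlag="false" Type="2" Duration="303" />';

# capture every key="value" pair into a hash
my %attr = $record =~ /(\w+)="([^"]*)"/g;
print "ID=$attr{ID} Duration=$attr{Duration}\n";   # → ID=88 Duration=303
```

In the real script the same match would run inside a `while (my $record = <$fh>)` loop.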
Below is my Perl script:
#!/usr/bin/perl
open(FILE, $ARGV[0]) or die "$!";
@DATA = <FILE>;
close FILE;
$join = join("", @DATA);
@array = split(">", $join);
for ($i = 0; $i < scalar(@array); $i++) {
system ("/home/bin/./program_name_count_length MULTI_sequence_DATA_FILE -d... (5 Replies)
While executing the Perl script it gives a compile issue; please help out:
$inputFilename="c:\allways.pl";
open (FILEH,$inputFilename) or die "Could not open log file";
Error : Could not open log file at c:\allways.pl line 4
learner in Perl (1 Reply)
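A likely cause, sketched below: inside a double-quoted string Perl interprets backslash escapes, so "c:\allways.pl" contains an alarm character (\a is chr(7)) rather than a backslash and an "a" - the file name Perl tries to open is not the one on disk. Single quotes (or forward slashes) avoid this:

```perl
#!/usr/bin/perl
# Demonstrates why "c:\allways.pl" is not the file name you think it is.
use strict;
use warnings;

my $bad  = "c:\allways.pl";   # \a becomes chr(7): only 12 characters
my $good = 'c:\allways.pl';   # backslash stays literal: 13 characters
my $best = 'c:/allways.pl';   # Windows also accepts forward slashes

print length($bad), " vs ", length($good), "\n";   # → 12 vs 13
```

So `open (FILEH, 'c:\allways.pl')` or `open (FILEH, 'c:/allways.pl')` should find the file, and adding `$!` to the die message ("Could not open log file: $!") shows the operating system's reason when it still fails.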
I'm new to Linux scripting and not sure how to filter out bad records from huge flat files (over 1.3 GB each). The delimiter is a semicolon (";").
Here is the sample of 5 lines in the file:
Name1;phone1;address1;city1;state1;zipcode1
Name2;phone2;address2;city2;state2;zipcode2;comment... (7 Replies)
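Judging from the sample, a "good" record has exactly 6 semicolon-separated fields; anything else (like the Name2 line with its extra comment field) is bad. A sketch, with the field count of 6 taken from the sample layout:

```perl
#!/usr/bin/perl
# Sketch: split on ";" and keep only records with exactly 6 fields.
use strict;
use warnings;

my @lines = (
    "Name1;phone1;address1;city1;state1;zipcode1",
    "Name2;phone2;address2;city2;state2;zipcode2;comment",
);

my (@good, @bad);
for my $line (@lines) {
    # limit -1 keeps trailing empty fields, so "a;b;;;;" still counts as 6
    my @f = split /;/, $line, -1;
    if (@f == 6) { push @good, $line } else { push @bad, $line }
}
printf "%d good, %d bad\n", scalar @good, scalar @bad;   # → 1 good, 1 bad
```

For the real 1.3 GB files, read line-by-line (`while (my $line = <$fh>)`) and print good and bad lines to two output files instead of accumulating them in arrays.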
What do I need to do to have the Perl program below load 205-million-record files into the hash? It currently works on smaller files, but not on huge files. Any idea what I need to modify to make it work with huge files:
#!/usr/bin/perl
$ot1=$ARGV[0];
$ot2=$ARGV[1];
open(mfileot1,... (12 Replies)
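Two changes usually make the 205-million-record case feasible: stream the file one line at a time instead of slurping it into an array first, and, if the hash itself no longer fits in RAM, tie it to an on-disk store such as DB_File instead of a plain %hash. A sketch of the streaming half; the key/value split (first field is the key) is an assumption about the data:

```perl
#!/usr/bin/perl
# Sketch: build the hash while streaming, never holding the whole file.
use strict;
use warnings;

my %h;
my @sample = ("id1 first record", "id2 second record");  # stand-in for <$fh>
for my $line (@sample) {     # real script: while (my $line = <$fh>) { ... }
    chomp $line;
    my ($key, $val) = split /\s+/, $line, 2;
    $h{$key} = $val;         # only one input line is ever held in memory
}
print scalar(keys %h), " records loaded\n";   # → 2 records loaded
```

With DB_File the only extra step is `tie my %h, 'DB_File', $dbfile, ...` before the loop, at the cost of disk-speed lookups.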
We have data that looks like the sample below in a log file.
I want to generate files based on the string between the two hash (#) symbols, like below:
Source:
#ext1#test1.tale2 drop
#ext1#test11.tale21 drop
#ext1#test123.tale21 drop
#ext2#test1.tale21 drop
#ext2#test12.tale21 drop
#ext3#test11.tale21 drop... (5 Replies)
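A sketch of the routing step: the token between the two '#' marks selects the destination. Lines are grouped in memory here for illustration; in the real script you would open one '>>' handle per key (an output name like "$key.txt" is an assumption - use whatever naming you need):

```perl
#!/usr/bin/perl
# Sketch: group log lines by the token between the two '#' marks.
use strict;
use warnings;

my @log = (
    "#ext1#test1.tale2 drop",
    "#ext1#test11.tale21 drop",
    "#ext2#test1.tale21 drop",
);

my %group;
for my $line (@log) {
    next unless $line =~ /^#([^#]+)#/;   # grab the key between the hashes
    push @{ $group{$1} }, $line;
}
print "$_: ", scalar @{ $group{$_} }, " line(s)\n" for sort keys %group;
# → ext1: 2 line(s)
#   ext2: 1 line(s)
```

Keeping the filehandles in a hash keyed the same way (`$fh{$key}`) means each output file is opened only once, no matter how many lines route to it.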
I have 2 large files (.dat), around 70 GB each, 12 columns, but the data is not sorted in either file. I need your inputs on the best optimized method/command to compare them and redirect the non-matching lines to a third file (diff.dat).
File 1 - 15 columns
File 2 - 15 columns
Data is... (9 Replies)
Discussion started by: kartikirans
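At 70 GB per file an in-memory hash is out. One common approach is to sort both files first (e.g. with sort(1), which spills to disk) and then stream-compare the sorted outputs, writing lines that appear in only one file to diff.dat. A sketch of the compare step, shown on two small sorted lists instead of real filehandles:

```perl
#!/usr/bin/perl
# Sketch: merge-compare two sorted sequences; emit lines unique to either.
use strict;
use warnings;

sub merge_diff {
    my ($A, $B) = @_;          # refs to two sorted arrays of lines
    my @only;
    my ($i, $j) = (0, 0);
    while ($i < @$A && $j < @$B) {
        if    ($A->[$i] lt $B->[$j]) { push @only, $A->[$i++]; }
        elsif ($A->[$i] gt $B->[$j]) { push @only, $B->[$j++]; }
        else                         { $i++; $j++; }   # match: skip both
    }
    push @only, @{$A}[$i .. $#$A], @{$B}[$j .. $#$B];  # drain the leftovers
    return @only;
}

my @diff = merge_diff([qw(a b d)], [qw(b c d e)]);
print "@diff\n";   # → a c e
```

The same loop works on two filehandles over the pre-sorted files, reading one line from each side as needed, so memory use stays constant regardless of file size.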
DumpXML::Parser(3pm) User Contributed Perl Documentation DumpXML::Parser(3pm)NAME
Data::DumpXML::Parser - Restore data dumped by Data::DumpXML
SYNOPSIS
use Data::DumpXML::Parser;
my $p = Data::DumpXML::Parser->new;
my $data = $p->parsefile(shift || "test.xml");
DESCRIPTION
"Data::DumpXML::Parser" is an "XML::Parser" subclass that can recreate the data structure from an XML document produced by "Data::DumpXML".
The parsefile() method returns a reference to an array of the values dumped.
The constructor method new() takes a single additional argument to that of "XML::Parser":
Blesser => CODEREF
A subroutine that is invoked to bless restored objects. The subroutine is invoked with two arguments: a reference to the object, and a
string containing the class name. If not provided, the built-in "bless" function is used.
For situations where the input file cannot necessarily be trusted, and blessing arbitrary classes might give malicious input the ability
to exploit the DESTROY methods of modules used by the code, it is a good idea to provide a no-op blesser:
my $p = Data::DumpXML::Parser->new(Blesser => sub {});
SEE ALSO
Data::DumpXML, XML::Parser
AUTHOR
Copyright 2001 Gisle Aas.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
perl v5.8.8 2006-04-08 DumpXML::Parser(3pm)