Perl : How is file handling working here?? Post by balajesuri, 12 July 2013
Every time you read from a filehandle and store the result in a scalar variable, one line of input is read. (By one line, I mean the data up to and including the next record separator.)

Here is how it works internally:
When you open a filehandle, Perl keeps a position marker that starts at the beginning of the file. The first time $rec = scalar <$INF> is encountered, the first line of the file is read and stored in $rec, and that marker ends up at the character just after the newline (assuming newline is the default record separator). The next time $rec = scalar <$INF> is encountered, one more line of data is read and the marker moves forward again.
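
You can watch that marker move with tell(). Here is a minimal sketch; the path /tmp/sample.txt and its contents are just assumptions for illustration:

Code:
use strict;
use warnings;

# /tmp/sample.txt is an assumed example path, not from the original post.
open my $INF, '<', '/tmp/sample.txt' or die "Cannot open: $!";

my $rec = scalar <$INF>;    # read the first line; marker moves past its newline
print "pos after 1st read: ", tell($INF), " line: $rec";

$rec = scalar <$INF>;       # read the second line; marker advances again
print "pos after 2nd read: ", tell($INF), " line: $rec";

close $INF;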

In the example code you provided in your post, a wrapper routine "readfile" reads one line of data and prints it.
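
The body of your readfile isn't reproduced here, but a wrapper of that kind would look roughly like the sketch below (an illustrative guess, not your actual code):

Code:
# Hypothetical sketch; the real readfile in the original post may differ.
sub readfile {
    my ($fh) = @_;
    my $rec = scalar <$fh>;       # each call reads exactly one more line
    print $rec if defined $rec;   # defined() guards against end of file
    return $rec;
}

# Every call advances the same underlying file position:
# readfile($INF); readfile($INF);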

The usual way of reading a file is to use a loop. Then again, that depends on what you really want to do:

Code:
open my $fh, '<', '/path/to/file' or die "Cannot open: $!";
while (my $rec = <$fh>) {    # one line per iteration, until end of file
    print $rec;
}
close $fh;
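
And if you do not want line-by-line reading at all, you can change the record separator $/. A sketch of slurping the whole file in one read (the path is again just an assumption):

Code:
open my $fh, '<', '/path/to/file' or die "Cannot open: $!";
my $whole_file = do {
    local $/;    # undef record separator: the next read returns the rest of the file
    <$fh>;
};
close $fh;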

 
