Shell Programming and Scripting: Help in outputting the result in log files
Post 302137785 by dave_nithis, Thursday 27 September 2007, 03:26 AM
What is rec and FS in your solution?
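For context: the solution being asked about is not quoted in this post, but FS is awk's built-in input field separator, and rec is not an awk built-in, so it is presumably a user-defined variable in that solution. A minimal sketch of the usual pattern, with a hypothetical comma-separated input file and field choice:

    # FS is awk's built-in input field separator; setting it to ","
    # makes awk split each line on commas instead of whitespace.
    # "rec" here is an ordinary user variable that accumulates output.
    awk 'BEGIN { FS = "," }
         { rec = rec $2 "\n" }        # collect the second field of each line
         END { printf "%s", rec }     # print the accumulated text at the end
    ' input.log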
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Outputting from two input files.

OK, let's suppose I have two files like so:

file1
John 5441223
Sandy 113446
Jill 489799

file2
Sandy Tuesday
Jill Friday
John Monday

Is it possible to match records from these two files and output them into one output file? For example, let's suppose I want to output like this: ... (5 Replies)
Discussion started by: Liguidsoul
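A hedged sketch of one common answer: read the first file into an array keyed by name, then print the matching line from the second file. The desired output layout is elided in the post above, so the name/number/day ordering here is an assumption:

    # Pass 1 (NR==FNR, i.e. file1): remember each name's number.
    # Pass 2 (file2): print name, number, and day for names seen in file1.
    awk 'NR == FNR { num[$1] = $2; next }
         $1 in num { print $1, num[$1], $2 }
    ' file1 file2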

2. Shell Programming and Scripting

Outputting formatted result log file from old 30000-line result log <help required>

Well, I have a 3000-line result log file that contains all the machine data when it does the testing... It has 3 different sections that I am interested in: 1) starting with "20071126 11:11:11 Machine Header 1", 1000 lines... "End machine header 1"; 2) starting with "20071126 12:12:12 Machine... (5 Replies)
Discussion started by: vikas.iet
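A hedged sketch of the usual approach: awk's range patterns select everything from a start-marker line through the matching end-marker line. The second pair of markers and the output file names below are guesses, since only the first section's markers are quoted above:

    # Each range pattern copies one marked section to its own file.
    awk '/Machine Header 1/, /End machine header 1/ { print > "section1.log" }
         /Machine Header 2/, /End machine header 2/ { print > "section2.log" }
    ' result.log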

3. Shell Programming and Scripting

Read multiple log files, create an output file, and put the result in it

OS: Linux 2.6.9-67 - Red Hat Enterprise Linux ES release 4. Looking for a script that reads the following log files, which get generated every night between 2 and 5 am:

Master_App_20090717.log
Master_App1_20090717.log
Master_App2_20090717.log
Master_App3_20090717.log... (2 Replies)
Discussion started by: aavam
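A hedged sketch of how such a nightly sweep is often done: glob the dated file names and append each one's contents to a single result file. The date format is taken from the names quoted above; the combined file name is hypothetical:

    # Collect every Master_App*<today>.log into one combined file.
    today=$(date +%Y%m%d)
    for f in Master_App*_"$today".log; do
        [ -f "$f" ] || continue          # skip if the glob matched nothing
        cat "$f" >> "combined_$today.log"
    done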

4. Shell Programming and Scripting

Write a Perl script or Korn shell script reading two files and outputting to comma format

Hello, can someone help me to write a Perl script or Korn shell script reading two files and outputting to comma format? Here are the two files. listofdisks.txt:

id, diskname, diskgroup, disksize(GB), FC
1, CN34, GRP1, 30, FC_CN34
2, CN67, GRP5, 19,
4, VD1, GRP4, 23, FC_VD1
6, CF_D1, ... (0 Replies)
Discussion started by: deiow
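The second file and the desired layout are truncated above, so only the general shape can be sketched: merge the two files line by line with a comma between them. "file2.txt" is hypothetical; only listofdisks.txt is named in the post:

    # paste joins corresponding lines of the two files,
    # separated by the delimiter given to -d.
    paste -d, listofdisks.txt file2.txt > combined.csv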

5. UNIX for Dummies Questions & Answers

rm command: outputting files as they are deleted?

Solaris 10 / Korn shell. Hi Unix experts! Is it possible to output the actual file names to a file as they are being deleted via the rm command? Context: I'm executing the shell script at the command line and directing the output to an output file, e.g. purgescript.ksh > output.lst within the... (3 Replies)
Discussion started by: satnamx
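A hedged sketch: GNU rm has a -v (verbose) flag that prints each name as it removes it, but the stock Solaris 10 /usr/bin/rm does not, so a portable ksh approach is to print the name yourself before removing the file. The directory and pattern here are hypothetical:

    # Each name goes to stdout (captured by "> output.lst"), then is deleted.
    for f in /some/purge/dir/*.tmp; do
        [ -f "$f" ] || continue
        print "deleting $f"        # print is a ksh builtin; echo works too
        rm "$f"
    done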

6. Shell Programming and Scripting

Copying files after result

Hi, I have a shell script:

    #!/bin/sh
    date
    echo 'HI PROD'
    echo 'Please ENTER THE INPUT: 1 for old files, 2 for new file'
    read i
    if [ "$i" -eq 1 ]; then   # the test was stripped by the forum software; reconstructed
        cd /apps/acetp3_logs/prod3/O*
        pwd
        echo 'PLEASE ENTER THE STRING TO SEARCH (PLEASE ENTER THE STRING INSIDE QUOTES)'
        read j
        echo... (6 Replies)
Discussion started by: thelakbe

7. UNIX for Dummies Questions & Answers

Outputting 1 file per row if pattern exists between files

I have many files that can have varying numbers of rows. I essentially want to output each row into a new file if a pattern is matched between two files. I have some code that does something similar, but I want it to output every single input row from every file into a separate output file; that... (5 Replies)
Discussion started by: verse123
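A hedged sketch of the "one output file per input row" part, since the pattern-matching half of the question is truncated above; the file names are hypothetical:

    # Write every row of every input file to its own numbered output file.
    awk '{ out = "row_" ++n ".txt"       # a fresh file name per input row
           print > out
           close(out) }                  # close to avoid exhausting descriptors
    ' input*.txt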

8. UNIX for Beginners Questions & Answers

Comparing fastq files and outputting common records

I have two files. File_1:

@M04961:22:000000000-B5VGJ:1:1101:9280:7106 1:N:0:86
GGCATGAAAACATACAAACCGTCTTTCCAGAAATTGTTCCAAGTATCGGCAACAGCTTTATCAATACCATGAAAAATATCAACCACACCAGAAGCAGCAT
+
GGGGGGGGGGGGGGGGGCCGGGGGF,EDFFGEDFG,@DGGCGGEGGG7DCGGGF68CGFFFGGGG@CGDGFFDFEFEFF:30CGAFFDFEFF8CAF;;8F ... (3 Replies)
Discussion started by: Xterra
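A hedged sketch of one way to do this with awk, assuming both files are plain 4-line-per-record FASTQ and that "common" means the read ID (the first word of the @ header line) appears in both files; the file names are hypothetical:

    # Pass 1 (NR==FNR): remember each read ID from the header lines.
    # Pass 2: print the whole 4-line record whenever its ID was seen.
    awk 'NR == FNR { if (FNR % 4 == 1) ids[$1] = 1; next }
         FNR % 4 == 1 { keep = ($1 in ids) }
         keep
    ' file_1.fastq file_2.fastq > common.fastq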

9. Shell Programming and Scripting

Outputting Errors to a Log file

Good morning. Every so often, I have copy scripts that don't complete, but I don't immediately know why. It usually ends up being a permissions issue or a length issue. The scripts edit a log file, so I'd like to include any copy errors/issues in that file to check if the copies... (4 Replies)
Discussion started by: Stellaman1977
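A hedged sketch: redirect each copy's stderr into the log the script already writes, and record a marker line when the copy fails. The paths, variables, and log name are hypothetical:

    # cp's own error text lands in the log; a timestamped line marks failures.
    log=/var/log/copyjob.log
    if ! cp "$src" "$dst" 2>> "$log"; then
        printf '%s copy failed: %s -> %s\n' "$(date)" "$src" "$dst" >> "$log"
    fi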

10. Shell Programming and Scripting

Outputting data from log file to report

I have a log file that looks like this. The lines are grouped, 2 lines per entry.

M: 2019-01-25 13:02:31.698 P25, received network transmission from KI4EKI to TG 10282
M: 2019-01-25 13:02:35.694 P25, network end of transmission, 4.3 seconds, 1% packet loss
M: 2019-01-25 13:02:38.893 P25,... (7 Replies)
Discussion started by: ae4ml
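A hedged sketch of a report pass over such a log: remember the fields of each "received" line, then emit one report row when the matching "end of transmission" line arrives. The field positions are assumptions read off the two sample lines above, and the log name is hypothetical:

    # On the "received" line: $2=date, $3=time, $9=callsign, $12=talkgroup.
    # On the "end" line: $9=duration in seconds, $11=packet loss.
    awk '/received network transmission/ { date=$2; time=$3; call=$9; tg=$12 }
         /network end of transmission/   { printf "%s %s  %-8s TG %-6s %ss  %s loss\n",
                                                  date, time, call, tg, $9, $11 }
    ' p25.log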
XML::SAX::ByRecord(3pm)        User Contributed Perl Documentation        XML::SAX::ByRecord(3pm)

NAME
    XML::SAX::ByRecord - Record oriented processing of (data) documents

SYNOPSIS
        use XML::SAX::Machines qw( ByRecord ) ;

        my $m = ByRecord(
            "My::RecordFilter1",
            "My::RecordFilter2",
            ...
            {
                Handler => $h, ## optional
            }
        );

        $m->parse_uri( "foo.xml" );

DESCRIPTION
    XML::SAX::ByRecord is a SAX machine that treats a document as a series
    of records. Everything before and after the records is emitted as-is
    while the records are excerpted into little mini-documents and run one
    at a time through the filter pipeline contained in ByRecord.

    The output is a document that has the same exact things before, after,
    and between the records that the input document did, but which has run
    each record through a filter. So if a document has 10 records in it,
    the per-record filter pipeline will see 10 sets of ( start_document,
    body of record, end_document ) events. An example is below.

    This has several use cases:

    o   Big, record oriented documents

        Big documents can be treated a record at a time with various DOM
        oriented processors like XML::Filter::XSLT.

    o   Streaming XML

        Small sections of an XML stream can be run through a document
        processor without holding up the stream.

    o   Record oriented style sheets / processors

        Sometimes it's just plain easier to write a style sheet or SAX
        filter that applies to a single record at a time, rather than
        having to run through a series of records.

  Topology
    Here's how the innards look:

       +-----------------------------------------------------------+
       |                   An XML::SAX::ByRecord                    |
       | Intake                                                     |
       |   +----------+    +---------+         +--------+  Exhaust  |
     --+-->| Splitter |--->| Stage_1 |-->...-->| Merger |-----------+----->
       |   +----------+    +---------+         +--------+           |
       |        |                                  ^                |
       |        |                                  |                |
       |        +---------->-----------------------+                |
       |          Events not in any records                         |
       +-----------------------------------------------------------+

    The "Splitter" is an XML::Filter::DocSplitter by default, and the
    "Merger" is an XML::Filter::Merger by default. The line that bypasses
    the "Stage_1 ..." filter pipeline is used for all events that do not
    occur in a record. All events that occur in a record pass through the
    filter pipeline.

  Example
    Here's a quick little filter to uppercase text content:

        package My::Filter::Uc;

        use vars qw( @ISA );
        @ISA = qw( XML::SAX::Base );

        use XML::SAX::Base;

        sub characters {
            my $self = shift;
            my ( $data ) = @_;
            $data->{Data} = uc $data->{Data};
            $self->SUPER::characters( @_ );
        }

    And here's a little machine that uses it:

        $m = Pipeline(
            ByRecord( "My::Filter::Uc" ),
            $out,
        );

    When fed a document like:

        <root> a
            <rec>b</rec> c
            <rec>d</rec> e
            <rec>f</rec> g
        </root>

    the output looks like:

        <root> a
            <rec>B</rec> c
            <rec>D</rec> e
            <rec>F</rec> g
        </root>

    and the My::Filter::Uc got three sets of events like:

        start_document
          start_element: <rec>
            characters: 'b'
          end_element: </rec>
        end_document

        start_document
          start_element: <rec>
            characters: 'd'
          end_element: </rec>
        end_document

        start_document
          start_element: <rec>
            characters: 'f'
          end_element: </rec>
        end_document

METHODS
    new
            my $d = XML::SAX::ByRecord->new( @channels, \%options );

        Longhand for calling the ByRecord function exported by
        XML::SAX::Machines.

CREDIT
    Proposed by Matt Sergeant, with advice by Kip Hampton and Robin Berjon.

Writing an aggregator
    To be written. Pretty much just that "start_manifold_processing" and
    "end_manifold_processing" need to be provided. See XML::Filter::Merger
    and its source code for a starter.

perl v5.10.0                       2009-06-11                XML::SAX::ByRecord(3pm)