XMLSORT(1p)           User Contributed Perl Documentation           XMLSORT(1p)
NAME
xmlsort - sorts 'records' in XML files
SYNOPSIS
xmlsort -r=<recordname> [ <other options> ] [ <filename> ]
Options:
-r <name> name of the elements to be sorted
-k <keys> child nodes to be used as sort keys
-i ignore case when sorting
-s normalise whitespace when comparing sort keys
-t <dir> buffer records to named directory rather than in memory
-m <bytes> set memory chunk size for disk buffering
-h help - display the full documentation
Example:
xmlsort -r 'person' -k 'lastname;firstname' -i -s in.xml >out.xml
DESCRIPTION
This script takes an XML document either on STDIN or from a named file and writes a sorted version of the file to STDOUT. The "-r" option
should be used to identify 'records' in the document - the bits you want sorted. Elements before and after the records will be unaffected
by the sort.
OPTIONS
Here is a brief summary of the command line options (and the XML::Filter::Sort options which they correspond to). For more details see
XML::Filter::Sort.
-r <recordname> (Record)
The name of the elements to be sorted. This can be a simple element name like 'person' or a pathname like 'employees/person' (only
person elements contained directly within an employees element).
-k <keys> (Keys)
Semicolon separated list of elements (or attributes) within a record which should be used as sort keys. Each key can optionally be
followed by 'alpha' or 'num' to indicate alphanumeric or numeric sorting, and 'asc' or 'desc' for ascending or descending order (eg: -k
'lastname;firstname;age,n,d').
-i (IgnoreCase)
This option makes sort comparisons case insensitive.
-s (NormaliseKeySpace)
By default all whitespace in the sort key elements is considered significant. Specifying -s will cause leading and trailing whitespace
to be stripped and internal whitespace runs to be collapsed to a single space.
-t <directory> (TempDir)
When sorting large documents, it may be prudent to use disk buffering rather than memory buffering. This option allows you to specify
where temporary files should be written.
-m <bytes> (MaxMem)
If you use the -t option to enable disk buffering, records will be collected in memory in 'chunks' of up to about 10 megabytes before
being sorted and spooled to temporary files. This option allows you to specify a larger chunk size. A suffix of K or M indicates
kilobytes or megabytes respectively.
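The combined effect of the -k, -i and -s options can be sketched in Python (a tree-based simplification for illustration only — xmlsort itself streams SAX events via XML::Filter::Sort; the element and key names here are made up):

```python
import re
import xml.etree.ElementTree as ET

def normalise(text):
    # -s: strip leading/trailing whitespace, collapse internal runs to one space
    return re.sub(r'\s+', ' ', text or '').strip()

def sort_records(xml_text, record='person', keys=('lastname', 'firstname')):
    # Sort the <record> children of the root element by the text of
    # the named child elements, compared case-insensitively (-i).
    root = ET.fromstring(xml_text)
    records = [el for el in root if el.tag == record]

    def sort_key(el):
        return tuple(normalise(el.findtext(k)).lower() for k in keys)

    for el in records:
        root.remove(el)
    for el in sorted(records, key=sort_key):
        root.append(el)
    return ET.tostring(root, encoding='unicode')

doc = ('<employees>'
       '<person><lastname>  Smith </lastname><firstname>Ann</firstname></person>'
       '<person><lastname>jones</lastname><firstname>Bob</firstname></person>'
       '</employees>')
print(sort_records(doc))
```

With -i and -s in effect, '  Smith ' normalises to 'smith' for comparison, so 'jones' sorts ahead of it despite the differing case and padding.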
SEE ALSO
This script uses the following modules:
XML::SAX::ParserFactory
XML::Filter::Sort
XML::SAX::Writer
AUTHOR
Grant McLean <grantm@cpan.org>
COPYRIGHT
Copyright (c) 2002 Grant McLean. All rights reserved. This program is free software; you can redistribute it and/or modify it under the
same terms as Perl itself.
perl v5.12.4 2002-06-14 XMLSORT(1p)
XML::Filter::DocSplitter(3pm) User Contributed Perl Documentation XML::Filter::DocSplitter(3pm)
NAME
XML::Filter::DocSplitter - Multipass processing of documents
SYNOPSIS
## See XML::SAX::???? for an easier way to use this filter.
use XML::SAX::Machines qw( Machine ) ;
my $m = Machine(
[ Intake => "XML::Filter::DocSplitter" => qw( Filter ) ],
[ Filter => "My::Filter" => qw( Merger ) ],
[ Merger => "XML::Filter::Merger" => qw( Output ) ],
[ Output => *STDOUT ],
);
## Let the distributor coordinate with the merger
## XML::SAX::Manifold does this for you.
$m->Intake->set_aggregator( $m->Merger );
$m->parse_file( "foo" );
DESCRIPTION
XML::Filter::DocSplitter is a SAX filter that allows you to apply a filter to repeated sections of a document. It splits a document up at
predefined elements into multiple documents, and the filter is run on each of them. The result can be left as a stream of separate
documents or combined back into a single document using a filter like XML::Filter::Merger.
By default, the input document is split at all children of the root element. By that reckoning, this document has three sub-documents in
it:
<doc>
<subdoc> .... </subdoc>
<subdoc> .... </subdoc>
<subdoc> .... </subdoc>
</doc>
When used without an aggregator, all events up to the first record are lost; with an aggregator, they are passed directly in to the
aggregator as the "first" document. All events between the records (the "\n" text nodes, in this case) are also passed directly to
the merger (these will arrive between the end_document and start_document calls for each of the records), as are all events from the last
record until the end of the input document. This means that the first document, as seen by the merger, is incomplete; it's missing its
end_element, which is passed later.
The approach of passing events from the input document right on through to the merger differs from the way XML::Filter::Distributor works.
This class is derived from XML::SAX::Base, see that for details.
METHODS
new
my $d = XML::Filter::DocSplitter->new(
Handler => $h,
Aggregator => $a, ## optional
);
set_aggregator
$h->set_aggregator( $a );
Sets the SAX filter that will stitch the resulting subdocuments back together. Set to "undef" to prevent such stitchery.
The aggregator should support the "start_manifold_document", "end_manifold_document", and "set_include_all_roots" methods as described
in XML::Filter::Merger.
get_aggregator
my $a = $h->get_aggregator;
Gets the SAX filter that will stitch the resulting subdocuments back together.
set_split_path
$h->set_split_path( "/a/b/c" );
Sets the pattern to use when splitting the document. Patterns are a tiny little subset of the XPath language:
Pattern Description
======= ===========
/*/* splits the document on children of the root elt (default)
//record splits each <record> elt in to a document
/*/record splits each <record> child of the root elt
/a/b/c/d splits each of the <d> elts in to a document
get_split_path
my $a = $h->get_split_path;
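The default /*/* splitting behaviour described above can be sketched in Python (a tree-based simplification for illustration — the real filter splits a SAX event stream in a single pass rather than building a tree):

```python
import xml.etree.ElementTree as ET

def split_docs(xml_text, record='subdoc'):
    # Split the input at each matching child of the root element,
    # yielding each record as its own standalone document string.
    root = ET.fromstring(xml_text)
    return [ET.tostring(el, encoding='unicode')
            for el in root if el.tag == record]

doc = ('<doc>'
       '<subdoc><x>1</x></subdoc>'
       '<subdoc><x>2</x></subdoc>'
       '<subdoc><x>3</x></subdoc>'
       '</doc>')
parts = split_docs(doc)
print(len(parts))
```

Each entry in the result corresponds to one of the sub-documents that the downstream filter sees between a start_document and end_document pair.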
LIMITATIONS
Can only feed a single aggregator at the moment :). I can fix this with a bit of effort.
AUTHOR
Barrie Slaymaker <barries@slaysys.com>
COPYRIGHT
Copyright 2000, Barrie Slaymaker, All Rights Reserved.
You may use this module under the terms of the Artistic, GPL, or the BSD licenses.
perl v5.10.0 2009-09-02 XML::Filter::DocSplitter(3pm)