Full Discussion: Edit Multiple Files in VI
Post 41444 by criglerj on Monday 6th of October 2003 12:16:27 PM
When yanking, put the lines you want to copy into a named buffer, then edit the other file. E.g.,
Code:
$ vi foo1        (open the file you want to copy from)
"a3yy            (yank three lines into named buffer a)
:e foo2          (switch to the second file in the same vi session)
3G               (move to line 3)
"ap              (put the contents of buffer a below the current line)
:wq              (write and quit)

Use your favorite variation on the "y" and "p" commands.
 

10 More Discussions You Might Find Interesting

1. AIX

Locking a file when using VI to prevent multiple-edit sessions by diff users

At the office, we often have to edit one file with VI. We are 4-6 workers doing it and sometimes can be done at the same time. We have found a problem and want to prevent it with a file lock. Is it possible and how ? problem : Worker-a starts edit VI session on File-A at 1PM Worker-b... (14 Replies)
Discussion started by: Browser_ice

2. Shell Programming and Scripting

How to edit file sections that cross multiple lines?

Hello, I'm wondering where I could go to learn how to edit file sections that cross multiple lines. I'm wanting to write scripts that will add Gnome menu entries for all users on a system for scripts I write, etc. I can search and replace simple examples with sed, but this seems more complex. ... (8 Replies)
Discussion started by: Narnie

3. Shell Programming and Scripting

Read and edit multiple files using a while loop

Hi all, I would like to simply read a file which lists a number of pathnames and files, then search and replace key strings using a few vi commands: :1,$s/search_str/replace_str/g<return> but I am not sure how to automate the <return> of these vi commands when I am putting this in a... (8 Replies)
Discussion started by: cyberfrog

4. UNIX for Dummies Questions & Answers

Using AWK: Extract data from multiple files and output to multiple new files

Hi, I'd like to process multiple files. For example: file1.txt file2.txt file3.txt Each file contains several lines of data. I want to extract a piece of data and output it to a new file. file1.txt ----> newfile1.txt file2.txt ----> newfile2.txt file3.txt ----> newfile3.txt Here is... (3 Replies)
Discussion started by: Liverpaul09

5. UNIX for Advanced & Expert Users

awk - remove block of text, multiple actions for 'if', inline edit

I'm having a couple of issues. I'm trying to edit a nagios config and remove a host definition if a certain "host_name" is found. My thought is I would find host definition block containing the host_name I'm looking for and output the line numbers for the first and last lines. Using set, I will... (9 Replies)
Discussion started by: mglenney

6. Shell Programming and Scripting

Need Help to Edit multiple column of a file

Hello Team, I want to know if there is any one liner command , using which I can edit multiple column of a file. input file input.txt (comma separated), taran, 12.45, uttam, 23.40, babay karan, 12.45, raju, 11.40, rahulg I want to update, 2nd and 4th column, but want all those column... (8 Replies)
Discussion started by: Uttam Maji

7. Shell Programming and Scripting

Help require to edit multiple files

I have 6 different pipe delimiter files. My loads failing due to missing company code. File1: 31 st field is company code. 402660076310|2014-12-10 17:22:39|2280361|MRYKI|1||CA|92507|US||1|1|0|0|0||N|A1|ONT|1001891771660009250700402660076310|WM|0201|RALA |2014-12-12|5|2014-12-12|5||FRI - 12... (4 Replies)
Discussion started by: srikanth38

8. Shell Programming and Scripting

Gunzip and edit many files

Experts - I have a requirement to gunzip and edit many files in a pair of directories. I have two scripts that work great when run separately, but I'm having problems combining the two. The goal is to gunzip the files found in the first script and pipe them to the bash/sed script and... (9 Replies)
Discussion started by: timj123

9. Shell Programming and Scripting

Edit and replace the multiple values in a file in one iteration

Hi All, I am preserving OLD and NEW values and want to replace the values in one go instead of using multiple sed and mv commands. Please help. echo "\nEnter the new qStart time '${CODE}' - (Hit Enter for No Change): \c" read NEW echo "\nEnter the new qStop time '${CODE}' - (Hit Enter for... (2 Replies)
Discussion started by: sdosanjh

10. Shell Programming and Scripting

Using sed to edit multiple files

Created a shell script to invoke sed to edit multiple files, but am missing something. Here's the shell script: oracle:$ cat edit_scripts.sh #!/bin/sh #------------------------------------------------------------------------------ # edit_scripts.sh # # This script executes sed to make global... (4 Replies)
Discussion started by: edstevens
XML::Filter::Merger(3pm)				User Contributed Perl Documentation				  XML::Filter::Merger(3pm)

NAME
       XML::Filter::Merger - Assemble multiple SAX streams into one document

SYNOPSIS
       ## See XML::SAX::Manifold and XML::SAX::ByRecord for easy ways
       ## to use this processor.

       my $w = XML::SAX::Writer->new( Output => *STDOUT );
       my $h = XML::Filter::Merger->new( Handler => $w );
       my $p = XML::SAX::ParserFactory->parser( Handler => $h );

       ## To insert second and later docs in to the first doc:
       $h->start_manifold_document( {} );
       $p->parse_file( $_ ) for @ARGV;
       $h->end_manifold_document( {} );

       ## To insert multiple docs inline (especially useful if
       ## a subclass does the inline parse):
       $h->start_document( {} );
       $h->start_element( { ... } );
       ....
       $h->start_element( { Name => "foo", ... } );
       $p->parse_uri( $uri );  ## Body of $uri inserted in <foo>...</foo>
       $h->end_element( { Name => "foo", ... } );
       ...
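
       The SYNOPSIS assumes a parser and a writer that already exist.  For
       reference, here is a self-contained sketch of the manifold usage; the
       element names and the captured-string output are illustrative
       assumptions, not part of this module's interface:

           #!/usr/bin/perl
           use strict;
           use warnings;

           use XML::SAX::Writer;
           use XML::SAX::ParserFactory;
           use XML::Filter::Merger;

           # Capture the merged document in a string rather than STDOUT.
           my $out = '';
           my $w = XML::SAX::Writer->new( Output => \$out );
           my $h = XML::Filter::Merger->new( Handler => $w );
           my $p = XML::SAX::ParserFactory->parser( Handler => $h );

           # Manifold mode: the first document parsed is the master; the
           # guts of each later document are inserted just before the
           # master's closing root tag.
           $h->start_manifold_document( {} );
           $p->parse_string( "<doc><heading>Merged</heading></doc>" );
           $p->parse_string( "<extra><item>one</item></extra>" );
           $h->end_manifold_document( {} );

           print "$out\n";
           # Expect something like:
           #   <doc><heading>Merged</heading><item>one</item></doc>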
DESCRIPTION
       Combines several documents into one "manifold" document.  This can be
       done in two ways, both of which start by parsing a master document
       into which (the guts of) secondary documents will be inserted.

   Inlining Secondary Documents
       The most SAX-like way is to simply pause the parsing of the master
       document between the two events where you want to insert a secondary
       document, and parse the complete secondary document right then and
       there so its events are inserted in the pipeline at the right spot.
       XML::Filter::Merger only passes the content of the secondary
       document's root element:

           my $h = XML::Filter::Merger->new( Handler => $w );

           $h->start_document( {} );
           $h->start_element( { Name => "foo1" } );
           $p->parse_string( "<foo2><baz /></foo2>" );
           $h->end_element( { Name => "foo1" } );
           $h->end_document( {} );

       results in $w seeing a document like "<foo1><baz/></foo1>".

       This technique is especially useful when subclassing
       XML::Filter::Merger to implement XInclude-like behavior.  Here's a
       useless example that inserts some content after each "characters()"
       event:

           package Subclass;

           use vars qw( @ISA );
           @ISA = qw( XML::Filter::Merger );

           sub characters {
               my $self = shift;

               return $self->SUPER::characters( @_ )    ## **
                   unless $self->in_master_document;    ## **

               my $r = $self->SUPER::characters( @_ );

               $self->set_include_all_roots( 1 );
               XML::SAX::PurePerl->new( Handler => $self )
                   ->parse_string( "<hey/>" );

               return $r;
           }

           ## **: It is often important to use the recursion guard shown
           ## here to protect the decision making logic that should only be
           ## run on the events in the master document from being run on
           ## events in the subdocument.  Of course, if you want to apply
           ## the logic recursively, just leave the guard code out (and,
           ## yes, in this example, the guard code is phrased in a slightly
           ## redundant fashion, but we want to make the idiom clear).

       Feeding this filter "<foo> </foo>" results in "<foo> <hey/></foo>".
       We've called set_include_all_roots( 1 ) to get the secondary
       document's root element included.

   Inserting Manifold Documents
       A more involved way, suitable for handling consecutive documents, is
       to use the two non-SAX events--"start_manifold_document" and
       "end_manifold_document"--that are called before the first document to
       be combined and after the last one, respectively.

       The first document to be started after the "start_manifold_document"
       is the master document and is emitted as-is, except that it will
       contain the contents of all of the other documents just before the
       root "end_element()" tag.  For example:

           $h->start_manifold_document( {} );
           $p->parse_string( "<foo1><bar /></foo1>" );
           $p->parse_string( "<foo2><baz /></foo2>" );
           $h->end_manifold_document( {} );

       results in "<foo1><bar /><baz /></foo1>".

   The details
       In case the above was a bit vague, here are the rules this filter
       lives by (a runnable sketch illustrating them follows this list).

       For the master document:

       o   Events before the root "end_element" are forwarded as received.
           Because of the rules for secondary documents, any secondary
           documents sent to the filter in the midst of a master document
           will be inserted inline as their events are received.

       o   All remaining events, from the root "end_element" on, are
           buffered until end_manifold_document() is received, and are then
           forwarded.

       For secondary documents:

       o   All events before the root "start_element" are discarded.  There
           is no way to recover these (though we can add an option for most
           non-DTD events, I believe).

       o   The root "start_element" is discarded by default, or forwarded if
           "set_include_all_roots( $v )" has been used to set a true value.
       o   All events up to, but not including, the root "end_element" are
           forwarded as received.

       o   The root "end_element" is discarded or forwarded, depending on
           whether the matching "start_element" was.

       o   All remaining events, up to and including the "end_document", are
           forwarded and processed.

       o   Secondary documents may contain other secondary documents.

       o   Secondary documents need not be well formed.  They must, however,
           be well balanced.

       This requires very little buffering and is "most natural" with the
       limitations:

       o   All of each secondary document's events must be received between
           two consecutive events of its master document.  This is because
           most master document events are not buffered and this filter
           cannot tell from which upstream source a document came.

       o   If the master document should happen to have some egregiously
           large amount of whitespace, commentary, or illegal events after
           the root element, buffer memory could be huge.  This should be
           exceedingly rare, even non-existent in the real world.

       o   If any documents are not well balanced, the result won't be.
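
       As a concrete illustration of the rules above, the following sketch
       drives a master document by hand and parses one complete secondary
       document in the middle of it.  The element names are made up for
       illustration, and a real SAX stream would also carry Attributes,
       namespace fields, and so on in each element structure:

           #!/usr/bin/perl
           use strict;
           use warnings;

           use XML::SAX::Writer;
           use XML::SAX::PurePerl;
           use XML::Filter::Merger;

           my $out = '';
           my $h = XML::Filter::Merger->new(
               Handler => XML::SAX::Writer->new( Output => \$out ) );

           # Master document events, sent by hand.
           $h->start_document( {} );
           $h->start_element( { Name => "report", Attributes => {} } );

           # A complete secondary document parsed mid-stream: its events
           # land right here, but its pre-root events and its root element
           # are discarded, so only <row>2</row> is forwarded.
           XML::SAX::PurePerl->new( Handler => $h )
               ->parse_string( "<wrapper><row>2</row></wrapper>" );

           $h->end_element( { Name => "report" } );
           $h->end_document( {} );

           print "$out\n";
           # Expect something like: <report><row>2</row></report>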
METHODS
       new
               my $d = XML::Filter::Merger->new( \%options );

       reset
           Clears the filter after an accident.  Useful when reusing the
           filter.  new() and start_manifold_document() both call this.

       start_manifold_document
           This must be called before the master document's
           "start_document()" if you want XML::Filter::Merger to insert
           documents that will be sent after the master document.  It does
           not need to be called if you are going to insert secondary
           documents by sending their events in the midst of processing the
           master document.

           It is passed an empty ({}) data structure.

   Additional Methods
       These are provided to make it easy for subclasses to find out roughly
       where they are in the document structure.  Generally, these should be
       called after calling SUPER::start_...() and before calling
       SUPER::end_...() to be accurate.

       in_master_document
           Returns TRUE if the current event is in the first top level
           document.

       document_depth
           Gets how many nested documents surround the current document.  0
           means that you are in a top level document.  In manifold mode,
           this may or may not be a secondary document: secondary documents
           may also follow the primary document, in which case they have a
           document depth of 0.

       element_depth
           Gets how many nested elements surround the current element in the
           current input document.  Does not count elements from documents
           surrounding this document.

       top_level_document_number
           Returns the number of the top level document in a manifold
           document.  This is 0 for the first top level document, which is
           always the master document.

       end_manifold_document
           This must be called after the last document's end_document is
           called.  It is passed an empty ({}) data structure which is
           passed on to the next processor's end_document() call.  This call
           also causes the end_element() for the root element to be passed
           on.

       set_include_all_roots
               $h->set_include_all_roots( 1 );

           Setting this option causes the merger to include all root element
           nodes, not just the first document's.  This means that later
           documents are treated as subdocuments of the output document,
           rather than as envelopes carrying subdocuments.

           Given the three documents received are:

               Doc1: <root1><foo></root1>
               Doc2: <root2><bar></root2>
               Doc3: <root3><baz></root3>

           then with this option cleared (the default), the result looks
           like:

               <root1><foo><bar><baz></root1>

           This is useful when processing document oriented XML and each
           upstream filter channel gets a complete copy of the document.
           This is the case with the machine XML::SAX::Manifold and the
           splitting filter XML::Filter::Distributor.

           With this option set, the result looks like:

               <root1><foo><root2><bar></root2><root3><baz></root3></root1>

           This is useful when processing record oriented XML, where the
           first document only contains the preamble and postamble for the
           records and not all of the records.  This is the case with the
           machine XML::SAX::ByRecord and the splitting filter
           XML::Filter::DocSplitter.

           The two splitter filters mentioned set this feature
           appropriately.  (A runnable sketch contrasting the two behaviors
           follows this section.)
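
       A sketch contrasting the default behavior with
       set_include_all_roots( 1 ), using the Doc1/Doc2/Doc3 shape from
       above.  The merge() helper and the exact serialized output are
       assumptions for illustration:

           #!/usr/bin/perl
           use strict;
           use warnings;

           use XML::SAX::Writer;
           use XML::SAX::ParserFactory;
           use XML::Filter::Merger;

           # Hypothetical helper: merge a list of XML strings in manifold
           # mode and return the serialized result.
           sub merge {
               my ( $include_all_roots, @docs ) = @_;

               my $out = '';
               my $h = XML::Filter::Merger->new(
                   Handler => XML::SAX::Writer->new( Output => \$out ) );
               my $p = XML::SAX::ParserFactory->parser( Handler => $h );

               $h->start_manifold_document( {} );
               $h->set_include_all_roots( $include_all_roots );
               $p->parse_string( $_ ) for @docs;
               $h->end_manifold_document( {} );

               return $out;
           }

           my @docs = (
               "<root1><foo/></root1>",
               "<root2><bar/></root2>",
               "<root3><baz/></root3>",
           );

           print merge( 0, @docs ), "\n";
           # roughly: <root1><foo/><bar/><baz/></root1>

           print merge( 1, @docs ), "\n";
           # roughly: <root1><foo/><root2><bar/></root2><root3><baz/></root3></root1>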
LIMITATIONS
       The events before and after a secondary document's root element
       events are discarded.  It is conceivable that characters, PIs and
       commentary outside the root element might need to be kept.  This may
       be added as an option.

       The DocumentLocators are not properly managed: they should be saved
       and restored around each secondary document.

       Does not yet buffer all events after the first document's root
       end_element event.

       If these bite you, contact me.

AUTHOR
       Barrie Slaymaker <barries@slaysys.com>

COPYRIGHT
       Copyright 2002, Barrie Slaymaker, All Rights Reserved.  You may use
       this module under the terms of the Artistic, GNU Public, or BSD
       licenses, your choice.

perl v5.10.0                      2009-09-02          XML::Filter::Merger(3pm)