Transpose multi-line records into a single row
Posted by daveyabe on 06-16-2011 at 06:30 PM

Now that I've parsed out the data I need, I'm left with variable-length, multi-line records: fields are separated by newlines (\n) and records are separated by a single blank line ("").

At first I considered something like this to join each record's rows into a single row:

Code:
awk 'BEGIN {FS = "\n"; RS = ""} {print $1, $2, $3, $4, $5, etc. > "NEWdata.txt"}' data.txt

Then I realized these aren't fixed-length records; each one has a different number of rows, so I can't just enumerate the fields. Is there an easy way around this? Thanks!
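
For reference, one common way around the fixed-field assumption is to let awk handle however many fields each record turns out to have, rather than naming them one by one. A minimal sketch, reusing the data.txt and NEWdata.txt names from above and assuming a single space is the desired output delimiter:

Code:
# RS="" puts awk in paragraph mode: records are separated by blank lines.
# FS="\n" makes every line within a record its own field.
# Reassigning $1 to itself forces awk to rebuild $0 using OFS,
# so each record prints as one space-separated row, whatever its length.
awk 'BEGIN {RS=""; FS="\n"; OFS=" "} {$1 = $1; print > "NEWdata.txt"}' data.txt

If you'd rather handle each field explicitly (say, to quote or skip some of them), an equivalent loop over NF does the same thing:

Code:
awk 'BEGIN {RS=""; FS="\n"} {for (i = 1; i <= NF; i++) printf "%s%s", $i, (i < NF ? " " : "\n")}' data.txt > NEWdata.txt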
 
