to get a line above the search word
Post 302152825 by vgersh99 on Friday 21st of December 2007, 01:07:57 PM
Code:
sed -n '/[Ee]rror/{g;1!p;};h' abc.txt

(This is the classic "print the line immediately before a regexp" recipe from the sed one-liners collection, sed1line.)

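How it works: -n suppresses sed's default output, and the hold space acts as a one-line look-behind buffer. On every cycle, h saves the pattern space into the hold space; when a line matches /[Ee]rror/, g first pulls the held previous line back into the pattern space, and 1!p prints it (the 1! address skips a match on the very first line, which has nothing above it). One quirk: because g runs before h, a run of consecutive matching lines re-prints the line above the first match each time.

An awk rendering of the same idea, assuming the same abc.txt input (it behaves slightly differently on consecutive matches: awk prints the immediately preceding line even if that line matched too):

Code:
awk '/[Ee]rror/ && NR > 1 { print prev } { prev = $0 }' abc.txt
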
10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Query on how to search for a line and read the 4th word from that line

Assume I have a text file as below: me con pi ind ken pras ur me con rome ind kent pras urs pintu con mys ind pan pras ki con kit ind sys. My requirement: I need to search for "con rome" and, if it exists, print the 4th word from "rome", i.e. in the above example, since "con rome"... (4 Replies)
Discussion started by: jaggesh
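A hedged sketch for this request, assuming whitespace-separated fields and reading "4th word from rome" as the fourth field after the word rome on the matching line (the file name is illustrative):

Code:
awk '/con rome/ { for (i = 1; i <= NF; i++) if ($i == "rome" && i + 4 <= NF) print $(i + 4) }' file.txt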

2. Shell Programming and Scripting

How to search for a word and get the line above it.

Hey all, could anyone help me ... I want to search for a word and get the line above it. E.g. mytxt.log: My name is Jeevan. 240 is my address. My name is Jhon. 390 is my address. -------- I want to search for "240" and want the output as: My name is Jeevan. 240 is my address. ... (7 Replies)
Discussion started by: Jeevan Salunke
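Where grep supports context options (GNU and BSD grep both do), this is a one-liner: -B1 prints one line of leading context along with each match:

Code:
grep -B1 '240' mytxt.log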

3. Shell Programming and Scripting

Search for a word in a line and print the earlier pattern match

Hi all, I have almost 1000+ files and I want to search for a specific pattern. Looking forward to your input. Search for: word1.word2 (whichever procedure contains this word, I need the procedure name in the output). Expected output: procedure test1 procedure test2 procedure test3 procedure test4 ... (7 Replies)
Discussion started by: susau_79
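A sketch of one approach: remember the most recent line that starts a procedure and emit it whenever the pattern appears. The word1.word2 pattern comes from the question; the *.sql glob is an assumption about the file names:

Code:
awk '/^procedure/ { proc = $0 } /word1\.word2/ { print FILENAME ": " proc }' *.sql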

4. Shell Programming and Scripting

Search for the word to be deleted and delete the lines above this word, from P1 to P3

Hi, I have to search for a word in a text file and then delete the lines above the word found. For example, suppose the file is like this: Records P1 10,23423432 ,77:1 ,234:2 P2 10,9089004 ,77:1 ,234:2 ,87:123 ,9898:2 P3 456456 P1 :123,456456546 P2 abc:324234 (2 Replies)
Discussion started by: vsachan
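One reading of this request is "drop everything above the first line containing the searched word"; a sketch under that assumption, using abc:324234 from the sample data (the matching line itself is kept):

Code:
awk '/abc:324234/ { found = 1 } found' file.txt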

5. Shell Programming and Scripting

Need help to search for a word and return the line

Hi, I have this file format and can't seem to think of a solution. I need your help. I want to return the lines that say "Record" if their ID > 0. file.txt Record 1: :::::::::::::::::::::: :::::::::::::::::::::: ID "000001" :::::::::::::::::::::: :::::::::::::::::::::: ... (6 Replies)
Discussion started by: jakSun8
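A sketch assuming each Record N: header is eventually followed by an ID "..." line starting at column one, with the numeric part of the ID deciding whether the header is printed:

Code:
awk '/^Record/ { rec = $0 } /^ID/ { id = $2; gsub(/"/, "", id); if (id + 0 > 0) print rec }' file.txt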

6. Shell Programming and Scripting

Search for a word and export the 35 characters after that word using a shell script?

I have a file input.txt which has loads of weird characters, HTML tags and useful material. I want to display the 35 characters after the word "description", excluding weird characters like $$#$#@$#@***$# and without HTML tags, in a new file output.txt. Help me. Thanks in advance. My final goal is to... (11 Replies)
Discussion started by: sachit adhikari
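A sketch assuming GNU or BSD grep for the -o/-E flags: strip HTML tags, drop non-printable characters, then extract up to 35 characters following the word description:

Code:
sed 's/<[^>]*>//g' input.txt | tr -cd '[:print:]\n' | grep -oE 'description.{1,35}' > output.txt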

7. Shell Programming and Scripting

Read a file line by line and split it into an array word by word

Hi all, hope you guys had a wonderful weekend. I have a scenario in which I have to read a file line by line and check for a few words before redirecting to a file. I have searched the forum, but either those answers didn't work (perhaps because of my wrong understanding of how IFS... (6 Replies)
Discussion started by: Kingcobra
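In bash, a line can be read intact with read -r and then whitespace-split into an array with a second read. A minimal sketch, where the word being checked for and the output file are hypothetical:

Code:
while IFS= read -r line; do
    read -ra words <<< "$line"    # split the line into an array on whitespace
    for w in "${words[@]}"; do
        [ "$w" = "error" ] && echo "$line" >> matched.txt
    done
done < input.txt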

8. Shell Programming and Scripting

Find a word in a line and output which line the word occurs on / the number of times it occurred

I have a file, file.txt, which contains the following data: This is a file, my name is Karl, what is this process, karl is karl junior, file is a test file, file's name is file.txt My name is not Karl, my name is Karl Joey What is your name? Do you know your name and... (3 Replies)
Discussion started by: anuragpgtgerman
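awk's gsub() returns the number of substitutions it made, which doubles as a per-line occurrence counter. A sketch using the Karl example from the question:

Code:
awk '{ n = gsub(/[Kk]arl/, "&") } n { print "line " NR ": " n " occurrence(s)"; total += n } END { print "total: " total + 0 }' file.txt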

9. UNIX for Beginners Questions & Answers

Search for a word in a huge logfile and keep printing lines from that line until a date is found

Guys, I need an idea for one piece of logic... in shell scripting I am struggling with the logic... So the thing is: I need to search for a word in a huge log file, then keep printing the following lines from that line, and the output has to end when it finds a line with a date... because I know... (1 Reply)
Discussion started by: Prathi
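A sketch assuming the terminating lines can be recognized by a leading DD/MM/YYYY-style date; both SEARCHWORD and the date format are placeholders for whatever the real log uses:

Code:
awk '/SEARCHWORD/ { show = 1 } show { print; if (/^[0-9][0-9]\/[0-9][0-9]\/[0-9][0-9][0-9][0-9]/) show = 0 }' big.log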

10. UNIX for Beginners Questions & Answers

How to search for a word in a column header that fully matches the word, not partially, in awk?

I have a multicolumn text file with a header in the first row, like this. The headers are stored in an array called ., which contains . I want to search for each element of this array in that multicolumn text file. And I am using this awk approach: for ii in ${hdr} do gawk -vcol="$ii" -F... (1 Reply)
Discussion started by: Atta
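The usual fix is to compare header cells with ==, which tests the whole field, rather than ~, which matches substrings. A sketch that prints the column whose header equals $ii exactly (default field separator assumed, since the -F value is elided in the question):

Code:
gawk -v col="$ii" 'NR == 1 { for (i = 1; i <= NF; i++) if ($i == col) c = i; next } c { print $c }' file.txt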