Full Discussion: Help with grep
Post 302355437 by upstate_boy on Tuesday, 22 September 2009, 05:41 PM
I'm not one of the experts, but I think this will work for you:

grep -f 'computernamefile.txt' * >> output.txt

tip: make sure there are no blank lines in your pattern file; a blank line is an empty pattern, and an empty pattern matches every line.
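For example, if computernamefile.txt holds one search string per line, something like this should do it (the /tmp path is just a hypothetical destination; writing the results into the same directory means the shell creates output.txt before grep runs, so the * glob would make grep search its own output file):

grep -f computernamefile.txt ./* > /tmp/grep_results.txt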

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

MEM=`ps v $PPID| grep -i db2 | grep -v grep| awk '{ if ( $7 ~ " " ) { print 0 } else

Hi guys, I need to set the value of $7 to zero when $7 is NULL. I've tried the command below, but it doesn't work. Any ideas? Thanks, guys. MEM=`ps v $PPID | grep -i db2 | grep -v grep | awk '{ if ( $7 ~ " " ) { print 0 } else { print $7 } }'` Harby.
Discussion started by: hariza
4 Replies
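A minimal sketch of a fix for the thread above: $7 ~ " " tests whether field 7 contains a space, not whether it is empty, so compare against the empty string instead (untested, same pipeline otherwise):

MEM=`ps v $PPID | grep -i db2 | grep -v grep | awk '{ print ($7 == "" ? 0 : $7) }'`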

2. UNIX for Dummies Questions & Answers

| help | unix | grep - Can I use grep to return a string with exactly n matches?

Hello, I'm looking to use grep to return strings with exactly n matches. I'm building off this: ls -aLl /bin | grep '^.\{9\}x' | tr -s ' ' which prints lines such as: -rwxr-xr-x 1 root root 632816 Nov 25 2008 vi -rwxr-xr-x 1 root root 632816 Nov 25 2008 view -rwxr-xr-x 1 root root 16008 May 25 2008...
Discussion started by: MykC
7 Replies
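One standard trick for "exactly n occurrences" is to anchor the whole line and repeat a group that consumes exactly one match per iteration; as a sketch with n=3:

ls -aLl /bin | grep -E '^([^x]*x){3}[^x]*$'

This keeps only lines containing exactly three x characters, since nothing outside the repeated group is allowed to contain another x.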

3. UNIX for Dummies Questions & Answers

| help | unix | grep (GNU grep) 2.5.1 | advanced regex syntax

Hello, I'm working on unix with grep (GNU grep) 2.5.1. I'm going through some of the newer regex syntax using Regular Expression Reference - Advanced Syntax as a guide. ls -aLl /bin | grep "\(x\)" works, just highlighting 'x' wherever it occurs. I'm trying to get (?:) to work but...
Discussion started by: MykC
4 Replies
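For reference, (?:...) is Perl-compatible regex syntax, which grep's basic and extended modes do not understand; GNU grep accepts it with -P when built with PCRE support (still marked experimental in 2.5.1), e.g.:

ls -aLl /bin | grep -P '(?:rwx)+'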

4. Shell Programming and Scripting

grep for certain files using a file as input to grep and then move

Hi all, I need to grep for a few files whose names contain words like the one below. I want to put those names in a file, grep for the files that contain them, and move the matches to a new directory. Full file name: -C20091210.1000-20091210.1100_SMGBSC3:1000...
Discussion started by: anita07
2 Replies
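A sketch of one approach, with names.txt and the target directory as hypothetical placeholders: grep -l prints only the names of the files that contain any of the listed strings (-F treats them as fixed strings, since names like the one above contain regex metacharacters), and the loop moves each match:

grep -lF -f names.txt -- * | while IFS= read -r f; do
    mv -- "$f" /path/to/newdir/
done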

5. UNIX for Dummies Questions & Answers

Difference between grep, egrep & grep -i

Hi all, please, I need to know the difference between grep, egrep and grep -i when used to search through a file. My platform is SunOS 5.9 and I'm using the Korn shell. Regards, divroro12
Discussion started by: divroro12
2 Replies
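In short: egrep is the same as grep -E and enables extended regular expressions (|, +, ? work unescaped), while -i is an independent option that makes any of them case-insensitive. A quick sketch with a placeholder file:

grep 'warn' logfile          # basic regular expressions
egrep 'warn|error' logfile   # extended REs, same as grep -E
grep -i 'WARN' logfile       # case-insensitive match

On SunOS 5.9 the default /usr/bin/grep may lack -E, so use egrep or /usr/xpg4/bin/grep there.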

6. UNIX for Dummies Questions & Answers

Advanced grep'in... grep for data next to static element.

I have a directory I need to grep which consists of numbered subdirectories. The subdirectory names change daily. A file in this main directory shows which subdirectories are FULL backups and which are INCREMENTAL backups. My goal is to grep the directory for the word "full" and then...
Discussion started by: SysAdm2
2 Replies
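A sketch, with backup_index.txt standing in for whatever the index file is actually called: a case-insensitive whole-word match finds the FULL entries, and awk keeps just the subdirectory column (assuming it is the first field):

grep -iw 'full' backup_index.txt | awk '{ print $1 }'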

7. Shell Programming and Scripting

AWK/GREP: grep only lines starting with integer

I have an input file:
12.4 1.72849432773174e+01 -7.74784188610632e+01
12.5 9.59432114416327e-01 -7.87018212757537e+01
15.6 5.20139995965960e-01 -5.61612429666624e+01
29.3 3.76696387248366e+00 -7.42896194101892e+01
32.1 1.86899877018077e+01 -7.56508762501408e+01
35 6.98857157014640e+00...
Discussion started by: chrisjorg
2 Replies
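Either tool handles this; as a sketch (data.txt is a placeholder), match lines whose first non-blank character is a digit, or test awk's first field:

grep -E '^[[:blank:]]*[0-9]' data.txt
awk '$1 ~ /^[0-9]/' data.txt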

8. UNIX for Dummies Questions & Answers

Bash - CLI - grep - Passing result to grep through pipe

Hello. I want to get all loaded modules whose names are exactly 2 characters long and begin with "nv": lsmod | (e)grep '^nv???????????? I also want to get all loaded modules whose names begin with "nv" and are 2 to 7 characters long...
Discussion started by: jcdole
1 Reply
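A sketch of both queries: the module name is the first whitespace-delimited field of lsmod output, so the pattern just has to stop at the first space:

lsmod | grep -E '^nv[[:space:]]'                  # name is exactly "nv"
lsmod | grep -E '^nv[^[:space:]]{0,5}[[:space:]]' # "nv" plus 0-5 more characters (2-7 total)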

9. UNIX for Dummies Questions & Answers

Piping grep into awk, read the next line using grep

Hi, I have a number of files containing the information below:
Fundallinfo
6.3950 14.9715 14.0482
I would like to grep for Fundallinfo and use it to read the next line. Ideally I would like to read the three numbers that follow on the next line and...
Discussion started by: Paul Moghadam
2 Replies
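awk can do the search and the next-line read in one pass; GNU grep's -A1 is an alternative that prints one line of trailing context (file is a placeholder):

awk '/Fundallinfo/ { if (getline > 0) print $1, $2, $3 }' file
grep -A1 'Fundallinfo' file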

10. Shell Programming and Scripting

Inconsistent `ps -eaf -o args | grep -i sfs_pcard_load_file.ksh | grep -v grep | wc -l`

I have this line of code that checks whether the same script is currently running and returns the count: ps -eaf -o args | grep -i sfs_pcard_load_file.ksh | grep -v grep | wc -l Basically it is assigned to a variable: ISRUNNING=`ps -eaf -o args | grep -i sfs_pcard_load_file.ksh |...
Discussion started by: wtolentino
6 Replies
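The usual cause of the inconsistency is the grep process (and sometimes the calling shell) matching its own command line in the ps output. Bracketing one character of the pattern stops it from matching itself, which removes both the grep -v grep stage and its race; grep -c then replaces wc -l. A sketch:

ISRUNNING=`ps -eaf -o args | grep -ci '[s]fs_pcard_load_file.ksh'`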