unix script for repeating a command with a variable
Post 302128737 by vino on Thursday, 26 July 2007, 08:55 AM
langdatyagi, be aware of the rules.

You have already broken them twice. Once more and you will earn a temporary ban.
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Repeating commands in a script

I have written a script that I want to be repeated. The script that I wrote outputs the date, how many people are on the system, how many people logged in before me, and how many people logged in after me. Then, after it outputs those 4 lines, I want it to go back to the... (4 Replies)
Discussion started by: Dave2874
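One common way to get this behaviour is to wrap the report in an endless loop with a sleep between passes. A minimal sketch, assuming bash and a 60-second interval (the interval and the exact report commands are placeholders):

    #!/bin/bash
    # Hypothetical sketch: print the report, pause, and repeat until interrupted.
    while true; do
        date                                          # line 1: the date
        echo "users on the system: $(who | wc -l)"    # line 2: logged-in users
        # ... the other two report lines would go here ...
        sleep 60                                      # wait before the next pass
    done

Ctrl-C stops the loop; cron is the usual alternative when the interval is long.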

2. UNIX for Dummies Questions & Answers

repeating previous argument on command line?

Hi, is there a way in bash--or any other shell--to repeat the preceding argument on the command line? E.g., let's say I want to rename the file "/var/www/conf/httpd.conf" to "/var/www/conf/httpd.conf.bak". I want to be able to type mv /var/www/conf/httpd.conf, and then press a command key that... (6 Replies)
Discussion started by: hadarot
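For the rename in the question, bash already offers a few ways to reuse the argument without retyping it; a short sketch (the path is the one from the post, and all three forms are standard bash features):

    # Brace expansion duplicates the argument in place:
    mv /var/www/conf/httpd.conf{,.bak}
    # expands to: mv /var/www/conf/httpd.conf /var/www/conf/httpd.conf.bak

    # History expansion: !#$ is the last word of the line typed so far:
    mv /var/www/conf/httpd.conf !#$.bak

    # Readline: pressing Alt-. (yank-last-arg) inserts the last argument of
    # the previous command; "$_" holds the same value inside a script.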

3. Shell Programming and Scripting

Repeating awk command

Hi all, I have an awk command that needs to be run multiple times in a script on one file containing lots of fields of data. The file looks like this (the numbers are made up): 1234 2222 2223 2222 123 2223 3333 2323 3333 3321 3344 4444 The... (2 Replies)
Discussion started by: nistleloy
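Rather than re-running awk for each column, the repetition can usually be folded into a single pass; a hedged sketch, assuming whitespace-separated numeric columns and a hypothetical file name data.txt:

    # Sum every column in one awk invocation instead of one run per column.
    awk '{ for (i = 1; i <= NF; i++) sum[i] += $i }
         END { for (i = 1; i in sum; i++) print "column", i, "total:", sum[i] }' data.txt

If the awk command really must be repeated as-is, a shell loop such as for c in 1 2 3; do awk -v c=$c '{s += $c} END {print s}' data.txt; done does the same job, just less efficiently.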

4. UNIX for Advanced & Expert Users

Repeat output of last command w/o repeating last command

Is there a way to repeat the output of the last command for filtering without running the command again? All I could think of was to copy all the data to a text file and process it that way; is there another way? Say, for example, I want to grep server.server.lan from a dtrace that was pages long after I... (5 Replies)
Discussion started by: glev2005
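The shell does not keep a command's output around, so the usual workaround is to capture it once and filter the saved copy as often as needed; a sketch, where long_dtrace_command stands in for whatever produced the pages of output:

    long_dtrace_command | tee /tmp/trace.out     # watch the output live, keep a copy
    grep server.server.lan /tmp/trace.out        # filter the saved copy
    less /tmp/trace.out                          # or page through it again

script(1) records a whole terminal session to a file in much the same spirit.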

5. Shell Programming and Scripting

Repeating Substitution Command on VI

Hello folks, how do I write a command in vi that repeats the last substitution command? Here is what I want to do: starting from the lines 1 2 3 1 2 3 1 2 3 and running :.,+2s/\n/ /, I obtain: 1 2 3 1 2 3 1 (5 Replies)
Discussion started by: gogol_bordello
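vi remembers the last :s command, so it can be reapplied without retyping it. A short sketch of the relevant commands (the text after each command is only a description, and g& is vim-specific):

    &        repeat the last :s on the current line (normal mode, flags dropped)
    :&&      repeat it on the current line, keeping the flags
    :.,+2&   repeat it over a range, e.g. the current line plus the next two
    g&       vim only: repeat it over the whole file with the same flags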

6. Shell Programming and Scripting

repeating ftp script question

So I got this request to do this: - The script should check for the file every 15 minutes on the FTP server… if the file is not found, then the whole script exits. The file will only be created once a week, at random. I have gotten this far, but am kind of stuck; also, the sleep command doesn't work... (3 Replies)
Discussion started by: zapatur23
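A hedged sketch of the polling loop described above; the host, directory and file name are placeholders, and curl is used here to list the FTP directory (an ftp -n here-document would work the same way):

    #!/bin/bash
    # Poll every 15 minutes; exit as soon as the file is not on the server.
    HOST=ftp.example.com            # placeholder
    DIR=incoming                    # placeholder
    FILE=weekly_report.csv          # placeholder

    while :; do
        if ! curl -s "ftp://$HOST/$DIR/" | grep -q "$FILE"; then
            echo "$FILE not found on $HOST, exiting" >&2
            exit 1
        fi
        # the file is present: fetch or process it here, then wait
        sleep 900                   # 15 minutes
    done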

7. UNIX for Dummies Questions & Answers

Using sed command to remove multiple instances of repeating headers in one file?

Hi, I have concatenated multiple output files (from a Monte Carlo run) into one big output file. Each individual file has its own two-line header, so when I concatenate, there are multiple two-line headers (with the same wording) within the big file. How do I use the sed command to search for the... (1 Reply)
Discussion started by: rebazon
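One hedged approach with GNU sed, assuming the two header lines are known verbatim (HEADER LINE ONE and HEADER LINE TWO are placeholders) and the copy on lines 1-2 should be kept:

    # From line 3 onward, delete any line matching either header line.
    sed -e '3,$ { /^HEADER LINE ONE$/d; /^HEADER LINE TWO$/d; }' bigfile > bigfile.clean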

8. Shell Programming and Scripting

SH script, variable built command fails, but works at command line

I am working with an sh script on a Solaris 9 zone (Solaris 10 host) that grabs information to build the configuration command line. The variables Build64, SSLopt, CONFIGopt, and CC are populated in the script. The script includes CC=`which gcc` CONFIGopt=' --prefix=/ --exec-prefix=/usr... (8 Replies)
Discussion started by: oly_r
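A frequent cause of this symptom is that quote characters stored inside a variable are not re-parsed when the variable is expanded, so a command line that works when typed interactively falls apart when assembled from strings. A minimal sketch of the classic eval fix (the configure invocation is hypothetical; the variable values are taken from the post):

    #!/bin/sh
    CC=`which gcc`
    CONFIGopt=' --prefix=/ --exec-prefix=/usr'    # from the post (truncated)

    # Plain expansion word-splits the options but treats any embedded quote
    # characters as literal text:
    #   ./configure $CONFIGopt CC="$CC"
    # eval re-parses the assembled line the way the interactive shell would:
    eval "./configure $CONFIGopt CC=\"$CC\""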

9. Shell Programming and Scripting

awk and/or sed command to sum the values in repeating tags in an XML file

I have an XML file in which the tag <Amt Ccy="EUR">3.1</Amt> repeats. It appears under another tag, <Main>. I need to sum all the values of <Amt Ccy=""> (Ccy may vary) that appear under <Main>, using an awk and/or sed command. Can someone help? A sample looks like this: <root> <Main> ... (6 Replies)
Discussion started by: bk_12345
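Without a real XML parser, a common awk approach is to track whether the current line is inside <Main> and strip the tags around each Amt value; a sketch assuming each <Amt ...>value</Amt> element sits on its own line and the input file is called file.xml (both assumptions):

    awk '
        /<Main>/   { in_main = 1 }
        /<\/Main>/ { in_main = 0 }
        in_main && /<Amt / {
            val = $0
            sub(/.*<Amt[^>]*>/, "", val)   # drop everything up to the opening tag
            sub(/<\/Amt>.*/, "", val)      # drop the closing tag and what follows
            total += val
        }
        END { print total }
    ' file.xml

This sums every Amt under any <Main> block; per-block totals would need the sum printed and reset on each </Main>.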

10. UNIX for Dummies Questions & Answers

Need help with repeating variables in a shell script

I should preface this by saying I have never worked with shell scripts before, so this is all new to me. I was able to make something that worked, but it is terribly optimized, and I have no idea how to improve it. If anything, it's a pretty hilarious script: #!/bin/bash get_char() { ... (4 Replies)
Discussion started by: ricco19
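The usual cure for a script made of copy-pasted blocks that differ only in one value is a function plus a loop. A minimal sketch (the body of get_char and the list of inputs are placeholders, since the original script is truncated):

    #!/bin/bash
    get_char() {
        # stand-in for whatever the original get_char did with its argument
        printf 'processing %s\n' "$1"
    }

    for item in alpha beta gamma; do   # hypothetical list of inputs
        get_char "$item"
    done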