Full Discussion: Quick Question
UNIX for Dummies Questions & Answers, Post 69037 by IAMTHEEVILBEAN, Sunday 10th of April 2005, 06:16:52 PM
Quick Question

Hi, I am new to UNIX, and am learning from this tutorial : http://www.ee.surrey.ac.uk/Teaching/Unix/index.html

It keeps telling me to add files downloaded from the internet (like .txt files) to the directory, and I don't know how to.

How do I add .txt files to my directory? Thanks.
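In case it helps: once your browser has saved the file somewhere (often your home directory), you copy or move it into the directory with cp or mv; if the machine has a command-line downloader such as wget, you can fetch it straight into the directory instead. A minimal sketch, assuming the tutorial's science.txt and a ~/unixstuff directory (both names are just assumptions from that tutorial):

    # If the browser saved the file in your home directory:
    cp ~/science.txt ~/unixstuff/     # copy it into the directory
    mv ~/science.txt ~/unixstuff/     # or move it there instead of copying

    # If wget is installed, fetch the file directly into the directory:
    wget -P ~/unixstuff http://www.ee.surrey.ac.uk/Teaching/Unix/science.txt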
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Quick Question

I know in DOS, when you want to pull up your last/previous command, you hit the up/down arrows. How do you do that with UNIX? (3 Replies)
Discussion started by: Tracy Hunt
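For what it's worth, most UNIX shells can do this too; a minimal sketch for bash and ksh (details vary by shell):

    # bash: the up/down arrows recall history by default; also
    history          # list recent commands, numbered
    !!               # repeat the last command
    !42              # repeat command number 42 from the history list

    # ksh: enable command-line editing first, e.g. in ~/.profile
    set -o emacs     # then Ctrl-P recalls the previous command
    set -o vi        # or: press ESC, then k steps back through history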

2. Shell Programming and Scripting

A very quick question

Just a super quick question: how do you put a link in your PHP code? I want to make a link to something in the /tmp directory, i.e. how do you put an href into PHP; I think it's done a bit differently. Thanks, John (1 Reply)
Discussion started by: jmg5

3. UNIX for Dummies Questions & Answers

Quick Question

Hello There! I am trying to write this SIMPLE script in Bourne Shell but I keep on getting syntax errors. Can you see what I am doing wrong? I've done this before but I don't see the difference. I am simply trying to take the day of the week from our system and when the teachers sign on I want... (7 Replies)
Discussion started by: catbad
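A minimal sketch of the day-of-week branch in Bourne shell; the original script isn't shown, so the messages here are only an assumption about the intent:

    #!/bin/sh
    day=`date +%a`            # abbreviated day name: Mon, Tue, ...
    case "$day" in
        Mon) echo "Welcome back, teachers" ;;
        Fri) echo "Have a good weekend" ;;
        *)   echo "Today is $day" ;;
    esac

A common Bourne-shell syntax error in this kind of script is missing whitespace in a test, e.g. [ "$day" = "Mon" ] needs spaces around the brackets and the =.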

4. UNIX for Dummies Questions & Answers

Another quick question

Hi guys. In sed -e "s/$<//g", can the $< allow me to assign an input value to the variable? And do the double quotes check the previous context? (1 Reply)
Discussion started by: hamoudzz
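The double quotes don't check context; they let the shell expand variables inside the sed expression before sed ever sees it (inside single quotes the $ stays literal). A minimal sketch with an ordinary variable, since $< itself is special to make and to csh rather than to sed:

    pattern=foo
    echo foobar | sed -e "s/$pattern//g"   # shell expands $pattern: prints "bar"
    echo foobar | sed -e 's/$pattern//g'   # $pattern stays literal: prints "foobar"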

5. Shell Programming and Scripting

quick question

Does anyone know what $? means? I echoed it on my box (running the AIX Korn shell) and got 127. (2 Replies)
Discussion started by: penfold
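$? holds the exit status of the last command: 0 means success, non-zero means failure, and 127 specifically means the shell could not find the command. A quick demonstration:

    ls /etc/passwd ; echo $?      # 0: the command succeeded
    ls /no/such/file ; echo $?    # non-zero (often 2): the command failed
    nosuchcommand ; echo $?       # 127: command not found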

6. UNIX for Dummies Questions & Answers

Quick question

Hi, is there a simple way, using ksh, to find the byte position in a file at which a stated character appears? Many thanks, Helen (2 Replies)
Discussion started by: Bab00shka
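One hedged approach from ksh is to let awk do the counting. This sketch prints the 1-based byte offset of the first occurrence of the character, counting one byte per character plus the newline that ends each line (so it assumes a single-byte locale):

    awk -v c='X' '{
        p = index($0, c)                 # position of c within this line, 0 if absent
        if (p) { print off + p; exit }   # off = bytes consumed by earlier lines
        off += length($0) + 1            # +1 for the newline
    }' file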

7. UNIX for Dummies Questions & Answers

quick question

From the command prompt I grepped for two words on the same line, e.g. grep abc | grep xyz, and got that particular line. But when I vi that file, how do I search directly for that line? I'd appreciate it if anyone can provide an answer; thanks in advance. (2 Replies)
Discussion started by: pkolishetty
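Inside vi, a single search pattern can match a line containing both words; assuming they appear in that order:

    /abc.*xyz

If the order is unknown, search once for each order, e.g. /xyz.*abc for the reverse.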

8. UNIX for Dummies Questions & Answers

Quick question

Hello all, a quick question from a developer fairly new to Unix.

    if then
        completedLogFile=$logfile.$(date +%Y%m%d-%H:%M:%S)
        mv $logfile $completedLogFile
    fi

I understand that this portion of code is simply copying a tmp logfile to a completed logfile when a condition is true. The... (2 Replies)
Discussion started by: JohnnyBoy
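A completed sketch of that fragment; the original test was truncated, so the condition used here (log file exists and is non-empty) is only an assumption, and note that mv renames the file rather than copying it:

    logfile=/tmp/app.log                                  # hypothetical path
    if [ -s "$logfile" ]; then                            # assumed condition
        completedLogFile=$logfile.$(date +%Y%m%d-%H:%M:%S)
        mv "$logfile" "$completedLogFile"                 # rename tmp log to completed log
    fi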

9. UNIX for Dummies Questions & Answers

Quick question.

I'd like to list all userids on the system that have a .bashrc file in their home directory, with a command like "cat /etc/passwd | grep -f"; however, I'm not quite familiar with using grep. Any suggestions? (2 Replies)
Discussion started by: raidkridley
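grep -f expects a file of patterns, so it is an awkward fit here. A minimal sketch that instead reads /etc/passwd directly and tests each home directory:

    while IFS=: read user pass uid gid gecos home shell
    do
        [ -f "$home/.bashrc" ] && echo "$user"
    done < /etc/passwd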

10. Shell Programming and Scripting

Quick question

When I have a file like this:

    0084AF aj-123-a
    NAME Ajay
    NAME Kumar Engineer
    015ED6 ck-345-c
    020B25 ef-456-e
    027458 pq-890-p
    NAME Peter
    NAME Salob Doctor
    0318F0 xy-123-x
    NAME Xavier Arul
    NAME Yesu Supervisor
    0344CA de-456-d

where - The first NAME is followed by... (6 Replies)
Discussion started by: ajay41aj
WWW::RobotRules(3)					User Contributed Perl Documentation					WWW::RobotRules(3)

NAME
       WWW::RobotRules - Parse robots.txt files

SYNOPSIS
       require WWW::RobotRules;
       my $robotsrules = new WWW::RobotRules 'MOMspider/1.0';

       use LWP::Simple qw(get);

       $url = "http://some.place/robots.txt";
       my $robots_txt = get $url;
       $robotsrules->parse($url, $robots_txt);

       $url = "http://some.other.place/robots.txt";
       $robots_txt = get $url;
       $robotsrules->parse($url, $robots_txt);

       # Now we are able to check if a URL is valid for those servers that
       # we have obtained and parsed "robots.txt" files for.
       if ($robotsrules->allowed($url)) {
           $c = get $url;
           ...
       }

DESCRIPTION
       This module parses a /robots.txt file as specified in "A Standard for Robot Exclusion", described in
       <http://info.webcrawler.com/mak/projects/robots/norobots.html>. Webmasters can use the /robots.txt file to
       disallow conforming robots access to parts of their web site.

       The parsed file is kept in the WWW::RobotRules object, and this object provides methods to check if access to a
       given URL is prohibited. The same WWW::RobotRules object can parse multiple /robots.txt files.

       The following methods are provided:

       $rules = WWW::RobotRules->new($robot_name)
           This is the constructor for WWW::RobotRules objects. The first argument given to new() is the name of the
           robot.

       $rules->parse($robot_txt_url, $content, $fresh_until)
           The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the
           contents of the file.

       $rules->allowed($uri)
           Returns TRUE if this robot is allowed to retrieve this URL.

       $rules->agent([$name])
           Get/set the agent name. NOTE: Changing the agent name will clear the robots.txt rules and expire times out
           of the cache.

ROBOTS.TXT
       The format and semantics of the "/robots.txt" file are as follows (this is an edited abstract of
       <http://info.webcrawler.com/mak/projects/robots/norobots.html>):

       The file consists of one or more records separated by one or more blank lines. Each record contains lines of
       the form

         <field-name>: <value>

       The field name is case insensitive. Text after the '#' character on a line is ignored during parsing; this is
       used for comments. The following <field-names> can be used:

       User-Agent
           The value of this field is the name of the robot the record is describing access policy for. If more than
           one User-Agent field is present, the record describes an identical access policy for more than one robot.
           At least one field needs to be present per record. If the value is '*', the record describes the default
           access policy for any robot that has not matched any of the other records.

       Disallow
           The value of this field specifies a partial URL that is not to be visited. This can be a full path or a
           partial path; any URL that starts with this value will not be retrieved.

ROBOTS.TXT EXAMPLES
       The following example "/robots.txt" file specifies that no robots should visit any URL starting with
       "/cyberworld/map/" or "/tmp/":

         User-agent: *
         Disallow: /cyberworld/map/ # This is an infinite virtual URL space
         Disallow: /tmp/ # these will soon disappear

       This example "/robots.txt" file specifies that no robots should visit any URL starting with
       "/cyberworld/map/", except the robot called "cybermapper":

         User-agent: *
         Disallow: /cyberworld/map/ # This is an infinite virtual URL space

         # Cybermapper knows where to go.
         User-agent: cybermapper
         Disallow:

       This example indicates that no robots should visit this site further:

         # go away
         User-agent: *
         Disallow: /

SEE ALSO
       LWP::RobotUA, WWW::RobotRules::AnyDBM_File