Shell Programming and Scripting: Downloading jpgs from a gallery type website
Post 302899542 by workisnotfun, 04-29-2014 at 06:24 PM
I am having the most trouble with step two, I think.

I'm assuming the -oh is the options -o and -h combined?

Step 1 downloads the files fine, but they're stored in my directory as 1.html, 2.html, etc. with an extra dot-like character right after the extension (it isn't an actual period). I'm not sure if that's the cause, but Step 2 doesn't seem to be able to find any .html files,

and so Step 3 fails because there is no urls.txt. What could be the problem?
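
For context, step 2 is meant to do roughly the following, pulling the image URLs out of the saved pages into urls.txt (this is only a simplified sketch, not the exact script; the glob and the grep pattern here are placeholders):
Code:
# Sketch of step 2: extract image URLs from the downloaded pages into urls.txt.
# The *.html* glob is deliberately loose so it also matches names like "1.html." with the stray trailing dot.
for page in *.html*; do
    grep -Eo 'https?://[^"]+\.(jpg|jpeg|png|gif)' "$page"
done | sort -u > urls.txt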

---------- Post updated at 05:24 PM ---------- Previous update was at 04:55 PM ----------

I think I might have found a different problem actually.

Running this in the terminal works fine:
Code:
wget -nd -H -p -A jpg,jpeg,png,gif -e robots=off www.url.example

but when I put this in my bash script and run it, I get "awaiting response... 404 Not Found". For some reason a %0D gets appended to the end of the jpg URL, which I'm thinking makes wget request the wrong URL.
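
If it is a carriage-return problem (say, because the script was edited on Windows so the lines end in \r\n and the \r ends up glued to the URL), I'm guessing something like this would clean it up, though I'm not sure it's the right approach; the script name is just a placeholder, and dos2unix should do the same job if it's installed:
Code:
# Remove DOS carriage returns from the whole script (myscript.sh is a placeholder name).
tr -d '\r' < myscript.sh > myscript.tmp && mv myscript.tmp myscript.sh

# Or strip the \r from the URL variable just before calling wget inside the script.
url=$(printf '%s' "$url" | tr -d '\r')
wget -nd -H -p -A jpg,jpeg,png,gif -e robots=off "$url"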

I've been trying a different approach from my earlier one, since I couldn't get that working. What could be the problem now, so that I can automate the downloading?

Last edited by workisnotfun; 04-29-2014 at 08:13 PM..
 
