Encapsulating output of CURL and/or WGET
Post 302706839 by Corona688, Thursday, September 27, 2012, 01:35 PM
How about just letting it print instead of continually cramming everything into backticks? You don't always have to do that.
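For example, instead of something like this (a sketch, with a placeholder URL):

    page=`wget -q -O - http://example.com/index.html`
    echo "$page"

you can just run the command and let the output go where it was headed anyway:

    wget -q -O - http://example.com/index.html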

Also, how about checking their return codes instead of grepping their output? Every process you create returns an exit code indicating success or failure, and wget is no exception. If it fails to download a page, it will tell you so directly; there's no need to grep for 'error' in the output.
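A minimal sketch of what that looks like (the URL is a placeholder):

    wget -q http://example.com/file.tar.gz
    status=$?

    if [ $status -ne 0 ]
    then
        echo "wget failed with exit status $status" >&2
        exit 1
    fi

Recent versions of wget even use distinct exit codes for different kinds of failure (network error, server error, file I/O error, and so on), so the status can tell you more than just pass/fail.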

If you're downloading multiple pages and want your script to see which ones succeeded, wget has the -nv option, which reports success or failure for each file in a simple line-by-line list.
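Roughly like this (a sketch; urls.txt is a hypothetical file listing one URL per line):

    # -nv prints one terse status line per file, on stderr;
    # 2>&1 makes those lines pipeable
    wget -nv -i urls.txt 2>&1 | tee fetch.log

Each line of fetch.log then reports one file, so a script can tally successes and failures from it afterwards.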

The real problem is that they're not one stream but two: stdout is used for data, stderr is used for errors and status messages.

If you want them both to go to stdout: wget ... 2>&1
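And if you want them kept apart instead, say the page itself in a file and the messages in a log, something along these lines (the URL and filenames are placeholders):

    # -O - sends the document to stdout; stderr goes to its own file
    wget -O - http://example.com/page > page.html 2>wget.log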
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Help needed in Curl & Wget

We are trying to invoke a https service from our unix script using the curl command. The service is not getting invoked because it is SSL configured. Bypassing certification (using curl -k) does not work. curl -k https://site curl -k -x IP:Port https://site curl -k -x IP:443 https://id:pwd@site ... (0 Replies)
Discussion started by: dineshbabu01

2. Shell Programming and Scripting

Proxy with curl/wget support

I need a proxy that would enable me to use cli curl/wget with another ip address. How do I find a paid proxy server that supports curl/wget? (1 Reply)
Discussion started by: locoroco

3. Shell Programming and Scripting

Very weird wget/curl output - what should I do?

Hi, I'm trying to write a script to download RedHat's errata digest. It comes in a txt.gz format, and I can get it easily with firefox. HOWEVER: output is VERY strange when downloading it in a script. It seems I'm getting a file of the same size - but partly text and partly binary! It... (5 Replies)
Discussion started by: jstilby

4. Shell Programming and Scripting

Specifying IP address with curl/wget

Hello, I am wondering does anyone know of a method using curl/wget or other whereby I could specify the IP address of the server I wish to query for a website. Something similar to editing /etc/hosts but that can be done directly from the command line. I have looked through the man pages... (4 Replies)
Discussion started by: colinireland

5. Shell Programming and Scripting

How to download file without curl and wget

Hi, I need a shell script that will download a zip file every second from a http server, but I can't use either curl or wget. Can anyone help me go about this task? Thanks! (1 Reply)
Discussion started by: rubber08

6. Shell Programming and Scripting

Wget vs Curl - Proxy issue

Hi, My script needs to crawl the data from a third party site. Currently it is written in wget. The third party site is of shared interface with different IP addresses. My wget works with all the IP addresses but not with one. Whereas curl is able to hit that IP address and comes out... (2 Replies)
Discussion started by: sathyaonnuix

7. Shell Programming and Scripting

Wget/curl credentials validation

Experts, I log in to a 3rd party site and pull some valuable information with my credentials. I pass my credentials via --post-data in wget. Now my account is locked. I want my wget to alert that the account is locked. How can I achieve this? My idea is, get the source page html from the... (2 Replies)
Discussion started by: sathyaonnuix

8. Shell Programming and Scripting

How to get content of a webpage Curl vs Wget?

Hello, What I am trying to do is to get html data of a website automatically. Firstly I decided to do it manually and via terminal I entered below code: $ wget http://www.***.*** -q -O code.html Unfortunately code.html file was empty. When I enter below code it gave Error 303-304 $... (1 Reply)
Discussion started by: baris35

9. Shell Programming and Scripting

Wget and curl to post data

I'm using this command to post data to a remote host: wget --post-data="My Data" http://<my-ip>:80 -O /dev/null -q and curl --data "My Data" http://<my-ip>:80 however, when I run the above, I see the following in my access log on the remote host: Wget: 10.10.10.10 - - "POST /... (1 Reply)
Discussion started by: SkySmart

10. Web Development

Wget/curl and javascript

What can I use instead of wget/curl when I need to log into websites that use javascript? Wget and curl don't handle javascript. (6 Replies)
Discussion started by: locoroco
httpindex(1)                  General Commands Manual                  httpindex(1)

NAME
       httpindex - HTTP front-end for SWISH++ indexer

SYNOPSIS
       wget [ options ] URL... 2>&1 | httpindex [ options ]

DESCRIPTION
       httpindex is a front-end for index++(1) to index files copied from
       remote servers using wget(1). The files (in a copy of the remote
       directory structure) can be kept, deleted, or replaced with their
       descriptions after indexing.

OPTIONS
   wget Options
       The wget(1) options that are required are: -A, -nv, -r, and -x; the
       ones that are highly recommended are: -l, -nh, -t, and -w. (See the
       EXAMPLE.)

   httpindex Options
       httpindex accepts the same short options as index++(1) except for -H,
       -I, -l, -r, -S, and -V. The following options are unique to httpindex:

       -d     Replace the text of local copies of retrieved files with their
              descriptions after they have been indexed. This is useful to
              display file descriptions in search results without having to
              keep complete copies of the remote files, thus saving
              filesystem space. (See the extract_description() function in
              WWW(3) for details about how descriptions are extracted.)

       -D     Delete the local copies of retrieved files after they have been
              indexed. This prevents your local filesystem from filling up
              with copies of remote files.

EXAMPLE
       To index all HTML and text files on a remote web server, keeping
       descriptions locally:

           wget -A html,txt -linf -t2 -rxnv -nh -w2 http://www.foo.com 2>&1 |
           httpindex -d -e'html:*.html,text:*.txt'

       Note that you need to redirect wget(1)'s output from standard error to
       standard output in order to pipe it to httpindex.

EXIT STATUS
       Exits with a value of zero only if indexing completed successfully;
       non-zero otherwise.

CAVEATS
       In addition to those for index++(1), httpindex does not correctly
       handle the use of multiple -e, -E, -m, or -M options (because the Perl
       script uses the standard Getopt::Std package for processing
       command-line options, which doesn't). The last of any of those options
       "wins." The work-around is to give multiple values, separated by
       commas, to a single one of those options. For example, instead of:

           httpindex -e'html:*.html' -e'text:*.txt'

       do this:

           httpindex -e'html:*.html,text:*.txt'

SEE ALSO
       index++(1), wget(1), WWW(3)

AUTHOR
       Paul J. Lucas <pauljlucas@mac.com>

SWISH++                          August 2, 2005                        httpindex(1)