wget is not doing what cat or echo does - it is not writing to stdout in this case.
I'm not sure about this - I have an old version of wget which does not support this kind of thing.
The - character is usually interpreted as stdout in the context I'm using it. I cannot test this, so I cannot say for certain that it works, but piping into gunzip as shown below does work. So -O - means write the output document to the file named "-", which ought to be stdout.
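A minimal sketch, assuming a placeholder URL that points at a gzip-compressed file:
wget -q -O - http://example.com/archive.gz | gunzip > archive
Here -O - writes the retrieved document to stdout (-q keeps wget's progress messages out of the way), and gunzip reads the compressed stream from the pipe and writes the decompressed data to the redirected file.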
I have noticed a lot of expensive books appearing
online so I have decided to copy them to CD.
I was going to write a program in Java to do this,
but then remembered the GNU wget program some of
you were talking about.
Instead of spending two hours or so writing a
program to do this.... (1 Reply)
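For mirroring a set of pages like this, something along these lines may be all that is needed (the URL is only a placeholder):
wget -m -np -k http://example.com/books/
-m turns on mirroring (recursion plus timestamping), -np keeps wget from climbing above the starting directory, and -k rewrites links so the local copy can be browsed offline.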
I am trying to FTP files/dirs with wget. I am having an issue where the path always takes me to my home dir even when I specify something else. For example:
wget -m ftp://USER:PASS@IP_ADDRESS/Path/on/remote/box
...but if that path on the remote box isn't in my home dir it doesn't change to... (0 Replies)
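In an FTP URL the path is normally taken relative to the login (home) directory, so one possible workaround - assuming the same credentials and host as above - is to encode the leading slash of an absolute path as %2F:
wget -m "ftp://USER:PASS@IP_ADDRESS/%2FPath/on/remote/box"
That asks the server to start from / rather than from the home directory; quoting the URL also keeps the shell from touching any special characters.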
Hi,
I need the temperature hourly from a web page.
I'm using wget to get the web page. I would like to save the downloaded page in a file called page. I check the file every time I run wget, but it is not saved there; instead a wx.php file is created.... Each time I run it... a new wx.php file is... (2 Replies)
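If the goal is simply to control the output file name, -O should do it (the URL here is a stand-in for the real page):
wget -O page "http://example.com/wx.php"
With -O, wget writes the document to the named file instead of deriving a file name (such as wx.php) from the URL.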
Hi Friends,
I have a URL like this:
https://www.unix.com/help/
In this help directory, I have more than 300 directories, each of which contains one or more files.
So, the 300 directories are like this:
http://unix.com/help/
dir1
file1
dir2
file2
dir3
file3_1
file3_2... (4 Replies)
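A recursive fetch along these lines should pull down every subdirectory and its files (the index pages wget needs in order to follow the links can be rejected afterwards):
wget -r -np -R "index.html*" https://www.unix.com/help/
-r recurses into each directory, -np (--no-parent) stops wget from wandering above /help/, and -R drops the auto-generated index listings once the links have been followed.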
If I run the following command
wget -r --no-parent --reject "index.html*" 10.11.12.13/backups/
A local directory named 10.11.12.13/backups containing the downloaded site data is created.
What I want to do is have the data placed in a local directory called $HOME/backups.
Thanks for... (1 Reply)
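Combining a directory prefix with host-directory suppression should land the files where you want them - a sketch, assuming the same server and share as above:
wget -r --no-parent --reject "index.html*" -nH -P "$HOME" 10.11.12.13/backups/
-P "$HOME" sets the local prefix and -nH stops wget from creating the 10.11.12.13 directory, so the content ends up under $HOME/backups.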
How can I download only *.zip and *.rar files from a website index that has multiple directories under the root parent directory?
I need wget to crawl every directory and download only zip and rar files. Is there any way I could do it? (7 Replies)
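The accept list is the usual way to do this - a sketch, with a placeholder URL:
wget -r -np -A "zip,rar" http://example.com/
-A (--accept) takes a comma-separated list of suffixes or patterns; wget still has to fetch the HTML index pages to discover the links, but it deletes anything that does not match the list.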
Hi,
I need to download a zip file from the US government link below.
https://www.sam.gov/SAMPortal/extractfiledownload?role=WW&version=SAM&filename=SAM_PUBLIC_MONTHLY_20160207.ZIP
I only have wget utility installed on the server.
When I use the command below, I get a 403 error... (2 Replies)
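A 403 from a site like this is often the server rejecting wget's default identification, so one thing worth trying - with no guarantee it is the cause here - is sending a browser-style User-Agent and quoting the URL so the shell does not interpret the & characters:
wget --user-agent="Mozilla/5.0" "https://www.sam.gov/SAMPortal/extractfiledownload?role=WW&version=SAM&filename=SAM_PUBLIC_MONTHLY_20160207.ZIP"
If an older wget also objects to the certificate chain, --no-check-certificate can be added as a last resort.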
LEARN ABOUT DEBIAN
br_pmfetch
BIORUBY(1) General Commands Manual BIORUBY(1)
NAME
br_pmfetch -- PubMed Client
SYNOPSIS
br_pmfetch [options...] ["query string"]
br_pmfetch [--query "query string"] [other options...]
DESCRIPTION
This manual page briefly documents the br_pmfetch command.
br_pmfetch is a command line program to query PubMed. It can take a variety of options (documented below) to restrict your search query,
which is specified by the query string.
OPTIONS
-q, --query
Query string for PubMed search.
-t, --title
Title of the article to search.
-j, --journal
Journal title to search.
-v, --volume
Journal volume to search.
-i, --issue
Journal issue to search.
-p, --page
First page number of the article to search.
-a, --author
Author name to search.
-m, --mesh
MeSH term to search.
-f, --format
Summary output format. Options are endnote, medline, bibitem, bibtex, report, abstract, nature, science, genome_res, genome_biol, nar, current, trends, cell.
--pmidlist
Output only a list of PubMed IDs.
-n, --retmax
Maximum number of articles to retrieve.
-N, --retstart
Starting number of articles to retrieve.
-s, --sort
Sort method for the summary output. Options are author, journal, pub+date.
--reldate
Search articles published within recent # of days.
--mindate
Search articles published after the date YYYY/MM/DD.
--maxdate
Search articles published before the date YYYY/MM/DD.
--help
Output help and then exit.
--examples
Output example usages and then exit.
--version
Output version number and then exit.
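EXAMPLES
Based on the options listed above, a typical invocation might look something like this (the query string is only an illustration):
br_pmfetch --query "breast cancer BRCA1" --retmax 5 --format medline
This asks PubMed for at most five matching records and prints them in MEDLINE format; adding --pmidlist instead would print only the matching PubMed IDs.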
SEE ALSO
The following pages have information on the PubMed search options: http://www.ncbi.nlm.nih.gov/entrez/query/static/help/pmhelp.html
http://www.ncbi.nlm.nih.gov/entrez/query/static/esearch_help.html
AUTHOR
This manual page was written by David Nusinow dnusinow@debian.org for the Debian system (but may be used by others). Permission is granted
to copy, distribute and/or modify this document under the terms of the GNU General Public License, Version 2 or any later version published by
the Free Software Foundation.
On Debian systems, the complete text of the GNU General Public License can be found in /usr/share/common-licenses/GPL.