Full Discussion: download files
Top Forums Shell Programming and Scripting download files Post 302259153 by Corona688 on Monday 17th of November 2008 11:21:25 AM
The links won't work if your webserver is not configured to allow access to these files, and your webserver can probably generate its own listing of them anyway, no script required.

Also, please put code in [ code ] code tags [ /code ]; it makes it much more readable.
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

shellscript for download files

Hi, I am new to shell scripting. How do I download files from one Linux box to a folder on another Linux box? Example: Box1: folder /home/test/* Box2: folder /home/download/ using an FTP shell script? Thanks a lot. Ram (3 Replies)
Discussion started by: ram2s2001
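For reference, a minimal sketch of the box-to-box FTP copy this thread asks about. The hostname, account, and password below are placeholders, and the ftp batch is only printed rather than piped into ftp -n, so nothing actually connects:

```shell
#!/bin/sh
# Push every file in /home/test on Box1 to /home/download on Box2 over FTP.
# HOST, USER and PASS are hypothetical; substitute real values.
HOST=box2.example.com
USER=ftpuser
PASS=secret
SRC=/home/test
DEST=/home/download

# Build the ftp command batch; ftp -n suppresses auto-login so the batch
# supplies the credentials itself.
batch="open $HOST
user $USER $PASS
cd $DEST
lcd $SRC
prompt off
mput *
bye"

echo "$batch"
# For the real transfer, uncomment:
# printf '%s\n' "$batch" | ftp -n
```

If SSH is available between the boxes, scp or sftp would be simpler and safer than FTP.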

2. UNIX for Dummies Questions & Answers

Download files using perl

What is the easiest way to download a file using Perl? (2 Replies)
Discussion started by: mirusnet
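One common answer, sketched here without being run: Perl's LWP::Simple module provides getstore() for one-call downloads. The URL is a placeholder, and the one-liner is only echoed since LWP may not be installed:

```shell
#!/bin/sh
# Build (but do not run) an LWP::Simple one-liner; the URL is hypothetical.
URL="http://www.example.com/file.txt"
ONE_LINER="perl -MLWP::Simple -e 'getstore(\"$URL\", \"file.txt\")'"
echo "$ONE_LINER"
# eval "$ONE_LINER"   # uncomment if LWP::Simple is available
```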

3. Shell Programming and Scripting

script for download files from ftp site

I'm new to scripting. I'm trying to write a script to download files from an ftp site; below is the script and the message I get after running it. No files were downloaded :( Thanks in advance! script: #!/usr/bin/ksh DAY=`date --date="-1 days" +%y%m%d` ftp -v -n "ftp.address" <<... (5 Replies)
Discussion started by: tiff-matt
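The quoted script is truncated, so here is a hedged reconstruction: $(...) replaces the backticks, and the heredoc is completed with placeholder credentials and filename. The connection is gated behind DO_FETCH so the sketch runs dry by default:

```shell
#!/bin/sh
# Yesterday's date as yymmdd (GNU date, as in the original script).
DAY=$(date --date="-1 days" +%y%m%d)
echo "fetching files dated $DAY"

# ftp.address, myuser, mypassword and the filename are all placeholders.
# Set DO_FETCH=1 to actually connect.
if [ "${DO_FETCH:-0}" -eq 1 ]; then
    ftp -v -n ftp.address <<EOF
user myuser mypassword
binary
get report_$DAY.txt
bye
EOF
fi
```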

4. Linux

shell script to download files from a site?

Hey everyone, my wife has purchased a bundle package of a bunch of images from a site, and now has to download each one of them manually. There are about 500 downloads, and it's quite a hassle to browse to each page and download them all individually. I would like to write a shell script to... (2 Replies)
Discussion started by: paqman
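If the bundle's URLs follow a predictable numbering (an assumption; the base URL and filename pattern below are invented), generating a list and handing it to wget -i avoids 500 manual downloads:

```shell
#!/bin/sh
# BASE and the image_N.jpg pattern are assumptions about the site.
BASE="https://images.example.com/bundle"
COUNT=500

# One URL per line; wget -i reads such a list.
: > urls.txt
i=1
while [ "$i" -le "$COUNT" ]; do
    echo "$BASE/image_$i.jpg" >> urls.txt
    i=$((i + 1))
done
echo "$(wc -l < urls.txt) URLs written"
# wget -c -i urls.txt   # uncomment to download (-c resumes partial files)
```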

5. Shell Programming and Scripting

shell script to download variables files

Hello all, I am working on a shell script to download files daily. Example: file12_10_2009.txt.gz; the next day the file will be file13_10_2009.txt.gz, and so on. I need help figuring out how to download these incrementally dated files each day. Regards (1 Reply)
Discussion started by: mogabr
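The incremental name can be computed rather than hard-coded. A sketch, assuming GNU date and a made-up server path; the download itself is left commented out:

```shell
#!/bin/sh
# Today's file name in the fileDD_MM_YYYY.txt.gz pattern from the post.
FNAME="file$(date +%d_%m_%Y).txt.gz"
URL="ftp://server.example.com/daily/$FNAME"   # hypothetical location
echo "would fetch $URL"
# wget -q "$URL"   # uncomment for the real download
```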

6. Shell Programming and Scripting

LFTP - to download files that are between 0 and 3 days old

I'm writing a script and specifically need it to download files that are between 0 and 2 days old. This will run every 2 days. I understand lftp supports newer files only, but these files will be removed from the target, so that is not what we want. Does anyone know how to do this? ----------... (0 Replies)
Discussion started by: mokachoka
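lftp's mirror command does take an age cutoff via --newer-than with an at-style date, which covers the 0-2 day window; whether that still fits when files vanish from the target is for the poster to judge. Host, credentials and directories below are placeholders, and the command is echoed, not executed:

```shell
#!/bin/sh
# Mirror only files modified within the last 2 days (lftp at-style date).
# user, pass, the directories and ftp.example.com are hypothetical.
CMD="lftp -u user,pass -e 'mirror --newer-than=\"now-2 days\" /remote/dir /local/dir; quit' ftp.example.com"
echo "$CMD"
# eval "$CMD"   # uncomment for the real transfer
```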

7. UNIX for Advanced & Expert Users

Help with using curl to download files from https

Hi, I'm trying to download an XML file from an https server using curl on a Linux machine with Ubuntu 10.4.2. I am able to connect to the remote server with my username and password, but the output is only "Virtual user <username> logged in". I am expecting to download the XML file. My output... (4 Replies)
Discussion started by: henryN
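A "Virtual user ... logged in" banner usually means the server's response page was saved instead of the file. A sketch of fetching the file directly, with a placeholder URL and credentials; -f makes curl fail on HTTP errors rather than saving an error page, and the command is only printed here:

```shell
#!/bin/sh
URL="https://server.example.com/data/report.xml"   # hypothetical
OUT=report.xml
# Assemble the command as positional parameters so it can be shown or run.
set -- curl -fsS -u "myuser:mypass" -o "$OUT" "$URL"
echo "would run: $*"
# "$@"   # uncomment to perform the download
```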

8. Shell Programming and Scripting

Files download using wget

Hi, I need to implement below logic to download files daily from a URL. * Need to check if it is yesterday's file (YYYY-DD-MM.dat) * If present then download from URL (sample_url/2013-01-28.dat) * Need to implement wait logic if not present * if it still not able to find the file... (1 Reply)
Discussion started by: rakesh5300
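The wait-and-retry logic above can be sketched as a bounded loop. The YYYY-DD-MM layout is taken from the post as written, sample_url is the post's own placeholder, the retry count and sleep are assumptions, and DO_FETCH keeps the sketch offline:

```shell
#!/bin/sh
# Yesterday's file in the post's YYYY-DD-MM.dat layout (GNU date).
FILE="$(date --date=yesterday +%Y-%d-%m).dat"
URL="sample_url/$FILE"
RETRIES=3   # assumed limit before giving up

attempt=1
while [ "$attempt" -le "$RETRIES" ]; do
    if [ "${DO_FETCH:-0}" -eq 1 ] && wget -q "$URL"; then
        echo "downloaded $FILE"
        break
    fi
    echo "attempt $attempt: $FILE not fetched"
    attempt=$((attempt + 1))
    # sleep 300   # wait 5 minutes between tries in real use
done
```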

9. Shell Programming and Scripting

Download files every one second using ftp script

Our main Server "Srv1" is used to generate text files based on specified criteria and it is also connected to two clients (pc1 and pc2) which are responsible for getting the files from Srv1 as it follows: 1. pc1 ( which represents my UNIX machine ) uses shell script to copy the files from Srv1 2.... (3 Replies)
Discussion started by: arm
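cron cannot schedule sub-minute jobs, so the usual workaround for "every second" is a long-running loop with sleep 1. The transfer itself is a placeholder comment, and the loop is capped so this demo terminates:

```shell
#!/bin/sh
MAX=3   # iterations for the demo; use an endless loop in production
n=0
while [ "$n" -lt "$MAX" ]; do
    echo "tick $n: would fetch new files from Srv1 here"
    # scp 'srv1:/outgoing/*.txt' /incoming/   # hypothetical transfer
    n=$((n + 1))
    sleep 1
done
echo "done after $n ticks"
```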

10. UNIX for Dummies Questions & Answers

List and download web page files

Hello, Does anyone know of a way to list all files related to a single web page and then to download say 4 files at a time simultaneously until the entire web page has been downloaded successfully? I'm essentially trying to mimic a browser. Thanks. (2 Replies)
Discussion started by: shadyuk
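One way to get the "four at a time" behaviour: put the page's asset URLs in a file and fan them out with xargs -P 4. The URLs below are invented, and echo stands in for wget so the sketch runs offline:

```shell
#!/bin/sh
# Hypothetical asset list for one page.
cat > urls.txt <<'EOF'
https://site.example.com/index.html
https://site.example.com/style.css
https://site.example.com/logo.png
https://site.example.com/app.js
EOF

# Up to 4 parallel workers, one URL each; swap echo for "wget -q" for real.
xargs -n 1 -P 4 echo < urls.txt
```

wget -p URL (page requisites) can discover such a list implicitly, but offers no easy control over how many downloads run at once.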
MOIN(1) 							   User Commands							   MOIN(1)

NAME
       moin - MoinMoin wiki management command-line interface

SYNOPSIS
       moin [general options] [command command-subcommand] [specific options]
       moin [--help|--version]

DESCRIPTION
       moin is a tool to interact with a MoinMoin wiki from the command line.
       The command can manipulate MoinMoin user accounts, print/dump data,
       import IRC logs, do maintenance tasks, etc. This command should be
       executed under the operating system account that owns the wiki content
       (files).

OPTIONS
       --config-dir=DIR
              Path to the directory containing the wiki configuration files
              [default: current directory]. (Debian's /usr/bin/moin defaults
              to /etc/moin/.)

       --wiki-url=WIKIURL
              URL of a single wiki to migrate, e.g. http://localhost/mywiki/
              [default: CLI].

       --page=PAGE
              Wiki page name [default: all pages].

       --version
              Show the program's version number and exit.

       -q, --quiet
              Be quiet (no informational messages).

       --show-timing
              Show timing values [default: False].

MOIN COMMANDS
       The moin command supports many commands, which in turn have
       sub-commands.

       account check --help [check-option]
              When using ACLs, a wiki user name has to be unique: there must
              not be multiple accounts having the same username. The problem
              is that this was possible before the introduction of ACLs, and
              many users who forgot their ID simply created a new ID using
              the same user name. Because access rights (when using ACLs)
              depend on the NAME (not the ID), this must be cleaned up before
              using ACLs, or users will have difficulties changing settings
              and saving their account data (the system won't accept the save
              if the user name and email are not unique).

       account create --help [create-option]
              Create user accounts via a command-line interface.

       account disable --help [disable-option]
              Disable user accounts via a command-line interface.

       account homepage --help [homepage-option]
              Create user homepages via a command-line interface.

       account resetpw --help [resetpw-option]
              Change a user password via a command-line interface.

       cli show --help [show-option]
              Just run a CLI request and show the output.

       export dump --help [dump-option]
              Dump MoinMoin wiki pages to static HTML files.

       export package --help [package-option]
              Create a package of certain wiki pages.

       import irclog --help [irclog-option]
              Push files from a directory into the wiki (to be exact: push
              all except the last file, as that one may still be being
              written to, in the case of IRC logs). One application is to
              store IRC logs in the wiki.

       import wikipage --help [wikipage-option]

       index build --help [build-option]
              Control Moin's Xapian index.

       maint cleancache --help [cleancache-option]
              Globally delete all the cache files in the
              data/pages/PageName/cache/ and data/cache directories. You will
              usually do this after changing MoinMoin code (by upgrading the
              version, installing or removing macros, or changing the regex
              expression for dicts or groups). This often makes the text_html
              file invalid, so you have to remove it (the wiki will recreate
              it automatically). text_html is the name of the cache file used
              for compiled pages formatted by the wiki text-to-HTML
              formatter.

       maint cleanpage --help [cleanpage-option]
              Output a shell script which, upon execution, will remove unused
              or trashed pages from the wiki.

       maint cleansessions --help [cleansessions-option]
              Clean up session files (usually used to maintain a "logged-in
              session" for http(s) or xmlrpc).

       maint globaledit --help [globaledit-option]
              Edit all the pages in a wiki.

       maint mailtranslators --help [mailtranslators-option]
              Read a message from standard input and send it to all
              translators via email. If you use %(lang)s in the message, it
              will be replaced with the appropriate language code for each
              translator.

       maint makecache --help [makecache-option]
              Create cache files in the data/pages/PageName/cache/ and
              data/cache directories. You will usually do this after changing
              MoinMoin code and calling "maint cleancache" (by upgrading the
              version, or installing or removing macros). text_html is the
              name of the cache file used for compiled pages formatted by the
              wiki text-to-HTML formatter.

       maint mkpagepacks --help [mkpagepacks-option]
              Generate a set of packages from all the pages in a wiki.

       maint reducewiki --help [reducewiki-option]
              Reduce a data/ directory to just the latest page revision of
              each non-deleted page (plus all attachments). This is used to
              make the distributed underlay directory, but can also be used
              for other purposes. The changes are as follows:

              * data/pages/PageName/revisions/{1,2,3,4} ->
                data/pages/PageName/revisions/1 (with the content of 4)
              * data/pages/PageName/current (pointing to e.g. 4) -> same
                (pointing to 1)
              * data/pages/PageName/edit-log and data/edit-log -> not copied
              * data/pages/PageName/attachments/* -> just copied

       migration data --help [data-option]
              Migrate page data to a newer version.

       server standalone --help [standalone-option]
              Start a standalone server.

       xmlrpc mailimport --help [mailimport-option]
              Import mail into the wiki.

       xmlrpc remote --help [remote-option]
              Execute moin scripts remotely.

       xmlrpc retrieve --help [retrieve-option]
              Print out the contents of a page via xmlrpc.

       xmlrpc write --help [write-option]
              Edit a page via xmlrpc. It is more of a commented example than
              an actual script.

EXAMPLES
       Clean the cache containing pre-computed/pre-rendered pages:

              $ moin --config-dir=/etc/moin --wiki-url=http://webserver/mywiki maint cleancache

       Manually migrate the wiki content:

              $ moin --config-dir=/where/your/configdir/is --wiki-url=http://webserver/mywiki migration data

       Create the initial Xapian index (after enabling it in the configuration file):

              $ moin --config-dir=/etc/moin --wiki-url=http://webserver/mywiki index build --mode=add

SEE ALSO
       The full documentation for the moin command line is maintained as a
       wiki page (HelpOnMoinCommand). A copy is available at
       /usr/share/doc/python-moinmoin/HelpOnMoinCommand. Read the help page on
       your running MoinMoin instance, because other MoinMoin instances, such
       as http://moinmo.in/HelpOnMoinCommand, may run a different version.

moin                              2010-04-06                            MOIN(1)