Top Forums > Shell Programming and Scripting > I don't want to know any search engines
Post 21007 by memattmyself on Wednesday 8th of May 2002, 09:36:07 PM

I don't want to know any search engines

I just want to know where I can download it on this website, please.

3 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Search for files that DON'T contain a string

How do I search for files that don't contain a certain string? I am currently trying find ./logs -size +1c -exec grep -l 'Process Complete' {} \; -exec ls -l {} \; > $TOD which gives me files that are greater than 0 in file size and contain the string 'Process Complete', but I want files that DON'T... (13 Replies)
Discussion started by: tonydsam
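One hedged sketch of the inverse search asked about above: negate the grep test with find's `!` operator so only files that do NOT contain the string are printed. The ./logs path and the search string come from the thread; the demo file names are examples.

```shell
# Demo tree: one file containing the string, one without (names are examples).
mkdir -p logs
printf 'Process Complete\n' > logs/done.log
printf 'still running\n'    > logs/busy.log

# Print non-empty files under ./logs that do NOT contain the string.
# grep -q exits 0 on a match; the leading ! inverts that test for find.
find ./logs -type f -size +0c ! -exec grep -q 'Process Complete' {} \; -print
```

With GNU grep, `grep -L 'Process Complete' ./logs/*` ("files without match") does the same filtering in one pass, but the `find ... !` form is POSIX-portable.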

2. Shell Programming and Scripting

Checking status of engines using C-shell

I am relatively new to scripting. I am trying to develop a script that will 1. Source an executable file, passed as an argument to the script, that sets up the environment 2. Run a command "stat" that gives the status of 5 Engines running on the system 3. Check the status of the 5 Engines as either... (0 Replies)
Discussion started by: paslas
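The three steps described can be sketched as below, in POSIX sh rather than csh. "statcmd" stands in for the poster's site-specific "stat" command, and its assumed output format (one "NAME STATE" line per engine) is hypothetical.

```shell
# Hedged sketch of: source an env file, query engine status, check each one.
# "statcmd" and its one-line-per-engine output are assumptions, not real APIs.
check_engines() {
    . "$1"                           # 1. source the environment-setup file
    statcmd | while read name state  # 2. run the status command
    do
        case $state in               # 3. check each engine's reported status
            RUNNING) echo "$name ok" ;;
            *)       echo "$name NOT running" ;;
        esac
    done
}
```

A real version would also count that all 5 engines reported in and exit non-zero if any is down.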

3. Linux

Learning scrapers, webcrawlers, search engines and CURL

All, I'm trying to learn scrapers, webcrawlers, search engines and cURL. I've chosen to interrogate the following sites: Manta, SuperPages, Yellow Book, Yellow Pages. These show organizations/businesses by search type/category, so they are effective in finding potential clients. ... (3 Replies)
Discussion started by: TBotNik
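After fetching a page with cURL, the first step a scraper needs is link extraction. A minimal sketch, assuming the page has been saved locally; regex-based extraction is fragile and real crawlers use an HTML parser, but it illustrates the idea:

```shell
# Extract href targets from a saved HTML file.
# grep -o keeps only the matched part; sed strips the href="..." wrapper.
extract_links() {
    grep -o 'href="[^"]*"' "$1" | sed 's/^href="//; s/"$//'
}
```

Usage: `curl -sL 'http://example.com' -o page.html && extract_links page.html` (URL is an example).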
WEB2DISK(1)                              calibre                              WEB2DISK(1)

NAME
       web2disk - part of calibre

SYNOPSIS
       web2disk URL

DESCRIPTION
       Where URL is for example http://google.com

       Whenever you pass arguments to web2disk that have spaces in them, enclose
       the arguments in quotation marks.

OPTIONS
       --version
              Show program's version number and exit.

       -h, --help
              Show this help message and exit.

       -d, --base-dir
              Base directory into which URL is saved. Default is .

       -t, --timeout
              Timeout in seconds to wait for a response from the server.
              Default: 10.0 s

       -r, --max-recursions
              Maximum number of levels to recurse, i.e. depth of links to
              follow. Default: 1

       -n, --max-files
              The maximum number of files to download. This only applies to
              files from <a href> tags. Default: 2147483647

       --delay
              Minimum interval in seconds between consecutive fetches.
              Default: 0 s

       --encoding
              The character encoding for the websites you are trying to
              download. The default is to try and guess the encoding.

       --match-regexp
              Only links that match this regular expression will be followed.
              This option can be specified multiple times, in which case a link
              is followed as long as it matches any one regexp. By default all
              links are followed.

       --filter-regexp
              Any link that matches this regular expression will be ignored.
              This option can be specified multiple times, in which case a link
              is ignored as long as any regexp matches it. By default, no links
              are ignored. If both filter regexp and match regexp are
              specified, the filter regexp is applied first.

       --dont-download-stylesheets
              Do not download CSS stylesheets.

       --verbose
              Show detailed output information. Useful for debugging.

SEE ALSO
       The User Manual is available at http://manual.calibre-ebook.com

       Created by Kovid Goyal <kovid@kovidgoyal.net>

web2disk (calibre 0.8.51)            January 2013                     WEB2DISK(1)
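Combining the options above, a typical invocation might look like the following. The URL, target directory, and option values are illustrative, and web2disk must be installed as part of calibre:

```shell
# Mirror a site two link-levels deep into ./mirror, waiting 1 second
# between fetches and skipping stylesheets. Quote the URL in case it
# contains characters the shell would otherwise interpret.
web2disk --base-dir ./mirror --max-recursions 2 --delay 1 \
         --dont-download-stylesheets "http://example.com"
```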
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.