Linux and UNIX Man Pages


gurlchecker(1) [debian man page]

GURLCHECKER(1)                http://labs.libre-entrepris                GURLCHECKER(1)

NAME
gurlchecker - a graphical web link checker that works on a whole site, a single local page or a browser bookmarks file

SYNOPSIS
gurlchecker [-h|--help] [-v|--version] [-u|--http-user user] [-p|--http-passwd password] [-A|--no-urls-args] [URL]

DESCRIPTION
This manual page documents the gurlchecker command. gurlchecker is a graphical web link checker. It searches for URLs in a whole site, a single local page or a browser bookmarks page, checking each one in turn to determine its validity. Furthermore, it can manage entire projects.

OPTIONS
This program follows the usual GNU command line syntax, with long options starting with two dashes (`--'). A summary of options is included below.

-h, --help
    Show summary of options.

-v, --version
    Show version of program.

-u, --http-user user
    User name for HTTP Basic Authentication.

-p, --http-passwd password
    Password for HTTP Basic Authentication.

-A, --no-urls-args
    Ignore URL arguments. With this option gurlchecker will systematically remove arguments from URLs. This is useful to avoid nasty infinite loops when arguments are randomly generated, for example.

AUTHORS
This manual page was written by Daniel Pecos Martinez <dani@netpecos.org> and Emmanuel Saracco <esaracco@users.labs.libre-entreprise.org> for the Debian system (but may be used by others). Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts and no Back-Cover Texts.

Daniel Pecos Martinez
    Author.

Emmanuel Saracco
    Author.

COPYRIGHT
Copyright (C) 2003-2005 Daniel Pecos Martinez, Emmanuel Saracco

http://gurlchecker.labs.li                Apr 8, 2009                GURLCHECKER(1)
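As a side note, the effect of -A (dropping URL arguments before checking) can be sketched in plain shell; the URL below is a made-up example, not one taken from the man page:

```shell
# gurlchecker -A would strip query arguments from every URL it checks;
# the same stripping for a single URL, done with parameter expansion:
url='http://example.org/page.php?id=42&session=abc'
stripped="${url%%\?*}"   # remove the first '?' and everything after it
echo "$stripped"
```

This keeps two checks of `page.php?session=abc` and `page.php?session=def` from counting as two distinct pages.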

Check Out this Related Man Page

ELZA(1)                    General Commands Manual                    ELZA(1)

NAME
elza -- script language for automating HTTP requests

SYNOPSIS
elza [scriptfile]

DESCRIPTION
elza is a scripting language aimed at automating requests on web pages. Scripts written in elza are capable of mimicking browser behavior almost perfectly, making it extremely difficult for remote servers to distinguish their activity from that of ordinary users and browsers. This gives these scripts the ability to act upon servers that will not respond to requests generated with netcat, gammaprog, rebol, or similar tools. This manual page was written for the Debian GNU/Linux distribution because the original program does not have a manual page.

OPTIONS
scriptfile
    The name and location of the script to be executed by elza. Variables are specified either in the .elz script or in elza.def.

AUTHOR
This manual page was written by Stijn de Bekker <stijn@debian.org> for the Debian GNU/Linux system (but may be used by others). Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts and no Back-Cover Texts. A copy of the license can be found under /usr/share/common-licenses/FDL.

FILES
/etc/elza.def
    Default values for various elza variables.

ELZA(1)

4 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Selecting information from several web pages...

Hi All! Is this possible? I know of several hundred URLs linking to similar-looking HP-UX man pages, like these. In these URLs only the last words separated by / change in numbering, so we can generate these... http://docs.hp.com/hpux/onlinedocs/B3921-90010/00/00/31-con.html... (2 Replies)
Discussion started by: Vishnu
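A run of URLs whose last path component counts up can be produced with a small loop; this is a sketch built on the path shown in the post, with the counting range chosen arbitrarily:

```shell
# Build a run of URLs that differ only in the numbered final component.
base='http://docs.hp.com/hpux/onlinedocs/B3921-90010/00/00'
urls=$(for n in $(seq -w 31 33); do
  printf '%s/%s-con.html\n' "$base" "$n"
done)
echo "$urls"
```

The generated list can then be fed to wget or a link checker one URL per line.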

2. Shell Programming and Scripting

Extract URLs from HTML code using sed

Hello, I try to extract URLs from Google search results, but I have a problem with sed filtering of the HTML code. What I want is just a list of the URLs that appear between ........<p><a href=" and the next following " in the HTML code. Here is my code; I use wget and pipelines for filtering. wget works, but... (13 Replies)
Discussion started by: L0rd
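One way to get "just the URLs between `<p><a href="` and the next `"`" is grep -o plus sed; a sketch on inline sample HTML (in the poster's setup the input would come from the wget pipeline, and the example URLs are placeholders):

```shell
# Extract href values from <p><a href="..."> tags in a line of HTML.
html='<p><a href="http://example.org/one">one</a><p><a href="http://example.org/two">two</a>'
urls=$(printf '%s\n' "$html" |
  grep -o '<p><a href="[^"]*"' |     # one match per output line
  sed 's/<p><a href="//; s/"$//')    # strip the surrounding markup
echo "$urls"
```

grep -o keeps only the matched text, which sidesteps the usual problem of several links sitting on one HTML line.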

3. UNIX for Dummies Questions & Answers

AWK help please

I am creating a script to pull URLs out of a Firefox .json file and create a bookmarks.html file from the URLs. I need to know how to grab the URL from each line of output and copy it >HERE</A> Each line will have a different URL; I need each line of output to have the URL copied before... (1 Reply)
Discussion started by: glev2005
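Assuming the URLs have already been pulled out of the .json file, one per line, wrapping each one in a bookmarks-style anchor is a one-liner in awk; the two URLs below are placeholders:

```shell
# Turn a list of URLs (one per line) into bookmark-style anchor tags,
# using each URL both as the HREF and as the visible link text.
bookmarks=$(printf 'http://example.org/a\nhttp://example.org/b\n' |
  awk '{ printf "<A HREF=\"%s\">%s</A>\n", $1, $1 }')
echo "$bookmarks"
```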

4. Shell Programming and Scripting

How to remove urls from html files

Does anybody know how to remove all URLs from HTML files? All URLs are links with anchor text in the form of <a href="http://www.anydomain.com">ANCHOR</a>; they may start with www or not. The goal is to delete all URLs, keep the ANCHOR text and, if possible, change the tags around the anchor to... (2 Replies)
Discussion started by: georgi58
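Stripping the link markup while keeping the anchor text is a job for sed backreferences; a sketch on an inline sample, using the tag form quoted in the post:

```shell
# Replace <a href="...">ANCHOR</a> with just ANCHOR.
html='before <a href="http://www.anydomain.com">ANCHOR</a> after'
text=$(printf '%s\n' "$html" |
  sed -E 's|<a href="[^"]*">([^<]*)</a>|\1|g')
echo "$text"
```

The `|` delimiter avoids escaping the slashes in `</a>`; a regex like this only covers the simple one-attribute tag form shown, not arbitrary HTML.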