Full Discussion: Cannot logon using elinks
Post 302481653 by Corona688 on Saturday 18th of December 2010 09:06:04 PM
It may also be a javascript issue: without javascript enabled, the site may not forward you to the next page.

The links text browser may be better able to handle it, being the only text browser I know of that has javascript at all. I'll check it out in a minute.

---------- Post updated at 08:06 PM ---------- Previous update was at 08:03 PM ----------

It definitely works in text-mode links.

5 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Can't logon

I stupidly changed the shell of the root user to one that does not exist, and now when I try to log on it says it cannot find the path to my shell and will not let me proceed any further. Is there any way I can get around this without re-installing the OS? Thanks for any replies. (8 Replies)
Discussion started by: SRP

2. Shell Programming and Scripting

About Logon

Hi, how can I find details of the users who are currently logged on, as well as the users who have an account but are not currently logged on? Thanks (1 Reply)
Discussion started by: nokia1100

3. UNIX for Dummies Questions & Answers

Pine, Alpine and ELinks

Hello. I'm running Mac OS X 10.5. I am completely new to UNIX and also to command line. I'm trying to setup an email client and a web browser. Google told me Pine was a good idea for email, so I got Pine. I thought it would be like other email clients, like Thunderbird, so that I can use IMAP... (1 Reply)
Discussion started by: saithesci

4. UNIX for Dummies Questions & Answers

squid&elinks config

Hi guys, I need to restrict HTTP traffic to yahoo.com (and all its sub-domains) for elinks, but still be able to ping it and receive packets from it. I need to do that with squid, using dstdom_regex in /etc/squid/squid.conf, and I need to configure the proxy in /etc/elinks/elinks.conf. Can someone help... (2 Replies)
Discussion started by: G30

5. Shell Programming and Scripting

Controlling elinks from Pipe, CLI or Script

Hi guys, I'm looking for an automatic way to save the source code of a website to a file. My question is: is it possible to control elinks from a script, pipe, or the CLI? I need to open a website, log in on this site by a webform, submit the form, open a new url and save the source code of a... (2 Replies)
Discussion started by: digg_de
HTML::LinkExtor(3)					User Contributed Perl Documentation					HTML::LinkExtor(3)

NAME
       HTML::LinkExtor - Extract links from an HTML document

SYNOPSIS
       require HTML::LinkExtor;
       $p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
       sub cb {
           my($tag, %links) = @_;
           print "$tag @{[%links]}\n";
       }
       $p->parse_file("index.html");

DESCRIPTION
       HTML::LinkExtor is an HTML parser that extracts links from an HTML document. HTML::LinkExtor is a subclass of HTML::Parser. This means that the document should be given to the parser by calling the $p->parse() or $p->parse_file() methods.

       $p = HTML::LinkExtor->new([$callback[, $base]])
           The constructor takes two optional arguments. The first is a reference to a callback routine. It will be called as links are found. If a callback is not provided, then links are just accumulated internally and can be retrieved by calling the $p->links() method.

           The $base argument is an optional base URL used to absolutize all URLs found. You need to have the URI module installed if you provide $base.

           The callback is called with the lowercase tag name as first argument, and then all link attributes as separate key/value pairs. All non-link attributes are removed.

       $p->links
           Returns a list of all links found in the document. The returned values will be anonymous arrays with the following elements:

               [$tag, $attr => $url1, $attr2 => $url2, ...]

           The $p->links method will also truncate the internal link list. This means that if the method is called twice without any parsing between the calls, the second call will return an empty list. Also note that $p->links will always be empty if a callback routine was provided when the HTML::LinkExtor was created.
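       As a minimal sketch of the callback-free style described above (the file name "index.html" and the base URL here are placeholders, not part of the documentation):

           use HTML::LinkExtor;

           # No callback: links accumulate internally until $p->links is called.
           my $p = HTML::LinkExtor->new(undef, "http://www.perl.org/");
           $p->parse_file("index.html");

           # Each element is [$tag, $attr => $url, ...]; note that calling
           # $p->links also empties the internal list.
           for my $link ($p->links) {
               my($tag, %attrs) = @$link;
               print "$tag: $_\n" for values %attrs;
           }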
EXAMPLE
       This is an example showing how you can extract links from a document received using LWP:

           use LWP::UserAgent;
           use HTML::LinkExtor;
           use URI::URL;

           $url = "http://www.perl.org/";  # for instance
           $ua = LWP::UserAgent->new;

           # Set up a callback that collects image links
           my @imgs = ();
           sub callback {
               my($tag, %attr) = @_;
               return if $tag ne 'img';  # we only look closer at <img ...>
               push(@imgs, values %attr);
           }

           # Make the parser. Unfortunately, we don't know the base yet
           # (it might be different from $url)
           $p = HTML::LinkExtor->new(\&callback);

           # Request document and parse it as it arrives
           $res = $ua->request(HTTP::Request->new(GET => $url),
                               sub { $p->parse($_[0]) });

           # Expand all image URLs to absolute ones
           my $base = $res->base;
           @imgs = map { $_ = url($_, $base)->abs; } @imgs;

           # Print them out
           print join("\n", @imgs), "\n";
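       The same pattern can also locate where a login form posts to (the kind of scripted-login task raised in discussion 5 above), since a form's action attribute is treated as a link. A hypothetical sketch follows; the login URL is a placeholder, and the form fields a real site expects are not shown:

           use LWP::UserAgent;
           use HTML::LinkExtor;

           my $url = "http://www.example.com/login";  # placeholder URL
           my $ua  = LWP::UserAgent->new;

           # Collect the action URL of every <form> on the page; passing
           # $url as $base makes relative action URLs absolute (this
           # requires the URI module).
           my @actions;
           my $p = HTML::LinkExtor->new(sub {
               my($tag, %attr) = @_;
               push(@actions, $attr{action}) if $tag eq 'form';
           }, $url);

           my $res = $ua->request(HTTP::Request->new(GET => $url));
           $p->parse($res->content);
           print "form posts to: $_\n" for @actions;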
SEE ALSO
       HTML::Parser, HTML::Tagset, LWP, URI::URL

COPYRIGHT
       Copyright 1996-2001 Gisle Aas.

       This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

perl v5.8.0                          2001-04-10                          HTML::LinkExtor(3)