12-06-2019
Update:
Looks like the site continues to move up in Google SERP results, is holding steady on Bing, and is currently down a bit on DuckDuckGo. Current search positions for the "unix" keyword:
- Google (US): between #7 and #9, still moving up on page 1.
- Bing (US): #1 - #2, competing with Wikipedia for the #1 slot and alternating between the two.
- DuckDuckGo (US): around #4 (down a bit from before).
Traffic from search referrals continues to rise (not taking into consideration the US Thanksgiving holiday, of course).
7 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi All,
I have a file in which the contents are as shown below:
Number of Dynamic Addresses Allocated : 107790 Addresses:
10.3.29.202,10.47.1.145,10.2.4.98,190.1.89.95,.. (many IP addresses separated by commas)
----... (3 Replies)
Discussion started by: imas
2. Shell Programming and Scripting
Hi,
I have a fixed-length text file that needs to be cut into individual files on AIX, and I am facing padding issues. If the file has multiple blank spaces, they get collapsed into one while cutting the files.
Eg:-
$ - blank space
filename:file.txt
... (2 Replies)
Discussion started by: techmoris
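A minimal Perl sketch of one approach to the padding problem above, assuming the goal is to split fixed-width records without collapsing runs of blanks; the filename and column widths below are assumptions:

    # substr() copies columns verbatim, so runs of blanks inside a
    # field survive, unlike whitespace-based field splitting.
    use strict;
    use warnings;

    open(my $fh, '<', 'file.txt') or die "file.txt: $!";  # filename assumed
    while (my $line = <$fh>) {
        chomp $line;
        my $f1 = substr($line, 0, 10);   # columns 1-10 (widths assumed)
        my $f2 = substr($line, 10, 20);  # columns 11-30 (widths assumed)
        print "[$f1][$f2]\n";
    }
    close($fh);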
3. UNIX for Advanced & Expert Users
hi there, I need it so that when a user enters
mysite.com/ponuka/AAA2869
it shows
mysite.com/ukaz.php?ponuka=AAA2869
because of Facebook likes, and I found out that this is set up as a rewrite rule in the .htaccess file. How do I achieve it?
thank you...
---------- Post updated at 04:47... (0 Replies)
Discussion started by: vogueestylee
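A hedged sketch of the kind of .htaccess rewrite rule being asked about, assuming Apache with mod_rewrite enabled and that the code part is alphanumeric (the exact pattern is an assumption):

    # .htaccess at the site root: map /ponuka/<code> onto ukaz.php internally
    RewriteEngine On
    RewriteRule ^ponuka/([A-Za-z0-9]+)$ ukaz.php?ponuka=$1 [L,QSA]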
4. What is on Your Mind?
Have just added (after it was missing for some time) the latest version of Google Site Search for our site in the Navbar Search Menu:
https://www.unix.com/members/1-albums215-picture791.png
Cheers and Enjoy.
Here is the URL for that link in case you need it:
https://goo.gl/P8p82c (4 Replies)
Discussion started by: Neo
5. Web Development
Over the past 10-plus years, we have had countless posts where the user did not use CODE tags or used ICODE tags incorrectly.
This has resulted in the site being penalized by Google for having pages which are "not mobile friendly".
So, working quietly in the background, in the thankless... (0 Replies)
Discussion started by: Neo
6. What is on Your Mind?
For the first time in the history of the site, Google Search Console (GSC) shows unix.com with "no mobile viewability errors". This is no small achievement considering the hundreds of thousands of lines of legacy code we run at a site which has been around much longer than Facebook or LinkedIn:
... (0 Replies)
Discussion started by: Neo
7. What is on Your Mind?
Getting a bit more comfortable making quick YT videos in 4K, here is:
Search Engine Optimization | How To Fix Soft 404 Errors and A.I. Tales from Google Search Console
https://youtu.be/I6b9T2qcqFo (0 Replies)
Discussion started by: Neo
LEARN ABOUT MOJAVE
HTML::LinkExtor(3) User Contributed Perl Documentation HTML::LinkExtor(3)
NAME
HTML::LinkExtor - Extract links from an HTML document
SYNOPSIS
    require HTML::LinkExtor;
    $p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
    sub cb {
        my($tag, %links) = @_;
        print "$tag @{[%links]}\n";
    }
    $p->parse_file("index.html");
DESCRIPTION
HTML::LinkExtor is an HTML parser that extracts links from an HTML document. It is a subclass of HTML::Parser. This means
that the document should be given to the parser by calling the $p->parse() or $p->parse_file() methods.
$p = HTML::LinkExtor->new
$p = HTML::LinkExtor->new( $callback )
$p = HTML::LinkExtor->new( $callback, $base )
The constructor takes two optional arguments. The first is a reference to a callback routine. It will be called as links are found. If
a callback is not provided, then links are just accumulated internally and can be retrieved by calling the $p->links() method.
The $base argument is an optional base URL used to absolutize all URLs found. You need to have the URI module installed if you provide
$base.
The callback is called with the lowercase tag name as first argument, and then all link attributes as separate key/value pairs. All
non-link attributes are removed.
$p->links
Returns a list of all links found in the document. The returned values will be anonymous arrays with the following elements:
[$tag, $attr1 => $url1, $attr2 => $url2, ...]
The $p->links method will also truncate the internal link list. This means that if the method is called twice without any parsing
between them the second call will return an empty list.
Also note that $p->links will always be empty if a callback routine was provided when the HTML::LinkExtor was created.
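As a minimal sketch of that accumulate-and-drain behavior (the filename below is a placeholder):

    use HTML::LinkExtor;

    my $p = HTML::LinkExtor->new;    # no callback, so links accumulate internally
    $p->parse_file("index.html");    # placeholder filename

    for my $link ($p->links) {       # this call also truncates the internal list
        my($tag, %attr) = @$link;    # each entry is [$tag, $attr1 => $url1, ...]
        print "$tag: @{[values %attr]}\n";
    }

    my @again = $p->links;           # empty: the first call drained the list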
EXAMPLE
This is an example showing how you can extract links from a document received using LWP:
    use LWP::UserAgent;
    use HTML::LinkExtor;
    use URI::URL;

    $url = "http://www.perl.org/";  # for instance
    $ua = LWP::UserAgent->new;

    # Set up a callback that collects image links
    my @imgs = ();
    sub callback {
        my($tag, %attr) = @_;
        return if $tag ne 'img';  # we only look closer at <img ...>
        push(@imgs, values %attr);
    }

    # Make the parser. Unfortunately, we don't know the base yet
    # (it might be different from $url)
    $p = HTML::LinkExtor->new(\&callback);

    # Request document and parse it as it arrives
    $res = $ua->request(HTTP::Request->new(GET => $url),
                        sub {$p->parse($_[0])});

    # Expand all image URLs to absolute ones
    my $base = $res->base;
    @imgs = map { $_ = url($_, $base)->abs; } @imgs;

    # Print them out
    print join("\n", @imgs), "\n";
SEE ALSO
HTML::Parser, HTML::Tagset, LWP, URI::URL
COPYRIGHT
Copyright 1996-2001 Gisle Aas.
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
perl v5.18.2 2013-03-25 HTML::LinkExtor(3)