IP Networking: Aggregate two internet connections
Post 302281877 by tafil, Thursday 29 January 2009, 04:40 PM
Thanks for your reply. I checked the how-to, but I think it only works as a failover (if one link goes down, the other takes over the traffic). I want to use both links at the same time to speed up the internet connection.
Any ideas?
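One common approach on Linux is per-flow load balancing with an iproute2 multipath default route: each new connection is assigned to one of the uplinks, so aggregate throughput across many connections improves, even though any single transfer still rides one link. A minimal sketch, assuming two interfaces eth0 and eth1 with gateways 192.168.1.1 and 192.168.2.1 (all names and addresses here are placeholders, not from the thread):

    # Install a multipath default route that balances new flows
    # across both uplinks (run as root; names/addresses are examples).
    ip route replace default scope global \
        nexthop via 192.168.1.1 dev eth0 weight 1 \
        nexthop via 192.168.2.1 dev eth1 weight 1

    # Confirm the multipath route is in place.
    ip route show default

True bonding of two lines into one faster pipe for a single download needs cooperation from the far end (for example MLPPP through the ISP), which is exactly the limitation raised in discussion 5 below.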
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

aggregate ethernet ports under Solaris

I have been looking for info on how to aggregate 2 or 3 NICs into one big pipe. Any advice would be appreciated. -Chuck (4 Replies)
Discussion started by: 98_1LE
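For the Solaris aggregation question above: on Solaris 10 and later, link aggregation is done with dladm. A minimal sketch, assuming two NICs named e1000g0 and e1000g1 (device names are assumptions; list yours with dladm show-link) and noting that the switch ports generally have to be configured for aggregation/LACP as well:

    # Create aggregation key 1 from two physical links (Solaris 10 syntax).
    dladm create-aggr -d e1000g0 -d e1000g1 1

    # Plumb and address the aggregated interface.
    ifconfig aggr1 plumb 192.168.1.10 netmask 255.255.255.0 up

    # Check the aggregation state.
    dladm show-aggr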

2. UNIX for Advanced & Expert Users

combining two internet connections

Hey guys, do you know of a good way to combine and load-balance my two internet connections using a Linux or BSD box? Would LVS be able to do this? Thanks in advance. (1 Reply)
Discussion started by: arya6000

3. UNIX for Advanced & Expert Users

AWK aggregate records

Hi all, I have a problem... can someone help me? I have a file of records of this sort:
30|239|ORD|447702936929 |blackberry.net |20080728|141304|00000900|2|0000000000000536|28181|0000000006|0000000001|10|1
30|239|ORD|447702936929 |blackberry.net ... (4 Replies)
Discussion started by: anaconga
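The thread above does not spell out which columns to aggregate, so as a hedged, generic sketch: group pipe-delimited records on one field and sum another (the column choices below are illustrative only, not taken from the thread):

    # Sum field 8 for each distinct value of field 4 in a pipe-delimited file.
    # Adjust -F, the key field, and the summed field to the real record layout.
    awk -F'|' '{ sum[$4] += $8 } END { for (k in sum) printf "%s|%s\n", k, sum[k] }' records.txt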

4. Windows & DOS: Issues & Discussions

42 UDP internet connections

First I had a problem: my internet was slow. Now I know why: I have 42 connections to the internet open. What can I do? Thanks, YourDestinity (1 Reply)
Discussion started by: YourDestinity

5. IP Networking

Bonding Internet Connections

I’m familiar with load balancing, but is it possible to actually bond multiple DSL lines together? I hear of ways to bond using MLPPP, but that requires support from the ISP. Is there a way to bond without support from my ISP, or to use, say, a cable modem and a DSL line together for faster... (0 Replies)
Discussion started by: harley313

6. Shell Programming and Scripting

simple aggregate task

Hi experts, I need help with the task below. INPUT: values separated by tabs, first row is the header.
20110609  AS  A  300.5000
20110609  AS  R  200.5000
20110609  BR  A  111.5000
20110609  BR  R  222.5000
20110610  AS  A  100.5500
20110610  AS  ... (2 Replies)
Discussion started by: hernand
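A hedged sketch of that kind of roll-up, assuming (since the snippet is truncated) that the goal is to total the last column for each unique date/code/flag combination in the tab-separated input:

    # Total column 4 per unique (date, code, flag) triple, skipping the header row.
    # The grouping key is an assumed reading of the task, not taken from the thread.
    awk -F'\t' 'NR > 1 { sum[$1 FS $2 FS $3] += $4 }
                END    { for (k in sum) printf "%s\t%.4f\n", k, sum[k] }' input.txt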

7. Red Hat

How to Multiple internet connections manage into a single connection.

Dear all, hope you are all fine and enjoying good health. Look at this equation: 1+1+1=3. So simple. I just want to say that I have three internet connections of 1 Mb, 1 Mb and 1 Mb, but I can use only one 1 Mb connection at a time, and the other two are useless to me. Now I want to make all... (0 Replies)
Discussion started by: saqlain.bashir

8. Solaris

IPMP over aggregate in Solaris 11

Hi all, I am starting with Solaris 11 and am disappointed by the changes in IP management. I want to set up IPMP over two aggregates, but I cannot find any documentation and am lost with the new commands. [diagram: switch1 with net0 and net1 feeding aggregate1] ... (1 Reply)
Discussion started by: sylvain
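On Solaris 11 the pieces the poster is after exist as separate dladm and ipadm steps. A minimal sketch, assuming four data links net0 through net3 arranged as two aggregates under one IPMP group (the link names and the layout are assumptions read from the garbled diagram):

    # Build two trunk aggregations (Solaris 11 syntax).
    dladm create-aggr -l net0 -l net1 aggr1
    dladm create-aggr -l net2 -l net3 aggr2

    # Create IP interfaces on the aggregates and group them under IPMP.
    ipadm create-ip aggr1
    ipadm create-ip aggr2
    ipadm create-ipmp -i aggr1 -i aggr2 ipmp0

    # Put the data address on the IPMP group.
    ipadm create-addr -T static -a 192.168.1.10/24 ipmp0/v4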

9. Shell Programming and Scripting

Aggregate data within the file

Guys, I need to roll up data within the file and build a new file with the output in the same format as the original file. The data should be rolled up for each unique combination of ord, line, date and hour. The last column, appr, is always " ". Below is the format. Original File: ... (8 Replies)
Discussion started by: venky338

10. Shell Programming and Scripting

Log of lost internet connections

I am having a big problem with my DSL losing its internet connection. I would like to create a log to show the technician when he comes next week, and it should record only pings with 100% packet loss. Thanks. The script I have generates entries for all ping attempts, including... (4 Replies)
Discussion started by: drew77
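A hedged sketch of such an outage logger, assuming a Linux iputils ping whose summary line contains the phrase "100% packet loss" (the wording differs on some systems, so check your ping output first) and a probe target of 8.8.8.8, which is a placeholder:

    #!/bin/sh
    # Ping in bursts once a minute; append a timestamped line to the log
    # only when every packet in the burst is lost.
    HOST=8.8.8.8              # placeholder; your ISP gateway is a better probe
    LOG=$HOME/outages.log

    while true; do
        if ping -c 5 -W 2 "$HOST" | grep -q '100% packet loss'; then
            echo "$(date '+%Y-%m-%d %H:%M:%S') no reply from $HOST" >> "$LOG"
        fi
        sleep 60
    done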
HTML::LinkExtor(3) - User Contributed Perl Documentation

NAME
       HTML::LinkExtor - Extract links from an HTML document

SYNOPSIS
        require HTML::LinkExtor;
        $p = HTML::LinkExtor->new(\&cb, "http://www.perl.org/");
        sub cb {
            my($tag, %links) = @_;
            print "$tag @{[%links]}\n";
        }
        $p->parse_file("index.html");

DESCRIPTION
       HTML::LinkExtor is an HTML parser that extracts links from an HTML document. HTML::LinkExtor is a subclass of
       HTML::Parser, so the document should be given to the parser by calling the $p->parse() or $p->parse_file() methods.

       $p = HTML::LinkExtor->new
       $p = HTML::LinkExtor->new( $callback )
       $p = HTML::LinkExtor->new( $callback, $base )
           The constructor takes two optional arguments. The first is a reference to a callback routine, which will be
           called as links are found. If a callback is not provided, links are just accumulated internally and can be
           retrieved by calling the $p->links() method.

           The $base argument is an optional base URL used to absolutize all URLs found. You need to have the URI module
           installed if you provide $base.

           The callback is called with the lowercase tag name as its first argument, followed by all link attributes as
           separate key/value pairs. All non-link attributes are removed.

       $p->links
           Returns a list of all links found in the document. The returned values are anonymous arrays with the
           following elements:

             [$tag, $attr => $url1, $attr2 => $url2, ...]

           The $p->links method also truncates the internal link list, so if the method is called twice without any
           parsing in between, the second call returns an empty list. Note too that $p->links will always be empty if a
           callback routine was provided when the HTML::LinkExtor was created.

EXAMPLE
       This example shows how you can extract links from a document received using LWP:

         use LWP::UserAgent;
         use HTML::LinkExtor;
         use URI::URL;

         $url = "http://www.perl.org/";  # for instance
         $ua = LWP::UserAgent->new;

         # Set up a callback that collects image links
         my @imgs = ();
         sub callback {
             my($tag, %attr) = @_;
             return if $tag ne 'img';  # we only look closer at <img ...>
             push(@imgs, values %attr);
         }

         # Make the parser. Unfortunately, we don't know the base yet
         # (it might be different from $url)
         $p = HTML::LinkExtor->new(\&callback);

         # Request the document and parse it as it arrives
         $res = $ua->request(HTTP::Request->new(GET => $url),
                             sub { $p->parse($_[0]) });

         # Expand all image URLs to absolute ones
         my $base = $res->base;
         @imgs = map { $_ = url($_, $base)->abs; } @imgs;

         # Print them out
         print join("\n", @imgs), "\n";

SEE ALSO
       HTML::Parser, HTML::Tagset, LWP, URI::URL

COPYRIGHT
       Copyright 1996-2001 Gisle Aas.

       This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.