About search engine in unix
Post 302700541 by Corona688 in Shell Programming and Scripting, Thursday 13th of September 2012, 02:40:28 PM
You will want ways to:

1) retrieve web pages -- easy enough with wget.
2) extract HTTP links from those pages.
3) process the links for information.
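
Since this is the shell scripting forum, here is a minimal sketch of those three steps in plain sh. The start URL, the pages directory, the links.txt file, and the search term are illustrative assumptions, not anything this thread prescribes, and the link extraction is deliberately naive -- a regex over href attributes (assuming GNU grep for -o and -r), where a real engine would use a proper HTML parser:

Code:
#!/bin/sh
# crawl.sh -- toy one-level crawler: fetch a page, pull out its
# links, then fetch those pages so they can be grepped like a
# crude index. START_URL, OUTDIR, and the search term "unix"
# are placeholders.

START_URL="http://example.com/"
OUTDIR="pages"
mkdir -p "$OUTDIR"

# 1) retrieve the start page
wget -q -O "$OUTDIR/start.html" "$START_URL"

# 2) extract HTTP links: grab href="..." values, keep absolute URLs
grep -o 'href="[^"]*"' "$OUTDIR/start.html" |
    sed 's/^href="//; s/"$//' |
    grep -E '^https?://' | sort -u > links.txt

# 3) process the links: fetch each one into OUTDIR
while read -r url
do
    wget -q -P "$OUTDIR" "$url"
done < links.txt

# crude "search": list fetched pages that contain a term
grep -ril "unix" "$OUTDIR"

Note that wget can already do steps 1 and 2 by itself: wget -r -l 1 mirrors a page plus everything it links to, one level deep, which may be all a class project needs.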

But since you have told me this is a school project, you need to ask in the homework forum. Be sure to post information about your school when you do so, or we won't be able to help you.

Moderator's Comments:
Do not post classroom or homework problems in the main forums. Homework and coursework questions can only be posted in the Homework & Coursework Questions forum, under the special homework rules.

Please review the rules, which you agreed to when you registered, if you have not already done so.

More than likely, posting homework in the main forums has resulted in a forum infraction. If you did not post homework, please explain the company you work for and the nature of the problem you are working on.

If you did post homework in the main forums, please review the guidelines for posting homework and repost.

Thank You.

The UNIX and Linux Forums.
 
