The OS is Linux, and it's a one-time job (repeated occasionally). These are offline files that are not being updated. I need to put a process in place for future requirements.
It's not in a DB; these are actually application log files.
The files are approximately 1.5 GB each. Right now I am only thinking about the best way/approach to complete the task...
I had tried using Perl hashes (didn't work); I guess keeping that much data in memory is not possible, hence the algorithm has to be really efficient here.
Sample files:
Input.txt
Status.txt
I need to update the status field in Input.txt with the status (success/failure) from Status.txt.
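One approach that avoids holding the whole lookup table in RAM is to tie the status map to an on-disk DBM file and then stream Input.txt past it line by line. The sketch below is only an illustration: it assumes Status.txt has two whitespace-separated columns (key, then success/failure) and that in Input.txt the key is the first field and the status is the last field; the splits would have to be adjusted to the real record layouts.

#!/usr/bin/perl
# Sketch only: key/field positions are assumptions, adjust to the real files.
# An on-disk DBM file is used instead of an in-memory hash so a ~1.5 GB
# status file does not exhaust memory.
use strict;
use warnings;
use SDBM_File;
use Fcntl;

tie my %status, 'SDBM_File', 'status_map', O_RDWR|O_CREAT, 0644
    or die "cannot tie status_map: $!";

# Pass 1: load key -> status pairs onto disk.
open my $sfh, '<', 'Status.txt' or die "Status.txt: $!";
while (<$sfh>) {
    chomp;
    my ($key, $st) = split ' ', $_, 2;     # assumed two-column layout
    next unless defined $st;
    $status{$key} = $st;
}
close $sfh;

# Pass 2: stream Input.txt and rewrite the status field.
open my $ifh, '<', 'Input.txt'     or die "Input.txt: $!";
open my $ofh, '>', 'Input.updated' or die "Input.updated: $!";
while (<$ifh>) {
    chomp;
    my @f = split ' ';
    $f[-1] = $status{$f[0]} if exists $status{$f[0]};   # assumed: key in col 1, status in last col
    print {$ofh} join(' ', @f), "\n";
}
close $ifh;
close $ofh;

Another option along the same lines is to sort both files by the key with the external sort(1) command and then merge them in a single streaming pass.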
Hello,
I have got one file with more than 120 million records (35 GB in size). I have to extract some relevant data from the file based on some parameter and generate another output file.
What will be the best and fastest way to generate the new file?
Sample file format: ... (2 Replies)
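For a file that size the usual approach is a single streaming pass: read line by line, test the relevant field, and write matching records straight to the output, so memory use stays constant regardless of file size. A minimal sketch follows; the delimiter, field position, and parameter value are made up, since the real format isn't shown.

#!/usr/bin/perl
# Streaming filter sketch: constant memory, one pass over the 35 GB file.
# Delimiter, field index, and the value being matched are placeholders.
use strict;
use warnings;

my $wanted = 'ACTIVE';                       # hypothetical parameter value
open my $in,  '<', 'big_input.dat' or die "big_input.dat: $!";
open my $out, '>', 'extract.dat'   or die "extract.dat: $!";
while (my $line = <$in>) {
    my @f = split /\|/, $line;               # assumed pipe-delimited records
    print {$out} $line if defined $f[2] && $f[2] eq $wanted;
}
close $in;
close $out;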
hi,
I'm trying to sort a file which has 3.7 million records and am getting the following error... any help is appreciated:
sort: Write error while merging.
Thanks (6 Replies)
Here is an easy game!
I wrote a number between 0 and 20 (that can include 0 and 20) on a piece of paper. I am staring at it now, imagining the number so you can read my mind ;)
Reply once, and only once, with a number from 0 to 20 and the first person to guess it wins 1,000,000 Bits.
... (24 Replies)
I have a log file that is about 1.2 million lines long and about 300 MB.
We need a way to clean up this file and keep only the last few thousand lines.
If I use the tail command we run out of memory, as the file is too big.
I do have a keyword to match on.
For example, we want to keep every line... (8 Replies)
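Since there is a keyword to anchor on, one way that never holds more than one line in memory is two passes over the file: the first pass just remembers the byte offset of the last line containing the keyword, and the second pass seeks to that offset and copies the rest. A sketch, with the file names and keyword as placeholders:

#!/usr/bin/perl
# Sketch: keep everything from the last occurrence of a keyword onward,
# without loading the log into memory. Names below are placeholders.
use strict;
use warnings;

my $keyword = 'LOG ROTATED';                 # hypothetical marker line
my ($log, $trimmed) = ('app.log', 'app.log.trimmed');

# Pass 1: remember the byte offset of the last line containing the keyword.
open my $in, '<', $log or die "$log: $!";
my ($offset, $last) = (0, 0);
while (my $line = <$in>) {
    $last = $offset if index($line, $keyword) >= 0;
    $offset = tell $in;
}

# Pass 2: seek to that offset and copy the remainder of the file.
seek $in, $last, 0 or die "seek: $!";
open my $out, '>', $trimmed or die "$trimmed: $!";
print {$out} $_ while <$in>;
close $in;
close $out;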
Hi,
on one of the servers the log directory was never cleaned up. We have so many files. I want to remove all the files that start with dfr* but I get an error message when I use the *.
rm qfr*
bash: /usr/bin/rm: Arg list too long
I am trying to write a script for this but it is not working.
... (4 Replies)
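The "Arg list too long" error comes from the shell expanding the wildcard into more file names than a single command line can hold, not from rm itself; anything that walks the directory and deletes matching entries one at a time avoids it. A small sketch, with the directory path as a placeholder:

#!/usr/bin/perl
# Sketch: delete every file whose name starts with "dfr" without passing
# a huge expanded wildcard to rm. The directory path is a placeholder.
use strict;
use warnings;

my $dir = '/var/log/myapp';                  # hypothetical log directory
opendir my $dh, $dir or die "$dir: $!";
while (defined(my $name = readdir $dh)) {
    next unless $name =~ /^dfr/;
    unlink "$dir/$name" or warn "could not remove $dir/$name: $!";
}
closedir $dh;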
Hi,
here is my problem:
I've got a file with 6 columns (file1):
a b c d e f
a b c d e f
a b c d e f
a b c d e f
I need to add 1 million columns to this file; each column needs to be a zero.
Here is what the result file (file2) should look like (for the sake of the example, I've only... (7 Replies)
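Since every added column is the same character, the padding string can be built once and appended to each line, which keeps the per-line work to a single concatenation. A sketch, assuming the columns in file1 are separated by single spaces:

#!/usr/bin/perl
# Sketch: append one million zero columns to every line of file1 and write
# file2. The " 0" padding string is built once, outside the loop.
use strict;
use warnings;

my $zeros = ' 0' x 1_000_000;                # one million " 0" columns
open my $in,  '<', 'file1' or die "file1: $!";
open my $out, '>', 'file2' or die "file2: $!";
while (my $line = <$in>) {
    chomp $line;
    print {$out} $line, $zeros, "\n";
}
close $in;
close $out;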
WWW::RobotRules(3) User Contributed Perl Documentation WWW::RobotRules(3)
NAME
WWW::RobotRules - Parse robots.txt files
SYNOPSIS
require WWW::RobotRules;
my $robotsrules = new WWW::RobotRules 'MOMspider/1.0';
use LWP::Simple qw(get);
$url = "http://some.place/robots.txt";
my $robots_txt = get $url;
$robotsrules->parse($url, $robots_txt);
$url = "http://some.other.place/robots.txt";
my $robots_txt = get $url;
$robotsrules->parse($url, $robots_txt);
# Now we are able to check if a URL is valid for those servers that
# we have obtained and parsed "robots.txt" files for.
if($robotsrules->allowed($url)) {
$c = get $url;
...
}
DESCRIPTION
This module parses a /robots.txt file as specified in "A Standard for Robot Exclusion", described in
<http://info.webcrawler.com/mak/projects/robots/norobots.html> Webmasters can use the /robots.txt file to disallow conforming robots access
to parts of their web site.
The parsed file is kept in the WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited.
The same WWW::RobotRules object can parse multiple /robots.txt files.
The following methods are provided:
$rules = WWW::RobotRules->new($robot_name)
This is the constructor for WWW::RobotRules objects. The first argument given to new() is the name of the robot.
$rules->parse($robot_txt_url, $content, $fresh_until)
The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the contents of the file.
$rules->allowed($uri)
Returns TRUE if this robot is allowed to retrieve this URL.
$rules->agent([$name])
Get/set the agent name. NOTE: Changing the agent name will clear the robots.txt rules and expire times out of the cache.
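As a quick illustration of how these methods fit together (this snippet is not part of the original manual page; the robot name, URL, and rules are made up):

use strict;
use warnings;
use WWW::RobotRules;

my $rules = WWW::RobotRules->new('MyBot/1.0');   # hypothetical robot name
$rules->parse('http://example.com/robots.txt',
              "User-agent: *\nDisallow: /private/\n");

print $rules->agent, "\n";                       # normalized agent name
print $rules->allowed('http://example.com/private/page.html')
    ? "allowed\n" : "blocked\n";                 # expected: blocked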
ROBOTS.TXT
The format and semantics of the "/robots.txt" file are as follows (this is an edited abstract of
<http://info.webcrawler.com/mak/projects/robots/norobots.html>):
The file consists of one or more records separated by one or more blank lines. Each record contains lines of the form
<field-name>: <value>
The field name is case insensitive. Text after the '#' character on a line is ignored during parsing. This is used for comments. The
following <field-names> can be used:
User-Agent
The value of this field is the name of the robot the record is describing access policy for. If more than one User-Agent field is
present the record describes an identical access policy for more than one robot. At least one field needs to be present per record. If
the value is '*', the record describes the default access policy for any robot that has not matched any of the other records.
Disallow
The value of this field specifies a partial URL that is not to be visited. This can be a full path, or a partial path; any URL that
starts with this value will not be retrieved.
ROBOTS.TXT EXAMPLES
The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/" or "/tmp/":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
This example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/", except the robot called
"cybermapper":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:
This example indicates that no robots should visit this site further:
# go away
User-agent: *
Disallow: /
SEE ALSO
LWP::RobotUA, WWW::RobotRules::AnyDBM_File
libwww-perl-5.65 2001-04-20 WWW::RobotRules(3)