Finding a String and Replacing It with an Incremental Value
Hi Friends,
I have a text file which has about 200,000 records in it. We have a string which repeats in each and every record, so we have to write a script in ksh which finds that string on each line and replaces it with a new string (an incremental value) for every set of four records.
To be more clear please look at the sample data below
abc12345 is the repeating string in each and every record; we have to replace it with an incremental value, say 12345678 in the first four records, then 12345679 in the next four records, and so on.
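A minimal awk sketch of one approach, assuming the literal string is abc12345 and the counter starts at 12345678 as in the sample (the string, the starting value, and the file names are all placeholders to adjust for the real data):

#!/bin/ksh
# Replace every occurrence of abc12345 with the current counter value,
# bumping the counter after each group of four records (lines).
awk '
BEGIN { val = 12345678 }
{
    gsub(/abc12345/, val)        # substitute the current value
    print
    if (NR % 4 == 0) val++       # next value after every 4th record
}' input.txt > output.txt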
Newbie here...
Is there any way of doing a find & replace within a huge file other than using something like "vi - FileForReplacing < /u1/excel/consultants"?
The contents of the consultants file look like:
:1,$s/0000031/CON/g
:1,$s/0001032/JONES/g
:1,$s/0001355/SMITH/g... (3 Replies)
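Yes; sed can apply the same set of substitutions in one pass. A sketch, assuming every line of the consultants file keeps the ":1,$s/old/new/g" form shown above (the ":1,$" prefix just means "every line", which is already sed's default behavior, so it can simply be stripped):

# Turn the ex-style commands into a plain sed script, then run it
# once over the big file.
sed 's/^:1,\$//' /u1/excel/consultants > /tmp/replace.sed
sed -f /tmp/replace.sed FileForReplacing > FileForReplacing.new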
I have files in a folder like:
citi11082006_1.trn
citi11082006_2.trn
citi11082006_3.trn
...
...
...
citi11082006_13.trn
...
citi11082006_nn.trn
Each file will have one string inside, and I am building a master string based on the string in the sub files by merging all into one by... (6 Replies)
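The post is cut off, so this is only a guess at the shape of the task: a ksh sketch that concatenates the single string inside each sub file, in numeric order of the _n suffix, into one master string (the output file name is a placeholder):

#!/bin/ksh
# Visit the files in numeric order of the part after "_",
# appending each file's string to the master string.
master=""
for f in $(ls citi11082006_*.trn | sort -t_ -k2 -n); do
    master="$master$(cat "$f")"
done
print "$master" > master.trn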
Hi,
I have several files with data that have to be imported to a database. These files contain records with separator characters. Some records are corrupt (2 separators are missing) and I need to correct them prior to importing them into the db.
Example:
... (5 Replies)
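Since the sample is truncated, here is only a detection sketch: assuming the separator is "|" and a good record has 10 fields (both are assumptions), this flags the corrupt lines so a fix specific to your data can be applied:

# Report every record whose field count is wrong.
awk -F'|' 'NF != 10 { print FNR ": " NF " fields: " $0 }' datafile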
Hi all
I need some help writing a small script that searches for a string and then replaces it with a new string.
For searching, the best result I get is from the find command combined with xargs,
e.g.
find . -name "*.*" |xargs grep -l "search string"
I use this command in the root directory and this... (4 Replies)
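A sketch of the full search-and-replace step built on that pipeline, rewriting each matching file through a temporary copy ("search string" and "new string" are placeholders):

#!/bin/ksh
# List the files that contain the string, then rewrite each one.
find . -type f -name "*.*" | xargs grep -l "search string" |
while read f; do
    sed 's/search string/new string/g' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done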
Hi All,
I am trying to write a shell script which will first search some files and then increase the port numbers mentioned in them by a certain number.
Let me make it clear with an example:
Suppose there are a few files a, b, c, d...
File a's content:
<serverEntries xmi:id="ServerEntry_1"... (3 Replies)
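The sample is cut off, so this sketch assumes the ports appear as port="NNNN" attributes and that the offset is 1000 (both are assumptions; adjust the pattern to the real layout):

# Add 1000 to every port="NNNN" attribute in file a.
awk -v off=1000 '{
    out = ""; s = $0
    while (match(s, /port="[0-9]+"/)) {
        n = substr(s, RSTART + 6, RLENGTH - 7) + off   # old port + offset
        out = out substr(s, 1, RSTART - 1) "port=\"" n "\""
        s = substr(s, RSTART + RLENGTH)
    }
    print out s
}' a > a.new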
All
I have a very large file (approximately 150,000 records) as shown below, separated by pipe "|". I need to replace data in fields 2, 16, 17, and 23, which are in timestamp format. My goal is to look in those fields and, if one ends with "000000|", replace it with "000|". In other words, make it a 6-digit... (2 Replies)
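A sketch, assuming the intent is to shorten a trailing "000000" to "000" in exactly those four fields (field numbers are taken from the post; file names are placeholders):

# Trim 000000 -> 000 at the end of the timestamp fields only.
awk -F'|' 'BEGIN { OFS = "|"; split("2 16 17 23", fld, " ") }
{
    for (i in fld) sub(/000000$/, "000", $(fld[i]))
    print
}' bigfile > bigfile.new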
Hi,
I would like to write a script for finding links based on the name I have and replacing the links with the new name. A general find command lists everything for those links (meaning all the sub-dirs and all the files); I need only the main link, and to replace it.
Can anyone give me some... (1 Reply)
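A sketch, assuming "links" means symbolic links and that the goal is to re-point each matching link at a renamed target (oldname, newname, and the directory are placeholders):

#!/bin/ksh
# -type l matches only the links themselves, not what they point to.
find /some/dir -type l -name "oldname*" | while read lnk; do
    tgt=$(ls -l "$lnk" | awk '{ print $NF }')      # current target
    new=$(print "$tgt" | sed 's/oldname/newname/')
    rm "$lnk" && ln -s "$new" "$lnk"               # re-point the link
done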
Use and complete the template provided. The entire template must be completed. If you don't, your post may be deleted!
1. The problem statement, all variables and given/known data:
What command would rename "sequentialInsert", in
~cs252/Assignments/commandsAsst/project/arrayops.h, to... (2 Replies)
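The new name is cut off in the post, so NEWNAME below is a placeholder; the usual answer is a sed substitution written out to a new copy, since classic sed has no in-place option:

sed 's/sequentialInsert/NEWNAME/g' ~cs252/Assignments/commandsAsst/project/arrayops.h > arrayops.h.new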
Hi all,
Assume that I have the following line in a file called file1:
triumph and disaster must be treated same.
I want to replace this line with:
follow excellence success will chase you.
Is it possible to do this using sed? If possible, kindly post me the... (2 Replies)
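Yes, sed can do it. A sketch that matches the whole old line and substitutes the new one (file1 is the name from the post; output goes to a new file):

sed 's/^triumph and disaster must be treated same\.$/follow excellence success will chase you./' file1 > file1.new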
Hi,
I need help in the following:
I have a file in a directory with multiple comma-separated values. One of the values is a date and time in a format like 2012-04-10 xx:yy:zz.
I need to find that time format in the file and then replace it with xx:yy+1:zz
and then save it as a new file and copy it to a... (3 Replies)
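A sketch, assuming "xx:yy+1:zz" means adding one minute to the time part of any "2012-04-10 hh:mm:ss" value (minute rollover past 59 is handled; hour 24 and the date are not; file names are placeholders):

# Find the timestamp in each comma-separated field and bump its minutes.
awk -F, 'BEGIN { OFS = "," }
{
    for (i = 1; i <= NF; i++)
        if (match($i, /2012-04-10 [0-9][0-9]:[0-9][0-9]:[0-9][0-9]/)) {
            h = substr($i, RSTART + 11, 2)
            m = substr($i, RSTART + 14, 2) + 1
            if (m == 60) { m = 0; h++ }          # minute rollover
            $i = substr($i, 1, RSTART + 10) sprintf("%02d:%02d", h, m) substr($i, RSTART + 16)
        }
    print
}' oldfile > newfile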
WWW::RobotRules(3) User Contributed Perl Documentation WWW::RobotRules(3)

NAME
WWW::RobotRules - database of robots.txt-derived permissions
SYNOPSIS
    use WWW::RobotRules;
    my $rules = WWW::RobotRules->new('MOMspider/1.0');

    use LWP::Simple qw(get);

    {
        my $url = "http://some.place/robots.txt";
        my $robots_txt = get $url;
        $rules->parse($url, $robots_txt) if defined $robots_txt;
    }

    {
        my $url = "http://some.other.place/robots.txt";
        my $robots_txt = get $url;
        $rules->parse($url, $robots_txt) if defined $robots_txt;
    }

    # Now we can check if a URL is valid for those servers
    # whose "robots.txt" files we've gotten and parsed:
    if ($rules->allowed($url)) {
        $c = get $url;
        ...
    }
DESCRIPTION
This module parses /robots.txt files as specified in "A Standard for Robot Exclusion", at <http://www.robotstxt.org/wc/norobots.html>
Webmasters can use the /robots.txt file to forbid conforming robots from accessing parts of their web site.
The parsed files are kept in a WWW::RobotRules object, and this object provides methods to check if access to a given URL is prohibited.
The same WWW::RobotRules object can be used for one or more parsed /robots.txt files on any number of hosts.
The following methods are provided:
$rules = WWW::RobotRules->new($robot_name)
This is the constructor for WWW::RobotRules objects. The first argument given to new() is the name of the robot.
$rules->parse($robot_txt_url, $content, $fresh_until)
The parse() method takes as arguments the URL that was used to retrieve the /robots.txt file, and the contents of the file.
$rules->allowed($uri)
Returns TRUE if this robot is allowed to retrieve this URL.
$rules->agent([$name])
Get/set the agent name. NOTE: Changing the agent name will clear the robots.txt rules and expire times out of the cache.
ROBOTS.TXT
The format and semantics of the "/robots.txt" file are as follows (this is an edited abstract of
<http://www.robotstxt.org/wc/norobots.html>):
The file consists of one or more records separated by one or more blank lines. Each record contains lines of the form
<field-name>: <value>
The field name is case insensitive. Text after the '#' character on a line is ignored during parsing. This is used for comments. The
following <field-names> can be used:
User-Agent
The value of this field is the name of the robot the record is describing access policy for. If more than one User-Agent field is
present the record describes an identical access policy for more than one robot. At least one field needs to be present per record. If
the value is '*', the record describes the default access policy for any robot that has not matched any of the other records.
The User-Agent fields must occur before the Disallow fields. If a record contains a User-Agent field after a Disallow field, that
constitutes a malformed record. This parser will assume that a blank line should have been placed before that User-Agent field, and
will break the record into two. All the fields before the User-Agent field will constitute a record, and the User-Agent field will be
the first field in a new record.
Disallow
The value of this field specifies a partial URL that is not to be visited. This can be a full path, or a partial path; any URL that starts with this value will not be retrieved.
Unrecognized records are ignored.
ROBOTS.TXT EXAMPLES
The following example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/" or "/tmp/":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
This example "/robots.txt" file specifies that no robots should visit any URL starting with "/cyberworld/map/", except the robot called
"cybermapper":
User-agent: *
Disallow: /cyberworld/map/ # This is an infinite virtual URL space
# Cybermapper knows where to go.
User-agent: cybermapper
Disallow:
This example indicates that no robots should visit this site further:
# go away
User-agent: *
Disallow: /
This is an example of a malformed robots.txt file.
# robots.txt for ancientcastle.example.com
# I've locked myself away.
User-agent: *
Disallow: /
# The castle is your home now, so you can go anywhere you like.
User-agent: Belle
Disallow: /west-wing/ # except the west wing!
# It's good to be the Prince...
User-agent: Beast
Disallow:
This file is missing the required blank lines between records. However, the intention is clear.
SEE ALSO
LWP::RobotUA, WWW::RobotRules::AnyDBM_File
COPYRIGHT
Copyright 1995-2009, Gisle Aas
Copyright 1995, Martijn Koster
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
perl v5.18.2 2012-02-18 WWW::RobotRules(3)