Posted by jimmy_y on 06-03-2010, 05:07 AM
extract number range from a file

Hi Everyone,

a.txt
Code:
1272904667;1272904737;1
1272904747;1272904819;1
1272904810;1272904857;1
1272904889;1272904926;1
1272905399;1272905406;1
1272905411;1272905422;1

If I want to get the records where the 1st field of a.txt is between 1272904749 and 1272905399, is there any simple way to do it using awk or perl?

Thanks
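
A minimal sketch of one way to do this (an assumption, not from the original thread: the bounds are treated as inclusive and the file is strictly ';'-delimited):

Code:
# keep records whose 1st ';'-separated field lies in [1272904749, 1272905399]
awk -F';' '$1 >= 1272904749 && $1 <= 1272905399' a.txt

Because $1 is compared against numeric constants, awk does the comparison numerically. The same filter as a perl one-liner:

Code:
# -a autosplits each line into @F on the -F pattern; $F[0] is the 1st field
perl -F';' -ane 'print if $F[0] >= 1272904749 && $F[0] <= 1272905399' a.txt

With the sample a.txt above, both should print the three records whose first fields are 1272904810, 1272904889, and 1272905399 (the last matches because the test is inclusive; use > and < instead if the endpoints should be excluded).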
 
