Report of duplicate files based on part of the filename
Post 302152088 by sudheshnaiyer on Tuesday, December 18th, 2007 at 02:44 PM

I have files logged in the file system with names in the format filename_ordernumber_date_time, e.g.:

file_1_12012007_1101.txt
file_2_12022007_1101.txt
file_1_12032007_1101.txt

I need to find all files that are logged multiple times with the same order number. For the example above, I need a report like:

file_1: Submitted two times at 12012007_1101, 12032007_1101
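
A minimal sketch of one way to build such a report, using ls and awk. This is an untested illustration, not a definitive answer: it assumes the base filename itself contains no extra underscores, so the first two underscore-separated fields together form the filename_ordernumber key, and /path/to/logs is a hypothetical placeholder for your log directory.

cd /path/to/logs                         # hypothetical log directory
ls *_*_*_*.txt | awk -F'_' '
{
    key   = $1 "_" $2                    # filename_ordernumber
    stamp = $3 "_" $4                    # date_time.txt
    sub(/\.txt$/, "", stamp)             # drop the extension
    count[key]++
    stamps[key] = (key in stamps) ? stamps[key] ", " stamp : stamp
}
END {
    for (key in count)
        if (count[key] > 1)
            printf "%s: Submitted %d times at %s\n", key, count[key], stamps[key]
}'

For the three example files above, this would print "file_1: Submitted 2 times at 12012007_1101, 12032007_1101" (a numeral rather than the word "two"). If the base filename can contain underscores, you would instead need to strip the last two fields off the end of each name to form the key.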
 
