Post 302561909 by evelibertine on Wednesday 5th of October 2011 01:35:59 PM
Writing a Perl Script that processes multiple files

I want to write a Perl script that manipulates multiple files. In the directory I have files named 250.*chr$.ped, where * runs from 1 to 1000 and $ runs from 1 to 22, for a total of 1,000 x 22 = 22,000 files.

I want the script to manipulate only the files 250.1chr*.ped, where * runs from 1 to 22. Currently I am using:
Code:
opendir(DIR, "/cchome/output")                     # change this directory as needed
   || die "No such directory /cchome/output: $!";
open(OUT, ">>", "out.out")
   || die "Cannot open output file out.out: $!";

my @files = readdir(DIR);
closedir(DIR);

foreach my $filename (@files) {
    # keep only 250.1chr1.ped .. 250.1chr22.ped
    next unless $filename =~ /^250\.1chr(?:[1-9]|1[0-9]|2[0-2])\.ped$/;
    # ... process $filename and write results to OUT here ...
}

But it is not working. Could you help? Thanks!
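For reference, a minimal sketch (not from the original thread) of an alternative that builds the 22 file names directly instead of filtering readdir output; the directory path /cchome/output is taken from the post above:

Code:
#!/usr/bin/perl
use strict;
use warnings;

# Build the exact list 250.1chr1.ped .. 250.1chr22.ped instead of scanning the directory.
my @files = map { "/cchome/output/250.1chr$_.ped" } 1 .. 22;

for my $file (@files) {
    next unless -e $file;          # skip any chromosome file that happens to be missing
    # ... open and process $file here ...
}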

Last edited by Franklin52; 10-06-2011 at 03:21 AM. Reason: Please use code tags, thank you
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Combining Multiple files in one in a perl script

All, I want to combine multiple files into one file, something like what we do on the command line as follows -> cat file1 file2 file3 > Main_File. Can something like this be done in a perl script very efficiently? Thanks, Rahul. (1 Reply)
Discussion started by: rahulrathod
1 Replies
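A minimal sketch of one way to do the concatenation described in the question above, assuming the input names file1, file2, file3 and the output name Main_File from that post:

Code:
#!/usr/bin/perl
use strict;
use warnings;

# Copy each input file into Main_File, line by line, to keep memory use low.
open(my $out, '>', 'Main_File') or die "Cannot open Main_File: $!";
for my $in_name ('file1', 'file2', 'file3') {
    open(my $in, '<', $in_name) or die "Cannot open $in_name: $!";
    while (my $line = <$in>) {
        print {$out} $line;
    }
    close $in;
}
close $out;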

2. Shell Programming and Scripting

Multiple processes writing on the same file simultaneously

Hi All, I have encountered a problem, please help me. I have a script in which multiple processes are writing to the same file. How should I handle this? I mean, a lock mechanism can be implemented, or we can write to different files and then concatenate them. What would be a better... (1 Reply)
Discussion started by: Sayantan
1 Replies

3. Shell Programming and Scripting

perl script on multiple files

I have a script that runs on one file (at a time), like this: $> perl myscript.pl filename > output. How can I run it on >6000 files and have the output sent to a slightly modified file name, e.g. $> perl myscript 6000files > output6000files.new extension? Thanks in anticipation (4 Replies)
Discussion started by: aritakum
4 Replies

4. Shell Programming and Scripting

Run perl script on files in multiple directories

Hi, I want to run a Perl script on multiple files, with same name ("Data.txt") but in different directories (eg : 2010_06_09_A/Data.txt, 2010_06_09_B/Data.txt). I know how to run this perl script on files in the same directory like: for $i in *.txt do perl myscript.pl $i > $i.new... (8 Replies)
Discussion started by: ad23
8 Replies

5. UNIX for Dummies Questions & Answers

Writing a for loop that processes multiple input files

I would like to write a for loop that does the following: I have a file called X.txt and other files called 1.txt,2.txt, .....,1000.txt. I want to substitute the 6th column of the file X.txt with 1.txt and store the output as X.1. Then I want to do the same with X.txt and 2.txt and store the... (1 Reply)
Discussion started by: evelibertine
1 Replies

6. UNIX for Dummies Questions & Answers

Writing a loop to manipulate a script and store it in multiple output files

I have a script where the 9th line looks like this: $filename=sprintf("250.1chr%d.ped", $N); I want to modify this script 1000 times, changing 250.1chr%d.ped to 250.2chr%d.ped, 250.3chr%d.ped, ... and so on, all the way to 250.1000chr%d.ped, and store each output in files called ... (4 Replies)
Discussion started by: evelibertine
4 Replies

7. UNIX for Dummies Questions & Answers

Writing a loop to process multiple input files by a shell script

I have multiple input files that I want to manipulate using a shell script. The files are called 250.1 through 250.1000 but I only want the script to manipulate 250.300 through 250.1000. Before I was using the following script to manipulate the text files: for i in 250.*; do || awk... (4 Replies)
Discussion started by: evelibertine
4 Replies

8. Shell Programming and Scripting

How can I do one liner import multiple custom .pm files in my perl script?

I am new to Perl, and I want to ask one question. I have around 50 custom packages which I am using in my Perl script. I want to import all the .pm packages into my Perl script in an easy way. Right now I have to import each package individually, so is there any way to avoid that? Right now I am doing it like: ... (1 Reply)
Discussion started by: Navrattan Bansa
1 Replies
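One possible approach to the question above (a sketch, not the thread's answer): if all of the packages live under a single directory that is on @INC, they can be require'd in a loop at compile time. The lib/MyPkgs path and layout here are assumptions for illustration; note that require does not call each package's import(), so exported symbols still need explicit handling.

Code:
use strict;
use warnings;
use lib 'lib';                      # assumption: modules live under ./lib/MyPkgs/

BEGIN {
    opendir(my $dh, 'lib/MyPkgs') or die "Cannot open lib/MyPkgs: $!";
    for my $pm (grep { /\.pm$/ } readdir $dh) {
        require "MyPkgs/$pm";       # require searches @INC, which now includes ./lib
    }
    closedir $dh;
}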

9. UNIX for Dummies Questions & Answers

Writing a script to print the number of lines in multiple files

Hi I have 1000 files labelled data1.txt through data1000.txt. I want to write a script that prints out the number of lines in each txt file and outputs it in the following format: Column 1: number of data file (1 through 1000) Column 2: number of lines in the text file Thanks! (2 Replies)
Discussion started by: evelibertine
2 Replies
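A minimal sketch of the two-column report described in the question above, assuming the files data1.txt through data1000.txt live in the current directory:

Code:
#!/usr/bin/perl
use strict;
use warnings;

# Print "file-number <TAB> line-count" for data1.txt .. data1000.txt.
for my $n (1 .. 1000) {
    my $file = "data$n.txt";
    open(my $fh, '<', $file) or die "Cannot open $file: $!";
    my $lines = 0;
    $lines++ while <$fh>;
    close $fh;
    print "$n\t$lines\n";
}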

10. Shell Programming and Scripting

How to run perl script on multiple files of two directories?

Hi, I have 100 files under folder A labelled 1.txt, 2.txt ... 100.txt (made-up names), and one file under folder B labelled name.txt. How can I run the same perl script on each of the 100 files together with name.txt? I want to run perl script.pl A/1.txt B/name.txt perl script.pl A/2.txt B/name.txt ....... perl... (3 Replies)
Discussion started by: grace_shen
3 Replies