I have a report file that is generated every day by a scheduled process.
Each day the file is written to a path of the form .../blah_blah/Y07/MM-DD-YY/reportmmddyy.tab.
I want to copy all of these reports to a separate directory without doing it one by one.
However, if I try
cp... (3 Replies)
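A possible approach, assuming the dated layout shown above (the base and destination paths are placeholders). A plain cp with a glob can fail once the expansion gets large or matches nothing, so find is more robust:

```shell
# Sketch: collect every daily report under Y07 into one directory.
# Both paths below are hypothetical; adjust to the real layout.
base=/path/to/blah_blah
dest=/path/to/all_reports
mkdir -p "$dest"

# Walk every MM-DD-YY sub directory and copy the .tab report files.
find "$base"/Y07 -type f -name 'report*.tab' -exec cp {} "$dest" \;
```

Note that if two days ever produced the same filename, the later copy would overwrite the earlier one; see the no-clobber variant further down the page.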
So I am not sure if this should go in the shell forum or the beginners forum. This is my first time posting on these forums.
I have a directory, main_dir lets say, with multiple sub directories (one_dir through onehundred_dir for example) and in each sub directory there is a test.txt. How would one... (2 Replies)
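One way to handle this, assuming the layout described (main_dir and the destination are placeholders): since every file is named test.txt, copying them all into one place would collide, so each copy is prefixed with its sub directory's name.

```shell
# Sketch: gather each sub directory's test.txt under a unique name.
main=/path/to/main_dir
dest=/path/to/collected
mkdir -p "$dest"

for f in "$main"/*/test.txt; do
    # e.g. one_dir/test.txt becomes one_dir_test.txt
    sub=$(basename "$(dirname "$f")")
    cp "$f" "$dest/${sub}_test.txt"
done
```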
Hi,
I want to put the following values into variables: R2=0.999863, V2=118.870318, D2=-178.887511, and so on. There are six values for each variable R2-R8, V2-V8, and D2-D8, a total of 18 values for all the variables. Can anyone help me copy and paste all the values into their respective... (2 Replies)
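A sketch, assuming the input file holds comma-separated assignments exactly as shown above (the filename values.txt is a placeholder). eval is tolerable here only because the case pattern restricts which assignments are accepted:

```shell
# Read NAME=VALUE pairs like "R2=0.999863 , V2=118.870318 , ..."
# and turn each into a shell variable of the same name.
while IFS= read -r line; do
    # split each line on commas; word splitting trims the spaces
    for pair in $(printf '%s\n' "$line" | tr ',' '\n'); do
        case $pair in
            [RVD][2-8]=*) eval "$pair" ;;   # only accept R2-R8, V2-V8, D2-D8
        esac
    done
done < values.txt

echo "R2=$R2 V2=$V2 D2=$D2"
```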
Hello,
I have a small question and I hope someone can help me. I have 200 domain directories on my server under this directory,
something like
Now, how can I copy one folder I have into all of these directories?
Thank You (5 Replies)
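A simple loop covers this, assuming all 200 domain directories sit directly under one parent (both paths below are placeholders for the real ones on the server):

```shell
# Sketch: copy one folder into every domain directory.
base=/path/to/domains          # parent holding the 200 domain dirs
src=/path/to/folder_to_copy    # the folder to distribute

for d in "$base"/*/; do
    cp -R "$src" "$d"
done
```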
I have several directories, and all of them have .dat files in them. I want to copy all those .dat files to one directory, say "collected_directory".
The problem is I don't want to overwrite files. So, if two file names match, I don't want the old file to be overwritten with a new one.
... (1 Reply)
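A portable way to do this, with placeholder paths: test each name before copying so an existing file is never overwritten. (With GNU cp, `cp -n` gives the same no-clobber behaviour in one flag.)

```shell
# Sketch: gather every .dat file into one directory, keeping the
# first file seen for any duplicated name.
dest=/path/to/collected_directory
mkdir -p "$dest"

find /path/to/source_dirs -type f -name '*.dat' | while IFS= read -r f; do
    base=$(basename "$f")
    if [ -e "$dest/$base" ]; then
        echo "skipping duplicate name: $base" >&2
    else
        cp "$f" "$dest/"
    fi
done
```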
I am using the script below to copy all the files from multiple folders. When I execute the commands individually I can copy all the files, but through the script I only get the first file; the second cd and mget commands are being ignored.
HOST=server.com
USER=loginid
PASSWD="abc"
echo "open $HOST... (6 Replies)
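The script above is truncated, but a frequent cause of "only the first file" with mget is interactive per-file prompting swallowing the rest of the command stream. A here-document keeps the whole session in one place, and `prompt off` disables the confirmation (directories below are placeholders; the host and credentials are the ones from the post):

```shell
# Sketch: one ftp session, two directories, non-interactive mget.
HOST=server.com
USER=loginid
PASSWD="abc"

ftp -inv "$HOST" <<EOF
user $USER $PASSWD
prompt off
cd /first/dir
mget *
cd /second/dir
mget *
bye
EOF
```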
my directory structure is like below:
basedir\
    p.txt
    q.htm
    r.java
    b\
        abc.htm
        xyz.java
    c\
        p.htm
        q.java
        rst.txt
my requirement is i want to copy all the files and directories... (0 Replies)
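The requirement is cut off above, so here are two common readings, with placeholder paths:

```shell
# Sketch 1: replicate basedir elsewhere, sub directories intact.
cp -R /path/to/basedir /path/to/destination

# Sketch 2: flatten every file into one directory instead.
find /path/to/basedir -type f -exec cp {} /path/to/flat_dir \;
```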
Hi,
Friends, I have a requirement where I need to rename my files residing in multiple sub directories and move them to a single different directory, along with some kind of directory indicator.
For eg:
test--is my parent directory and it has many files such as
a1.txt
a2.txt
a3.txt
... (5 Replies)
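One way to read "directory indicator" is to prefix each moved file with the name of the sub directory it came from. A sketch with placeholder paths:

```shell
# Sketch: move every file under the parent into one target directory,
# using its immediate sub directory's name as a prefix.
src=/path/to/test
dest=/path/to/renamed
mkdir -p "$dest"

find "$src" -type f | while IFS= read -r f; do
    dir=$(basename "$(dirname "$f")")
    mv "$f" "$dest/${dir}_$(basename "$f")"
done
```

Files sitting directly in the parent get the parent's own name as their prefix; adjust the prefix rule if a different indicator is wanted.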
I have data of an excel files as given below,
file1
org1_1 1 1 2.5 100
org1_2 1 2 5.5 98
org1_3 1 3 7.2 88
file2
org2_1 1 1 2.5 100
org2_2 1 2 5.5 56
org2_3 1 3 7.2 70
I have multiple excel files as above shown.
I have to copy column 1, column 4 and paste into a new excel file as... (26 Replies)
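If the "excel files" are plain whitespace-separated text like the samples above (a real .xls/.xlsx would need exporting to CSV or tab-delimited text first), awk extracts the two columns directly; the filenames are the ones shown in the post:

```shell
# Sketch: pull column 1 and column 4 from every file into one output.
awk '{ print $1, $4 }' file1 file2 > columns.txt
```

The result (e.g. "org1_1 2.5") can then be re-imported into a spreadsheet.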
Hey
I'm working on a script that compares 2 directories, checks the differences, then copies the differing files into a third directory.
Here is the story:
In folder one we have 12 subfolders, and in each of them about 500 images are hosted.
01 02 03 04 05 06 07 08 09 10 11 12
In folder 2 we have the same subfolders... (2 Replies)
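One reading of "difference" is files present in folder 1 but missing from folder 2. A portable sketch with placeholder paths:

```shell
# Sketch: copy files that exist in dir1 but not in dir2 into dir3,
# preserving the subfolder layout (01/ ... 12/).
dir1=/path/to/folder1
dir2=/path/to/folder2
dir3=/path/to/folder3

( cd "$dir1" && find . -type f ) | while IFS= read -r rel; do
    if [ ! -e "$dir2/$rel" ]; then
        mkdir -p "$dir3/$(dirname "$rel")"
        cp "$dir1/$rel" "$dir3/$rel"
    fi
done
```

Where rsync is available, something like `rsync -av --compare-dest="$dir2" "$dir1"/ "$dir3"/` achieves a similar result, and also catches files whose contents changed.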
Discussion started by: nimafire
BP_BULK_LOAD_GFF(1p)          User Contributed Perl Documentation          BP_BULK_LOAD_GFF(1p)

NAME
bulk_load_gff.pl - Bulk-load a Bio::DB::GFF database from GFF files.
SYNOPSIS
% bulk_load_gff.pl -d testdb dna1.fa dna2.fa features1.gff features2.gff ...
DESCRIPTION
This script loads a Bio::DB::GFF database with the features contained in a list of GFF files and/or FASTA sequence files. You must use the
exact variant of GFF described in Bio::DB::GFF. Various command-line options allow you to control which database to load and whether to
allow an existing database to be overwritten.
This script differs from bp_load_gff.pl in that it is hard-coded to use MySQL and cannot perform incremental loads. See bp_load_gff.pl for
an incremental loader that works with all databases supported by Bio::DB::GFF, and bp_fast_load_gff.pl for a MySQL loader that supports
fast incremental loads.
NOTES
If the filename is given as "-" then the input is taken from standard input. Compressed files (.gz, .Z, .bz2) are automatically
uncompressed.
FASTA format files are distinguished from GFF files by their filename extensions. Files ending in .fa, .fasta, .fast, .seq, .dna and their
uppercase variants are treated as FASTA files. Everything else is treated as a GFF file. If you wish to load FASTA files from STDIN,
then use the -f command-line switch with an argument of '-', as in
% gunzip -c my_data.fa.gz | bp_bulk_load_gff.pl -d test -f -
The nature of the bulk load requires that the database be on the local machine and that the indicated user have the "file" privilege to
load the tables and have enough room in /usr/tmp (or whatever is specified by the $TMPDIR environment variable), to hold the tables
transiently.
Local data may now be uploaded to a remote server via the --local option with the database host specified in the dsn, e.g.
dbi:mysql:test:db_host
The adaptor used is dbi::mysqlopt. There is currently no way to change this.
About maxfeature: the default value is 100,000,000 bases. If you have features that are close to or greater than 100 Mb in length, then the
value of maxfeature should be increased to 1,000,000,000. This value must be a power of 10.
Note that Windows users must use the --create option.
If the list of GFF or fasta files exceeds the kernel limit for the maximum number of command-line arguments, use the --long_list
/path/to/files option.
COMMAND-LINE OPTIONS
Command-line options can be abbreviated to single-letter options. e.g. -d instead of --database.
--database <dsn> Database name (default dbi:mysql:test)
--adaptor Adaptor name (default mysql)
--create Reinitialize/create data tables without asking
--user Username to log in as
--fasta File or directory containing fasta files to load
--long_list Directory containing a very large number of
GFF and/or FASTA files
--password Password to use for authentication
(Does not work with Postgres, password must be
supplied interactively or be left empty for
ident authentication)
--maxbin Set the value of the maximum bin size
--local Flag to indicate that the data source is local
--maxfeature Set the value of the maximum feature size (power of 10)
--group A list of one or more tag names (comma or space separated)
to be used for grouping in the 9th column.
--gff3_munge Activate GFF3 name munging (see Bio::DB::GFF)
--summary Generate summary statistics for drawing coverage histograms.
This can be run on a previously loaded database or during
the load.
--Temporary Location of a writable scratch directory
SEE ALSO
Bio::DB::GFF, bp_fast_load_gff.pl, bp_load_gff.pl
AUTHOR
Lincoln Stein, lstein@cshl.org
Copyright (c) 2002 Cold Spring Harbor Laboratory
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See DISCLAIMER.txt for
disclaimers of warranty.
perl v5.14.2 2012-03-02 BP_BULK_LOAD_GFF(1p)