09-16-2010
Dear Klash xx
It worked. Sorry, it was my mistake; I didn't realize I had to target the second header.
LA
10 More Discussions You Might Find Interesting
1. Shell Programming and Scripting
Hi everyone,
I need to know how to remove a chunk of code from a file.
For instance, I have a couple of lines that are commented out of the file, and I need to remove that block. Here is the example:
--#------------------------------------------------------------------
--# File name= ... (5 Replies)
Discussion started by: ROOZ
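A minimal sketch of one approach, assuming the commented-out block consists of lines that all start with the `--#` marker shown above (the sample file name and contents here are illustrative):

```shell
# Work in a scratch directory.
cd "$(mktemp -d)"

# Sample input: live code surrounding a commented-out block.
printf 'SELECT 1;\n--#------------\n--# File name= demo\n--#------------\nSELECT 2;\n' > input.sql

# Delete every line that begins with the comment marker "--#".
sed '/^--#/d' input.sql
```

If the block is bounded by a pair of identical delimiter lines instead of a common prefix, a sed range address such as `sed '/^--#----/,/^--#----/d'` deletes the delimiters and everything between them.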
2. Shell Programming and Scripting
Hi,
I have a TAB-delimited text file with the following format.
1 (identifier of each group; this text is not present in the file, only the number)
1 3 4 65 56 WERTF
2 3 4 56 56 GHTYHU
3 3 5 64 23 VMFKLG
2
1 3 4 65 56 DGTEYDH
2 3 4 56 56 FJJJCKC
3 3 5 64 23 FNNNCHD
3
1 3 4 65 56 JDHJDH... (9 Replies)
Discussion started by: Lucky Ali
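One hedged sketch with awk, assuming the single-number group lines are the only lines with exactly one field; the `group_N.txt` output names are illustrative:

```shell
cd "$(mktemp -d)"

# Sample: group-id lines (one field) separating TAB-delimited blocks.
printf '1\n1\t3\t4\t65\t56\tWERTF\n2\n1\t3\t4\t65\t56\tDGTEYDH\n' > groups.txt

# A line with exactly one field sets the current group id;
# every other line is appended to that group's own output file.
awk 'NF == 1 {g = $1; next} {print > ("group_" g ".txt")}' groups.txt

cat group_1.txt group_2.txt
```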
3. Shell Programming and Scripting
Hi all,
I have the following script, but am not too sure about the syntax to complete it.
In essence, the script must connect to a SFTP server at a client site with username and password located in a file on my server.
Then change to the appropriate directory.
Pull the data to the... (1 Reply)
Discussion started by: codenjanod
4. Shell Programming and Scripting
Hi All
I wanted to know how to effectively delete some columns in a large tab delimited file.
I have a file that contains 5 columns and almost 100,000 rows
3456 f g t t
3456 g h
456 f h
4567 f g h z
345 f g
567 h j k l
This is a very large, tab-delimited data file.
I need... (2 Replies)
Discussion started by: Lucky Ali
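For dropping whole columns by position from a large TAB-delimited file, `cut` is a reasonable sketch, since it streams the input rather than loading it into memory (the column numbers below are illustrative):

```shell
cd "$(mktemp -d)"

# Sample five-column TAB-delimited file.
printf '3456\tf\tg\tt\tt\n4567\tf\tg\th\tz\n' > data.txt

# Keep only columns 1, 3 and 4 (i.e. delete columns 2 and 5).
cut -f1,3,4 data.txt
```

When columns also need reordering, `awk -F'\t' -v OFS='\t' '{print $3, $1}'` does what `cut` cannot.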
5. Shell Programming and Scripting
Hi All,
I need some help to effectively parse out a subset of results from a big results file.
Below is an example of the text file. Each block that I need to parse starts with "Output of GENE for sequence file 100.fasta" (next block starts with another number). I have given the portion of... (8 Replies)
Discussion started by: Lucky Ali
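A sketch using awk's flag idiom, assuming every block begins with an "Output of GENE for sequence file N.fasta" header line (the file names and block contents are illustrative):

```shell
cd "$(mktemp -d)"

# Sample results file with two blocks.
cat > results.txt <<'EOF'
Output of GENE for sequence file 100.fasta
hit A
hit B
Output of GENE for sequence file 101.fasta
hit C
EOF

# Each header line resets the flag: on for the 100.fasta block,
# off for any other block; a true flag prints the current line.
awk '/^Output of GENE for sequence file/ {f = /100\.fasta/} f' results.txt
```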
6. Shell Programming and Scripting
Hi Gurus,
I need some help in extracting some of these information and massage it into the desired output as shown below.
I need to extract the last row, together with the header, from the sample below; the last row usually has the most recent date, for example:
2012-06-01 142356 mb 519 -219406 mb 1 ... (9 Replies)
Discussion started by: superHonda123
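Assuming the header is the first line and the most recent date is always the last line, a minimal sketch (file name and contents illustrative):

```shell
cd "$(mktemp -d)"

# Sample report: a header line followed by date-ordered rows.
printf 'DATE SIZE\n2012-05-01 100\n2012-06-01 142356\n' > report.txt

# Print the header line and the final (most recent) row.
head -n 1 report.txt
tail -n 1 report.txt
```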
7. UNIX for Advanced & Expert Users
Hi,
I am getting file names like
ABC_DATA_CUSTIOMERS_20120617.dat
ABC_DATA_PRODUCTS_20120617.dat
and need to convert them to
CUSTIOMERS.dat
PRODUCTS.dat
Please show me how to do this. (7 Replies)
Discussion started by: reach_malu
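A hedged sketch using shell parameter expansion, assuming the names always carry the fixed `ABC_DATA_` prefix and a single trailing `_<date>` component:

```shell
cd "$(mktemp -d)"
touch ABC_DATA_CUSTIOMERS_20120617.dat ABC_DATA_PRODUCTS_20120617.dat

for f in ABC_DATA_*_*.dat; do
  base=${f#ABC_DATA_}   # strip the prefix        -> CUSTIOMERS_20120617.dat
  base=${base%_*}.dat   # strip _<date>.dat, keep .dat -> CUSTIOMERS.dat
  mv "$f" "$base"
done
ls
```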
8. Shell Programming and Scripting
Need to sort a portion of a file in alphabetical order.
Example: the user adam is not sorted and should get sorted. I don't want the complete file to be sorted.
Currently All_users.txt contains the following lines.
##############
# ARS USERS
##############
mike, Mike... (6 Replies)
Discussion started by: evrurs
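Assuming the `#` banner is a fixed three-line header and everything after it is user lines, one sketch keeps the banner untouched and sorts only the rest (the user entries are illustrative):

```shell
cd "$(mktemp -d)"
cat > All_users.txt <<'EOF'
##############
# ARS USERS
##############
mike, Mike
adam, Adam
zoe, Zoe
EOF

# Pass the 3-line banner through as-is, then sort the remainder.
{ head -n 3 All_users.txt; tail -n +4 All_users.txt | sort; } > All_users.sorted
cat All_users.sorted
```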
9. Shell Programming and Scripting
I am trying to parse a file, but the file has binary data mixed inline with the text fields.
I tried the binutils strings utility; it gets the binary data out, but it puts the characters following the binary data on a new line.
input file
app_id:1936 pgm_num:0 branch:TBNY ord_num:0500012(–QMK) deal_num:0... (12 Replies)
Discussion started by: tasmac
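If deleting the binary bytes while keeping the text fields on their original lines is acceptable, `tr` is a minimal sketch; unlike `strings`, it does not insert a line break after each binary run (the sample bytes are illustrative):

```shell
# Sample line with stray control bytes embedded between fields.
printf 'app_id:1936 \001\002pgm_num:0 branch:TBNY\n' |
  # Delete every byte that is not tab, LF, CR or printable ASCII.
  tr -cd '\11\12\15\40-\176'
```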
10. Shell Programming and Scripting
Hi,
I have a log file that gets updated every second. Currently the size has grown to 20+ GB. I need a command/script that will get the actual size of the file and remove 50% of the data in it. I don't mind removing the data as the size has grown to huge... (8 Replies)
Discussion started by: Souvik Patra
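A sketch that keeps only the newest half of the lines; the file name is illustrative, and note that shrinking a log a writer still holds open generally needs a copy-and-truncate approach (as logrotate's copytruncate does) rather than the plain rename shown here:

```shell
cd "$(mktemp -d)"
seq 1 10 > app.log   # small stand-in for the 20+ GB log

# Count the lines, then rewrite the file with only its last half.
n=$(wc -l < app.log)
tail -n "$((n / 2))" app.log > app.log.tmp && mv app.log.tmp app.log
cat app.log
```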
LEARN ABOUT DEBIAN
bp_bulk_load_gff
BP_BULK_LOAD_GFF(1p) User Contributed Perl Documentation BP_BULK_LOAD_GFF(1p)
NAME
bulk_load_gff.pl - Bulk-load a Bio::DB::GFF database from GFF files.
SYNOPSIS
% bulk_load_gff.pl -d testdb dna1.fa dna2.fa features1.gff features2.gff ...
DESCRIPTION
This script loads a Bio::DB::GFF database with the features contained in a list of GFF files and/or FASTA sequence files. You must use the
exact variant of GFF described in Bio::DB::GFF. Various command-line options allow you to control which database to load and whether to
allow an existing database to be overwritten.
This script differs from bp_load_gff.pl in that it is hard-coded to use MySQL and cannot perform incremental loads. See bp_load_gff.pl for
an incremental loader that works with all databases supported by Bio::DB::GFF, and bp_fast_load_gff.pl for a MySQL loader that supports
fast incremental loads.
NOTES
If the filename is given as "-" then the input is taken from standard input. Compressed files (.gz, .Z, .bz2) are automatically
uncompressed.
FASTA format files are distinguished from GFF files by their filename extensions. Files ending in .fa, .fasta, .fast, .seq, .dna and their
uppercase variants are treated as FASTA files. Everything else is treated as a GFF file. If you wish to load FASTA files from STDIN,
then use the -f command-line switch with an argument of '-', as in
gunzip -c my_data.fa.gz | bp_bulk_load_gff.pl -d test -f -
The nature of the bulk load requires that the database be on the local machine and that the indicated user have the "file" privilege to
load the tables and have enough room in /usr/tmp (or whatever is specified by the $TMPDIR environment variable), to hold the tables
transiently.
Local data may now be uploaded to a remote server via the --local option with the database host specified in the dsn, e.g.
dbi:mysql:test:db_host
The adaptor used is dbi::mysqlopt. There is currently no way to change this.
About maxfeature: the default value is 100,000,000 bases. If you have features that are close to or greater than 100 Mb in length, then the
value of maxfeature should be increased to 1,000,000,000. This value must be a power of 10.
Note that Windows users must use the --create option.
If the list of GFF or fasta files exceeds the kernel limit for the maximum number of command-line arguments, use the --long_list
/path/to/files option.
COMMAND-LINE OPTIONS
Command-line options can be abbreviated to single-letter options. e.g. -d instead of --database.
--database <dsn> Database name (default dbi:mysql:test)
--adaptor Adaptor name (default mysql)
--create Reinitialize/create data tables without asking
--user Username to log in as
--fasta File or directory containing fasta files to load
--long_list Directory containing a very large number of
GFF and/or FASTA files
--password Password to use for authentication
(Does not work with Postgres, password must be
supplied interactively or be left empty for
ident authentication)
--maxbin Set the value of the maximum bin size
--local Flag to indicate that the data source is local
--maxfeature Set the value of the maximum feature size (power of 10)
--group A list of one or more tag names (comma or space separated)
to be used for grouping in the 9th column.
--gff3_munge Activate GFF3 name munging (see Bio::DB::GFF)
--summary Generate summary statistics for drawing coverage histograms.
This can be run on a previously loaded database or during
the load.
--Temporary Location of a writable scratch directory
SEE ALSO
Bio::DB::GFF, fast_load_gff.pl, load_gff.pl
AUTHOR
Lincoln Stein, lstein@cshl.org
Copyright (c) 2002 Cold Spring Harbor Laboratory
This library is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See DISCLAIMER.txt for
disclaimers of warranty.
perl v5.14.2 2012-03-02 BP_BULK_LOAD_GFF(1p)