Those commands are trying to load the entire line into memory at once, which won't work on a 32-bit system: there's a 4GB limit on per-process address space, and with enough other clutter in the way you probably can't get a contiguous 2.5GB chunk anyway.
How big are the individual records? You can tell awk to use something other than \n as its record separator, changing its definition of "lines". Use nawk or gawk if you have it; a sketch follows.
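For example, if the records turned out to be comma-separated (a placeholder assumption; substitute whatever delimiter your data really uses), this reads one record at a time instead of one 2.5GB line:
code:
    gawk 'BEGIN { RS = "," } { print }' hugefile > records.txt
awk then only ever holds one record in memory, and each record comes out on its own line.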
Hi,
I am new to UNIX scripting and would appreciate your help...
Input file contains only one (but long) record:
aaaaabbbbbcccccddddd.....
Desired file:
NEW RECORD #new record (hardcoded) added as first record - its length is irrelevant#
aaaaa
bbbbb
ccccc
ddddd
...
...
... (1 Reply)
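A minimal sketch of one approach, assuming every record is exactly five characters wide (infile and outfile are placeholder names):
code:
    { echo "NEW RECORD"; fold -w 5 infile; } > outfile
fold -w 5 breaks the single long line into five-character lines, and the hardcoded first record is simply printed ahead of them.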
All,
We receive a file with a large number of records (the count can vary) and we have to split it into two files based on another file, e.g.:
File1:
UHDR 2008112
"25187","00000022","00",21-APR-1991,"" ,"D",-000000519,+0000000000,"C", ,+000000000,+000000000,000000000,"2","" ,21-APR-1991... (7 Replies)
Hi ,
I have files coming into my system which are huge (MBs and GBs), and each file is a single line; there is no newline character.
I need only the last 700 bytes of each file. To get this I am splitting the files with "split -b 700 filename", but that gives all the split... (2 Replies)
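For what it's worth, tail can take the last N bytes directly, so splitting the whole file may be unnecessary. A sketch (filename and the output name are placeholders):
code:
    tail -c 700 filename > last_700_bytes
tail -c 700 reads only the end of the file, which stays cheap even on multi-GB inputs.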
Hi
I have to write a script to split a huge file into several pieces. The file is pipe (|) delimited. A data sample:
6625060|1420215|07308806|N|20100120|5572477081|+0002.79|+0000.00|0004|0001|......... (3 Replies)
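A sketch using split, assuming splitting on line boundaries is acceptable; the lines-per-piece count and the piece_ prefix are placeholders to tune:
code:
    split -l 1000000 datafile piece_
This writes piece_aa, piece_ab, ... of one million lines each, leaving the pipe-delimited fields untouched.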
Hi,
I have a requirement wherein I will get a single file, but there will be multiple headers.
Suppose, for example:
Header1
Data...
Data...
Header2
Data..
Data..
Header3
Data..
Data..
I want to split each header, with its corresponding data, into a separate file.
Please let me know how... (1 Reply)
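One hedged sketch, assuming every header line can be matched by a pattern like /^Header/ (adjust the regex to the real header format; file names are placeholders):
code:
    awk '/^Header/ { if (out) close(out); out = "section_" ++n ".txt" }
         out { print > out }' infile
Each header opens a new section_N.txt, and every following data line lands in the file of the most recent header.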
I have a big text file with the following format:
d1_03 fr:23
d1_03 fr:56
d1_03 fr:67
d1_03 fr:78
d1_01 fr:35
d1_01 fr:29
d1_01 fr:45
d2_09 fr:34
d2_09 fr:78
d3_98 fr:90
d3_98 fr:104
d3_98 fr:360
I have thousands of such lines.
I want to reformat this file based on column 1... (3 Replies)
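Since the thread is truncated, here is one common reading, assuming "based on column 1" means gathering all fr: values of each key onto one line (bigfile is a placeholder name):
code:
    awk '{ vals[$1] = vals[$1] " " $2 }
         END { for (k in vals) print k vals[k] }' bigfile
The output order of keys is unspecified; pipe through sort if ordering matters.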
Hi,
I have a huge file with a single line.
But I want to break that line into lines with five columns each.
My file is like this:
code:
"hi","there","how","are","you?","It","was","great","working","with","you.","hope","to","work","you."
I want it like this:
code:... (1 Reply)
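A sketch of one way to do it, assuming the fields are comma-separated with no embedded commas (infile is a placeholder name):
code:
    tr ',' '\n' < infile | paste -d, - - - - -
tr puts one field per line, and paste with five '-' arguments re-joins them five per line, comma-separated.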
Hi all,
I am new to scripting and I have a requirement.
We have a source file as follows:
HEADER 01.10.2010 14:32:37 NAYA
TA0022
TA0000
20000001;20060612;99991231;K4;02;3
20000008;20080624;99991231;K4;02;3
20000026;19840724;99991231;KK;01;3
20000027;19840724;99991231;KK;01;3... (6 Replies)
Hi, I want to fetch 100k records from a file that looks like the one below.
XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
... (17 Replies)
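The record layout is lost in the truncation, so this is only a sketch, assuming fixed-width records of a placeholder width REC_LEN bytes:
code:
    REC_LEN=200   # placeholder: replace with the real record width
    head -c "$((100000 * REC_LEN))" hugefile > first_100k_records
head -c reads just the needed bytes from the front of the file without ever loading the whole line.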
Hi Friends ,
Please guide me with the code to extract multiple files from one file.
The file looks like this (suppose the file holds a list of 2 tables; column lengths may vary):
H... -> File Header...
H... -> Table 1 Header...
D... -> Table 1 Data...
T... -> Table 1 Trailer...
H... -> Table 2... (1 Reply)
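A hedged sketch, assuming each table starts with an H record and the very first H line is the file-level header to skip (file names are placeholders):
code:
    awk '/^H/ { h++; if (h > 1) { if (out) close(out); out = "table_" (h-1) ".txt" } }
         h > 1 { print > out }' infile
Each table's header, data, and trailer lines land together in table_N.txt; the lone file header is dropped.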
Bio::SeqIO::tab(3pm) User Contributed Perl Documentation
NAME
Bio::SeqIO::tab - nearly raw sequence file input/output stream. Reads/writes id"\t"sequence"\n".
SYNOPSIS
Do not use this module directly. Use it via the Bio::SeqIO class.
DESCRIPTION
This object can transform Bio::Seq objects to and from tabbed flat file databases.
It is very useful when doing large scale stuff using the Unix command line utilities (grep, sort, awk, sed, split, you name it). Imagine
that you have a format converter 'seqconvert' along the following lines:
use Bio::SeqIO;
# $from and $to hold the formats given by the -from/-to command-line options
my $in  = Bio::SeqIO->newFh(-fh => \*STDIN,  '-format' => $from);
my $out = Bio::SeqIO->newFh(-fh => \*STDOUT, '-format' => $to);
print $out $_ while <$in>;
then you can very easily filter sequence files for duplicates as:
$ seqconvert < foo.fa -from fasta -to tab | sort -u |
seqconvert -from tab -to fasta > foo-unique.fa
Or grep [-v] for certain sequences with:
$ seqconvert < foo.fa -from fasta -to tab | grep -v '^S[a-z]*control' |
seqconvert -from tab -to fasta > foo-without-controls.fa
Or chop up a huge file with sequences into smaller chunks with:
$ seqconvert < all.fa -from fasta -to tab | split -l 10 - chunk-
$ for i in chunk-*; do seqconvert -from tab -to fasta < $i > $i.fa; done
# (this creates files chunk-aa.fa, chunk-ab.fa, ..., each containing 10
# sequences)
FEEDBACK
Mailing Lists
User feedback is an integral part of the evolution of this and other Bioperl modules. Send your comments and suggestions preferably to one
of the Bioperl mailing lists. Your participation is much appreciated.
bioperl-l@bioperl.org - General discussion
http://bioperl.org/wiki/Mailing_lists - About the mailing lists
Support
Please direct usage questions or support issues to the mailing list:
bioperl-l@bioperl.org
rather than to the module maintainer directly. Many experienced and responsive experts will be able to look at the problem and quickly address
it. Please include a thorough description of the problem with code and data examples if at all possible.
Reporting Bugs
Report bugs to the Bioperl bug tracking system to help us keep track of the bugs and their resolution. Bug reports can be submitted via the
web:
https://redmine.open-bio.org/projects/bioperl/
AUTHORS
Philip Lijnzaad, p.lijnzaad@med.uu.nl
APPENDIX
The rest of the documentation details each of the object methods. Internal methods are usually preceded with a _
next_seq
Title : next_seq
Usage : $seq = $stream->next_seq()
Function: returns the next sequence in the stream
Returns : Bio::Seq object
Args :
write_seq
Title : write_seq
Usage : $stream->write_seq($seq)
Function: writes the $seq object into the stream
Returns : 1 for success and 0 for error
Args : Bio::Seq object