Perl script error to split huge data one by one - Post 302377017 by skmdu on Thursday 3rd of December 2009, 12:47:21 AM
Code:
$ cat test.pl

#!/usr/bin/perl
use strict;
use warnings;

# For each ">seq" header line, read the sequence line that follows it
# and print the header together with the sequence length.
open(my $fh, '<', $ARGV[0]) or die "Cannot open $ARGV[0]: $!";

while (<$fh>) {
        chomp;
        if (/>seq/) {
                my $next = <$fh>;        # the sequence line itself
                chomp $next;
                my $len = length $next;  # character count of the sequence
                print "$_ $len\n";
        }
}

$ perl test.pl inputfile > outputfile
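
To see what it does, here's a sample run with a made-up two-record input (my own example, assuming each sequence sits on a single line below its header):

Code:
$ cat inputfile
>seq_1
MSNQSPPQSQRPGHSH
>seq_2
AGAAGRGWGRD

$ perl test.pl inputfile
>seq_1 16
>seq_2 11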

 

Perl6::Slurp(3pm) - User Contributed Perl Documentation

NAME
    Perl6::Slurp - Implements the Perl 6 'slurp' built-in

SYNOPSIS
    use Perl6::Slurp;

    # Slurp a file by name...
    $file_contents = slurp 'filename';
    $file_contents = slurp '<filename';
    $file_contents = slurp '<', 'filename';
    $file_contents = slurp '+<', 'filename';

    # Slurp a file via an (already open!) handle...
    $file_contents = slurp \*STDIN;
    $file_contents = slurp $filehandle;
    $file_contents = slurp IO::File->new('filename');

    # Slurp a string...
    $str_contents = slurp \$string;
    $str_contents = slurp '<', \$string;

    # Slurp a pipe...
    $str_contents = slurp 'tail -20 $filename |';
    $str_contents = slurp '-|', 'tail', -20, $filename;

    # Slurp with no source slurps from whatever $_ indicates...
    for (@files) {
        $contents .= slurp;
    }

    # ...or from the entire ARGV list, if $_ is undefined...
    $_ = undef;
    $ARGV_contents = slurp;

    # Specify I/O layers as part of mode...
    $file_contents = slurp '<:raw', $file;
    $file_contents = slurp '<:utf8', $file;
    $file_contents = slurp '<:raw :utf8', $file;

    # Specify I/O layers as separate options...
    $file_contents = slurp $file, {raw=>1};
    $file_contents = slurp $file, {utf8=>1};
    $file_contents = slurp $file, {raw=>1}, {utf8=>1};
    $file_contents = slurp $file, [raw=>1, utf8=>1];

    # Specify input record separator...
    $file_contents = slurp $file, {irs=>"\n\n"};
    $file_contents = slurp '<', $file, {irs=>"\n\n"};
    $file_contents = slurp {irs=>"\n\n"}, $file;

    # Input record separator can be regex...
    $file_contents = slurp $file, {irs=>qr/\n+/};
    $file_contents = slurp '<', $file, {irs=>qr/\n+|\t{2,}/};

    # Specify autochomping...
    $file_contents = slurp $file, {chomp=>1};
    $file_contents = slurp {chomp=>1}, $file;
    $file_contents = slurp $file, {chomp=>1, irs=>"\n\n"};
    $file_contents = slurp $file, {chomp=>1, irs=>qr/\n+/};

    # Specify autochomping that replaces irs
    # with another string...
    $file_contents = slurp $file, {irs=>"\n\n", chomp=>"\n"};
    $file_contents = slurp $file, {chomp=>"\n"}, {irs=>qr/\n+/};

    # Specify autochomping that replaces
    # irs with a dynamically computed string...
    my $n = 1;
    $file_contents = slurp $file, {chomp=>sub{ "\n#line ".$n++."\n" }};

    # Slurp in a list context...
    @lines = slurp 'filename';
    @lines = slurp $filehandle;
    @lines = slurp \$string;
    @lines = slurp '<:utf8', 'filename', {irs=>"\x{2020}", chomp=>"\n"};
"slurp" takes: o a filename, o a filehandle, o a typeglob reference, o an IO::File object, or o a scalar reference, converts it to an input stream if necessary, and reads in the entire stream. If "slurp" fails to set up or read the stream, it throws an exception. If no data source is specified "slurp" uses the value of $_ as the source. If $_ is undefined, "slurp" uses the @ARGV list, and magically slurps the contents of all the sources listed in @ARGV. Note that the same magic is also applied if you explicitly slurp <*ARGV>, so the following three input operations: $contents = join "", <ARGV>; $contents = slurp *ARGV; $/ = undef; $contents = slurp; are identical in effect. In a scalar context "slurp" returns the stream contents as a single string. If the stream is at EOF, it returns an empty string. In a list context, it splits the contents after the appropriate input record separator and returns the resulting list of strings. You can set the input record separator ("{ irs => $your_irs_here}") for the input operation. The separator can be specified as a string or a regex. Note that an explicit input record separator has no effect in a scalar context, since "slurp" always reads in everything anyway. In a list context, changing the separator can change how the input is broken up within the list that is returned. If an input record separator is not explicitly specified, "slurp" defaults to " " (not to the current value of $/ X since Perl 6 doesn't have a $/); You can also tell "slurp" to automagically "chomp" the input as it is read in, by specifying: ("{ chomp => 1 }") Better still, you can tell "slurp" to automagically "chomp" the input and replace what it chomps with another string, by specifying: ("{ chomp => "another string" }") You can also tell "slurp" to compute the replacement string on-the-fly by specifying a subroutine as the "chomp" value: ("{ chomp => sub{...} }"). This subroutine is passed the string being chomped off, so for example you could squeeze single newlines to a single space and multiple conseqcutive newlines to a two newlines with: sub squeeze { my ($removed) = @_; if ($removed =~ tr/ / / == 1) { return " " } else { return " "; } } print slurp(*DATA, {irs=>qr/[ ]* +/, chomp=>&squeeze}), " "; Which would transform: This is the first paragraph This is the second paragraph This, the third This one is the very last to: This is the first paragraph This is the second paragraph This, the third This one is the very last Autochomping works in both scalar and list contexts. In scalar contexts every instance of the input record separator will be removed (or replaced) within the returned string. In list context, each list item returned with its terminating separator removed (or replaced). 
    You can specify I/O layers, either using the Perl 5 notation:

        slurp "<:layer1 :layer2 :etc", $filename;

    or as an array of options:

        slurp $filename, [layer1=>1, layer2=>1, etc=>1];
        slurp [layer1=>1, layer2=>1, etc=>1], $filename;

    or as individual options (each of which must be in a separate hash):

        slurp $filename, {layer1=>1}, {layer2=>1}, {etc=>1};
        slurp {layer1=>1}, {layer2=>1}, {etc=>1}, $filename;

    (...which, of course, would look much cooler in Perl 6:

        # Perl 6 only :-(
        slurp $filename, :layer1 :layer2 :etc;
        slurp :layer1 :layer2 :etc, $filename;

    )

    A common mistake is to put all the options together in one hash:

        slurp $filename, {layer1=>1, layer2=>1, etc=>1};

    This is almost always a disaster, since the order of I/O layers is
    usually critical, and placing them all in one hash effectively
    randomizes that order. Use an array instead:

        slurp $filename, [layer1=>1, layer2=>1, etc=>1];
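    To make the ordering issue concrete, here is a minimal sketch using
    the :raw and :utf8 layers shown in the SYNOPSIS; the file name
    data.txt is a placeholder of mine:

        # Safe: the array preserves order, so :raw is pushed first
        # and :utf8 then decodes the raw bytes.
        $file_contents = slurp 'data.txt', [raw=>1, utf8=>1];

        # Risky: a single hash gives no guarantee that :raw is
        # pushed before :utf8.
        $file_contents = slurp 'data.txt', {raw=>1, utf8=>1};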
WARNING
    The syntax and semantics of Perl 6 are still being finalized and
    consequently are at any time subject to change. That means the same
    caveat applies to this module.

DEPENDENCIES
    Requires: Perl 5.8.0, Perl6::Export

AUTHOR
    Damian Conway (damian@conway.org)

COPYRIGHT
    Copyright (c) 2003-2012, Damian Conway. All Rights Reserved. This
    module is free software. It may be used, redistributed and/or modified
    under the same terms as Perl itself.