Hi Madhan,
No, it's taking double the time.
Script:
#!/usr/local/bin/perl
use strict;
use warnings;

open(my $fh, "<", $ARGV[0]) or die "unable to open $ARGV[0]: <$!>\n";

# Read the file in 1000-byte chunks. read() returns the number of bytes
# actually read, so the final short chunk is handled by the same loop
# and no separate "last batch" pass is needed.
my $data;
while (read($fh, $data, 1000)) {
    foreach my $c (split //, $data) {
        # keep printable ASCII characters (space through tilde)
        print $c   if ord($c) >= 32 && ord($c) <= 126;
        # print a newline whenever a linefeed (ord 10) is seen
        print "\n" if ord($c) == 10;
    }
}
close($fh);
So I am reverting to the old logic, which took 10 minutes for a 300 MB file.
Please let me know if anything else can be done.
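One idea I was also wondering about (just a rough sketch, not tested or timed yet, and assuming the goal is still to keep printable ASCII plus newlines, with the filename in $ARGV[0] and an arbitrary chunk size) is to let tr/// filter each chunk in one pass instead of looping over every character:

#!/usr/local/bin/perl
use strict;
use warnings;

open(my $fh, "<", $ARGV[0]) or die "unable to open $ARGV[0]: <$!>\n";

my $data;
while (read($fh, $data, 65536)) {
    # delete everything except printable ASCII (space..tilde) and linefeed
    $data =~ tr/\x20-\x7e\n//cd;
    print $data;
}
close($fh);

Since tr/// runs inside perl itself rather than in a per-character Perl loop, I would expect it to be faster, but that is only a guess until it is actually timed on the 300 MB file.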
Thank you for your support and help on this.