Shell Programming and Scripting: Problem running Perl Script with huge data files
Post 302436137 by jim mcnamara on Friday 9th of July 2010 09:39:16 AM
You are probably exceeding the limit of virtual memory. You must be keeping an array that grows without bounds.

If the files are really big, like > 2GB, consider asking the sysadmin to add more swap space. I personally believe that showing more of your code would help more than adding swap space. I think you are hogging memory in your code.
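A very common cause of this is slurping the whole file into an array (for example @lines = <FH>), which keeps every line in memory at once. Below is a minimal sketch of the usual fix, reading line by line so memory use stays flat regardless of file size; the file name and the per-line work are hypothetical, since the original code is not shown:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical input file; replace with the real path.
my $file = 'huge_data.txt';

open my $fh, '<', $file or die "Cannot open $file: $!";
my $count = 0;
while (my $line = <$fh>) {        # one line in memory at a time
    chomp $line;
    $count++;                     # do the per-line work here instead of storing @lines
}
close $fh;
print "Processed $count lines\n";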
 

10 More Discussions You Might Find Interesting

1. Shell Programming and Scripting

Shell script to check the unique numbers in huge data

Friends, I have to write a shell script; the description is: I have to check the uniqueness of the numbers in a file. A file contains 200 thousand tickets, and a ticket has 15 numbers in ascending order. And there is a strip that has 6 tickets, which means 90 numbers. I... (7 Replies)
Discussion started by: namishtiwari
7 Replies
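One common approach to this kind of uniqueness check is a hash keyed by the number, which needs memory only in proportion to the count of distinct numbers. A hedged sketch; the file name and whitespace-separated layout are assumptions, since the full format is not shown:

#!/usr/bin/perl
use strict;
use warnings;

my %seen;
open my $fh, '<', 'tickets.txt' or die "Cannot open tickets.txt: $!";   # hypothetical file name
while (my $line = <$fh>) {
    for my $num (split ' ', $line) {                # assumes numbers are whitespace-separated
        print "duplicate: $num\n" if $seen{$num}++;
    }
}
close $fh;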

2. Shell Programming and Scripting

Perl script for extract data from xml files

Hi All, I am preparing a perl script for extracting data from an xml file. The xml data looks like: AC StartTime="1227858839" ID="88" ETime="1227858837" DSTFlag="false" Type="2" Duration="303" /> <AS StartTime="1227858849" SigPairs="119 40 98 15 100 32 128 18 131 23 70 39 123 20 120 27 100 17 136 12... (3 Replies)
Discussion started by: allways4u21
3 Replies
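For well-formed XML a real parser (XML::Twig, XML::LibXML) is the robust choice; for a quick pull of attributes from elements shaped like the sample above, a plain regex sketch can work. The attribute names come from the sample, while the file name and the assumption that each element sits on one line are hypothetical:

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', 'data.xml' or die "Cannot open data.xml: $!";   # hypothetical file name
while (my $line = <$fh>) {
    # Pull selected attributes out of elements such as <AC StartTime="..." ID="..." .../>
    if ($line =~ /<AC\s[^>]*StartTime="(\d+)"[^>]*ID="(\d+)"/) {
        print "AC element: StartTime=$1 ID=$2\n";
    }
}
close $fh;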

3. Shell Programming and Scripting

Split a huge data into few different files?!

Input file data contents: >seq_1 MSNQSPPQSQRPGHSHSHSHSHAGLASSTSSHSNPSANASYNLNGPRTGGDQRYRASVDA >seq_2 AGAAGRGWGRDVTAAASPNPRNGGGRPASDLLSVGNAGGQASFASPETIDRWFEDLQHYE >seq_3 ATLEEMAAASLDANFKEELSAIEQWFRVLSEAERTAALYSLLQSSTQVQMRFFVTVLQQM ARADPITALLSPANPGQASMEAQMDAKLAAMGLKSPASPAVRQYARQSLSGDTYLSPHSA... (7 Replies)
Discussion started by: patrick87
7 Replies
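For data in this >header-plus-sequence layout, one memory-friendly approach is to stream the input and open a new output file each time a header line appears, rather than loading everything first. A sketch, with the input path and the per-sequence output naming as assumptions:

#!/usr/bin/perl
use strict;
use warnings;

open my $in, '<', 'sequences.fasta' or die "Cannot open input: $!";   # hypothetical input name
my $out;
while (my $line = <$in>) {
    if ($line =~ /^>(\S+)/) {                                  # header line such as >seq_1
        close $out if $out;
        open $out, '>', "$1.fa" or die "Cannot open $1.fa: $!";   # one output file per sequence
    }
    print {$out} $line if $out;
}
close $out if $out;
close $in;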

4. Shell Programming and Scripting

Perl script error when splitting huge data one by one.

Below is my perl script:
#!/usr/bin/perl
open(FILE,"$ARGV") or die "$!";
@DATA = <FILE>;
close FILE;
$join = join("",@DATA);
@array = split( ">",$join);
for($i=0;$i<=scalar(@array);$i++){
system ("/home/bin/./program_name_count_length MULTI_sequence_DATA_FILE -d... (5 Replies)
Discussion started by: patrick87
5 Replies
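The script above reads the whole file into @DATA and then joins and splits it, so several copies of the data sit in memory at once. One possible alternative, sketched below, sets the input record separator to ">" so only one record is held at a time; the temporary file name is copied from the post and the downstream command is left as a commented placeholder because it is truncated in the original:

#!/usr/bin/perl
use strict;
use warnings;

open my $fh, '<', $ARGV[0] or die "Cannot open $ARGV[0]: $!";
local $/ = '>';                              # read one '>'-delimited record at a time
while (my $record = <$fh>) {
    chomp $record;                           # strips the trailing '>' delimiter
    next unless $record =~ /\S/;             # skip the empty chunk before the first '>'
    open my $tmp, '>', 'MULTI_sequence_DATA_FILE' or die $!;   # name taken from the post
    print {$tmp} ">$record";
    close $tmp;
    # system("/home/bin/program_name_count_length MULTI_sequence_DATA_FILE ...");  # truncated in the post
}
close $fh;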

5. Shell Programming and Scripting

running perl script problem

While executing the perl script it gives some compiling issue, please help out.
$inputFilename="c:\allways.pl";
open (FILEH,$inputFilename) or die "Could not open log file";
Error : Could not open log file at c:\allways.pl line 4
Learner in Perl (1 Reply)
Discussion started by: allways4u21
1 Replies
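A likely cause here is the backslash inside a double-quoted string: in Perl, "c:\allways.pl" turns \a into an alarm (bell) character, so the path the script actually tries to open is not the one on disk. A minimal sketch of the usual fixes, with $! added to the error so the OS reason for the failure is visible:

#!/usr/bin/perl
use strict;
use warnings;

# Either use forward slashes (Windows accepts them) ...
my $inputFilename = "c:/allways.pl";
# ... or single quotes / escaped backslashes: 'c:\allways.pl' or "c:\\allways.pl"

open my $fileh, '<', $inputFilename
    or die "Could not open $inputFilename: $!";   # $! shows why the open failed
print while <$fileh>;
close $fileh;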

6. Shell Programming and Scripting

Three Difference File Huge Data Comparison Problem.

I got three different files:
Part of File 1
ARTPHDFGAA
.
.
Part of File 2
ARTGHHYESA
.
.
Part of File 3
ARTPOLYWEA
.
. (4 Replies)
Discussion started by: patrick87
4 Replies

7. Shell Programming and Scripting

Help- counting delimiter in a huge file and split data into 2 files

I'm new to Linux scripting and not sure how to filter out bad records from huge flat files (over 1.3GB each). The delimiter is a semicolon ";". Here is a sample of the lines in the file:
Name1;phone1;address1;city1;state1;zipcode1
Name2;phone2;address2;city2;state2;zipcode2;comment... (7 Replies)
Discussion started by: lv99
7 Replies
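One way to route records by field count is to count the semicolons on each line with tr/// and write the line to a good or bad file accordingly; a record with six fields has exactly five semicolons. A sketch with hypothetical file names:

#!/usr/bin/perl
use strict;
use warnings;

open my $in,   '<', 'input.txt' or die "Cannot open input.txt: $!";   # hypothetical names
open my $good, '>', 'good.txt'  or die $!;
open my $bad,  '>', 'bad.txt'   or die $!;

while (my $line = <$in>) {
    my $semis = ($line =~ tr/;/;/);             # count semicolons without changing the line
    print {$semis == 5 ? $good : $bad} $line;   # 6 fields => exactly 5 semicolons
}
close $_ for $in, $good, $bad;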

8. Shell Programming and Scripting

Perl: Need help comparing huge files

What do I need to do to have the below perl program load 205 million record files into the hash? It currently works on smaller files, but not on huge files. Any idea what I need to modify to make it work with huge files:
#!/usr/bin/perl
$ot1=$ARGV;
$ot2=$ARGV;
open(mfileot1,... (12 Replies)
Discussion started by: mrn6430
12 Replies
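An in-memory hash of 205 million keys will usually exhaust RAM. Two common workarounds are to sort both files externally and compare them as streams, or to tie the hash to an on-disk store so lookups go to disk. A hedged sketch of the tied-hash idea using the DB_File module (bundled with many Perl builds); the file names are assumptions:

#!/usr/bin/perl
use strict;
use warnings;
use DB_File;
use Fcntl;

my %keys;
tie %keys, 'DB_File', 'keys.db', O_RDWR | O_CREAT, 0644, $DB_HASH
    or die "Cannot tie keys.db: $!";

open my $fh, '<', 'file1.dat' or die $!;     # hypothetical first file
while (<$fh>) {
    chomp;
    $keys{$_} = 1;                           # stored on disk, not in RAM
}
close $fh;

open my $fh2, '<', 'file2.dat' or die $!;    # hypothetical second file
while (<$fh2>) {
    chomp;
    print "only in file2: $_\n" unless exists $keys{$_};
}
close $fh2;
untie %keys;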

9. Shell Programming and Scripting

In Perl script: need to read the data from one file and generate multiple files based on the data

We have data that looks like the below in a log file. I want to generate files based on the string between two hash (#) symbols, like below. Source:
#ext1#test1.tale2 drop
#ext1#test11.tale21 drop
#ext1#test123.tale21 drop
#ext2#test1.tale21 drop
#ext2#test12.tale21 drop
#ext3#test11.tale21 drop... (5 Replies)
Discussion started by: Sanjeev G
5 Replies
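One straightforward approach is to capture the token between the two # characters on each line and append the line to a file named after that token, keeping a hash of already-opened handles. A sketch; the log file name and the output naming scheme are assumptions:

#!/usr/bin/perl
use strict;
use warnings;

my %out;                                     # one filehandle per #...# key
open my $in, '<', 'source.log' or die "Cannot open source.log: $!";   # hypothetical log name
while (my $line = <$in>) {
    next unless $line =~ /^#([^#]+)#/;       # e.g. "#ext1#test1.tale2 drop" => key "ext1"
    my $key = $1;
    unless ($out{$key}) {
        open $out{$key}, '>', "$key.txt" or die "Cannot open $key.txt: $!";
    }
    print {$out{$key}} $line;
}
close $_ for values %out;
close $in;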

10. UNIX for Advanced & Expert Users

File comparisons for huge data files (around 60G) - need the most optimized and best way to do this

I have 2 large files (.dat), around 70 GB, 12 columns, but the data is not sorted in either file. I need your inputs on the best optimized method/command to achieve this and redirect the non-matching lines to a third file (diff.dat).
File 1 - 15 columns
File 2 - 15 columns
Data is... (9 Replies)
Discussion started by: kartikirans
9 Replies
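At 60 to 70 GB per file, loading either side into memory is off the table; the usual pattern is to sort both files first (for example with an external sort utility) and then walk the two sorted streams in one pass, writing lines that appear in only one file to diff.dat. A sketch of the merge-compare step, assuming the inputs have already been sorted and the sorted file names shown here:

#!/usr/bin/perl
use strict;
use warnings;

# Assumes file1.sorted and file2.sorted were produced beforehand (e.g. by an external sort).
open my $fh1,  '<', 'file1.sorted' or die $!;
open my $fh2,  '<', 'file2.sorted' or die $!;
open my $diff, '>', 'diff.dat'     or die $!;

my $l1 = <$fh1>;
my $l2 = <$fh2>;
while (defined $l1 and defined $l2) {
    my $cmp = $l1 cmp $l2;
    if    ($cmp == 0) { $l1 = <$fh1>; $l2 = <$fh2>; }        # line present in both files
    elsif ($cmp < 0)  { print {$diff} $l1; $l1 = <$fh1>; }   # only in file 1
    else              { print {$diff} $l2; $l2 = <$fh2>; }   # only in file 2
}
while (defined $l1) { print {$diff} $l1; $l1 = <$fh1>; }     # leftovers from file 1
while (defined $l2) { print {$diff} $l2; $l2 = <$fh2>; }     # leftovers from file 2
close $_ for $fh1, $fh2, $diff;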
swap(1M)                                                  System Administration Commands                                                  swap(1M)

NAME
     swap - swap administrative interface

SYNOPSIS
     /usr/sbin/swap -a swapname [swaplow] [swaplen]
     /usr/sbin/swap -d swapname [swaplow]
     /usr/sbin/swap -l
     /usr/sbin/swap -s

DESCRIPTION
     The swap utility provides a method of adding, deleting, and monitoring the system swap areas used by the memory manager.

OPTIONS
     The following options are supported:

     -a swapname
          Add the specified swap area. This option can only be used by the super-user. swapname is the name of the swap file: for example, /dev/dsk/c0t0d0s1 or a regular file. swaplow is the offset in 512-byte blocks into the file where the swap area should begin. swaplen is the desired length of the swap area in 512-byte blocks. The value of swaplen can not be less than 16. For example, if n blocks are specified, then (n-1) blocks would be the actual swap length. swaplen must be at least one page in length. The size of a page of memory can be determined by using the pagesize command. See pagesize(1).

          Since the first page of a swap file is automatically skipped, and a swap file needs to be at least one page in length, the minimum size should be a multiple of 2 pagesize bytes. The size of a page of memory is machine dependent.

          swaplow + swaplen must be less than or equal to the size of the swap file. If swaplen is not specified, an area will be added starting at swaplow and extending to the end of the designated file. If neither swaplow nor swaplen are specified, the whole file will be used except for the first page.

          Swap areas are normally added automatically during system startup by the /sbin/swapadd script. This script adds all swap areas which have been specified in the /etc/vfstab file; for the syntax of these specifications, see vfstab(4).

          To use an NFS or local file-system swapname, you should first create a file using mkfile(1M). A local file-system swap file can now be added to the running system by just running the swap -a command. For NFS mounted swap files, the server needs to export the file. Do this by performing the following steps:

          1. Add the following line to /etc/dfs/dfstab:

             share -F nfs -o rw=clientname,root=clientname path-to-swap-file

          2. Run shareall(1M).

          3. Have the client add the following line to /etc/vfstab:

             server:path-to-swap-file - local-path-to-swap-file nfs - - -
             local-path-to-swap-file - - swap - - -

          4. Have the client run mount:

             # mount local-path-to-swap-file

          5. The client can then run swap -a to add the swap space:

             # swap -a local-path-to-swap-file

     -d swapname
          Delete the specified swap area. This option can only be used by the super-user. swapname is the name of the swap file: for example, /dev/dsk/c0t0d0s1 or a regular file. swaplow is the offset in 512-byte blocks into the swap area to be deleted. If swaplow is not specified, the area will be deleted starting at the second page. When the command completes, swap blocks can no longer be allocated from this area and all swap blocks previously in use in this swap area have been moved to other swap areas.

     -l   List the status of all the swap areas. The output has five columns:

          path      The path name for the swap area.
          dev       The major/minor device number in decimal if it is a block special device; zeroes otherwise.
          swaplo    The swaplow value for the area in 512-byte blocks.
          blocks    The swaplen value for the area in 512-byte blocks.
          free      The number of 512-byte blocks in this area that are not currently allocated.

          The list does not include swap space in the form of physical memory because this space is not associated with a particular swap area.

          If swap -l is run while swapname is in the process of being deleted (by swap -d), the string INDEL will appear in a sixth column of the swap stats.

     -s   Print summary information about total swap space usage and availability:

          allocated   The total amount of swap space in bytes currently allocated for use as backing store.
          reserved    The total amount of swap space in bytes not currently allocated, but claimed by memory mappings for possible future use.
          used        The total amount of swap space in bytes that is either allocated or reserved.
          available   The total swap space in bytes that is currently available for future reservation and allocation.

          These numbers include swap space from all configured swap areas as listed by the -l option, as well as swap space in the form of physical memory.

USAGE
     On the 32-bit operating system, only the first 2 Gbytes -1 are used for swap devices greater than or equal to 2 Gbytes in size. On the 64-bit operating system, a block device larger than 2 Gbytes can be fully utilized for swap up to 2**63 -1 bytes.

ENVIRONMENT VARIABLES
     See environ(5) for descriptions of the following environment variables that affect the execution of swap: LC_CTYPE and LC_MESSAGES.

ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     |Availability                 |SUNWcsu                      |
     +-----------------------------+-----------------------------+

SEE ALSO
     pagesize(1), mkfile(1M), shareall(1M), getpagesize(3C), vfstab(4), attributes(5), largefile(5)

WARNINGS
     No check is done to determine if a swap area being added overlaps with an existing file system.

SunOS 5.10                        20 Jan 2004                         swap(1M)