UNIX for Dummies Questions & Answers: Large file data handling issue
Post 302731319 by Gurkamal83, Wednesday 14 November 2012, 08:06 PM
How can I overcome this limitation?
 

10 More Discussions You Might Find Interesting

1. HP-UX

Need to split a large data file using a Unix script

Greetings all: I am still new to the Unix environment and I need help with the following requirement. I have a large sequential file, sorted on a field (say store#), that must be split into several smaller files, one for each store. That means if there are 500 stores, there will be 500 files. This... (1 Reply; a sketch follows below)
Discussion started by: SAIK
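A minimal sketch of the usual single-pass approach, assuming whitespace-separated fields with the store number in field 1 (both placeholder assumptions). Because the input is sorted on the store field, each output file can be closed as soon as its run of rows ends, which keeps awk under its open-file limit:

  awk '$1 != prev { if (prev != "") close(out); prev = $1; out = "store_" $1 ".txt" }
       { print > out }' sorted_input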

2. Shell Programming and Scripting

Performance issue in UNIX while generating .dat file from large text file

Hello Gurus, we are facing a performance issue in UNIX. If someone has faced this kind of issue in the past, please share your suggestions. Problem definition: a few of the load processes of our Finance application hit this issue in UNIX when they use a shell script having below... (19 Replies; a sketch follows below)
Discussion started by: KRAMA
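The excerpt is cut off before the script itself, but a frequent culprit in slow .dat generation is a shell read-loop that forks an external command (cut, sed, expr) once per input line. A hedged sketch of the usual repair, with file names and field numbers as placeholders:

  # slow pattern: one cut process forked per line
  # while IFS= read -r line; do
  #     echo "$line" | cut -d'|' -f1,3 >> out.dat
  # done < big.txt

  # faster: a single awk process streams the whole file
  awk -F'|' 'BEGIN { OFS = "|" } { print $1, $3 }' big.txt > out.dat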

3. Shell Programming and Scripting

Extract data from a large file (80+ million records)

Hello, I have one file with more than 120 million records (35 GB in size). I have to extract some relevant data from the file based on a parameter and generate another output file. What would be the best and fastest way to produce the new file? Sample file format:--... (2 Replies; a sketch follows below)
Discussion started by: learner16s
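The sample format is truncated, so this is only a sketch of the standard single-pass filter, assuming pipe-delimited rows with the selection parameter in field 2 (both placeholders):

  # one sequential read of the 35 GB file; no temporary copies
  awk -F'|' '$2 == "TARGET_VALUE"' hugefile > extracted.out

When the filter is a plain substring rather than a field comparison, LC_ALL=C grep -F 'TARGET' hugefile is usually the fastest option, since the C locale avoids multibyte processing.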

4. Shell Programming and Scripting

UNIX file handling: issue in reading a file

I have been automating a daily check activity for a server, using SQL queries to retrieve the data and a while loop to read the data from the file for several activities. But I hit a showstopper with the one below, where the data is getting stored in $temp_file but is not being read by the while... (1 Reply; a sketch follows below)
Discussion started by: KuldeepSinghTCS
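The classic cause of a while loop that never sees the data is piping into it, which runs the loop body in a subshell. A sketch of the usual repair, reusing the $temp_file name from the post (the loop body is a placeholder):

  # problematic pattern: the loop runs in a subshell
  # cat "$temp_file" | while read line; do ... ; done

  # redirect into the loop instead, so it runs in the current shell
  while IFS= read -r line
  do
      printf '%s\n' "$line"   # placeholder for the real per-line work
  done < "$temp_file"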

5. Shell Programming and Scripting

Severe performance issue while 'grep'ing on large volume of data

Background: the Unix flavor can be any of Solaris, AIX, HP-UX, or Linux. I have the two flat files below. File-1 contains 50,000 rows with 2 fields in each row, separated by pipe. The row structure is Object_Id|Object_Name, as follows: 111|XXX 222|YYY 333|ZZZ ... (6 Replies; a sketch follows below)
Discussion started by: Souvik
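The excerpt stops before the second file is described, but the usual cure for looping grep over 50,000 keys is a single hash lookup in awk, assuming File-2 also carries the Object_Id in its first pipe-separated field (an assumption):

  # pass 1 (File-1): remember every id; pass 2 (File-2): print matching rows
  awk -F'|' 'NR == FNR { id[$1]; next }
             $1 in id' File-1 File-2 > matched.out

grep -F -f would also work, but it matches the keys anywhere on the line, so the exact-field comparison in awk is safer when ids can appear inside other fields.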

6. Red Hat

Advice regarding filesystems handling large number of files

Hi All, I have a CentOS operating system installed. I work with a really huge number of files, which are not only huge in number but, in some cases, really huge in size: there can be 1 to 2 million files in a single directory, and some of the files are several gigabytes in... (2 Replies; a sketch follows below)
Discussion started by: shoaibjameel123
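Beyond choosing a filesystem with hashed directory indexes (ext4 with dir_index, or XFS), the standard mitigation is to avoid putting millions of files in a single directory at all. A sketch that spreads an existing flat directory into 256 buckets keyed on an md5 prefix of each file name (paths are placeholders; bash syntax):

  # move each file into /data/bucketed/00 .. /data/bucketed/ff
  for f in /data/flat/*; do
      b=$(printf '%s' "${f##*/}" | md5sum | cut -c1-2)
      mkdir -p "/data/bucketed/$b"
      mv -- "$f" "/data/bucketed/$b/"
  done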

7. Shell Programming and Scripting

UNIX file handling issue

I have a huge file in which the fields are semicolon (;) separated and the records are pipe (|) delimited, e.g. abc;def;ghi|jkl;mno;pqr|123;456;789. I need to replace the 50th (semicolon-separated) field of each record with 9006. The 50th field can be empty, e.g. ;;. Can someone help me with the appropriate command? (3 Replies; a sketch follows below)
Discussion started by: Gurkamal83
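A minimal awk sketch for exactly that layout (records split on |, fields on ;), replacing field 50 of every record with 9006; split() still produces the empty 50th element for the ;; case, so the overwrite works there too:

  awk -F'|' 'BEGIN { OFS = "|" }
  {
      for (i = 1; i <= NF; i++) {
          n = split($i, f, ";")
          if (n >= 50) {
              f[50] = "9006"                 # works even when field 50 is empty
              rec = f[1]
              for (j = 2; j <= n; j++) rec = rec ";" f[j]
              $i = rec
          }
      }
      print
  }' infile > outfile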

8. UNIX for Dummies Questions & Answers

File handling issue

Hi All, I am running into an issue. I have a very big file and want to split it into smaller chunks. The file has multiple headers/trailers, with records between each header/trailer pair, and the number of records per pair can vary. Also, headers can start with... (3 Replies; a sketch follows below)
Discussion started by: Gurkamal83
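The header pattern is truncated in the excerpt, so this is only a sketch: an awk pass that opens a new chunk file at every header line, with ^HDR standing in for the real header prefix:

  # chunk_1, chunk_2, ... each start at a header line; anything
  # before the first header is skipped
  awk '/^HDR/ { if (out) close(out); out = "chunk_" ++n }
       out    { print > out }' bigfile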

9. Shell Programming and Scripting

Output large volume of data to CSV file

I have a program that outputs the ownership and permissions of each directory and file on the server to a CSV file. I am getting an error message when I run the program, and nothing is written to the CSV file. Error: the file access permissions do not allow the specified action cannot... (2 Replies; a sketch follows below)
Discussion started by: dellanicholson
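The quoted error usually means the script cannot create the CSV where it is pointing, not that the scan itself failed, so the first fix is writing to a directory the user owns. Where GNU find is available, the whole report can also be produced in one command; a sketch with placeholder paths:

  # one CSV row per object: path,owner,group,octal mode
  find /target/dir -printf '%p,%u,%g,%m\n' > "$HOME/permissions.csv"

Note that paths containing commas or newlines will break naive CSV parsing; quote or filter them if that matters downstream.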

10. Shell Programming and Scripting

Large file masking happening incorrectly: Ç delimiter issue

The OS version is Red Hat Enterprise Linux Server release 6.10. I have a script that masks some columns with **** in a data file delimited with Ç, using awk for the masking. When I mask a small file, awk works fine and masks the required column, but when the file is... (6 Replies; a sketch follows below)
Discussion started by: LinuxUser8092
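Ç is a two-byte character in UTF-8, and multibyte delimiters are a common way for an awk script to behave differently across files: if the big file mixes encodings (Latin-1 Ç is the single byte 0xC7, UTF-8 Ç is the pair 0xC3 0x87), the field split silently stops lining up. A hedged sketch that forces bytewise processing so the delimiter is matched consistently, assuming the data and the script are both UTF-8 and the column to mask is field 3 (placeholders):

  # C locale: awk treats FS as plain bytes, avoiding multibyte parsing
  LC_ALL=C awk -F'Ç' 'BEGIN { OFS = FS } { $3 = "****"; print }' data.in > data.out

If masking works on small files but not large ones, checking the large file for mixed encodings (e.g. with file or iconv) is a good first diagnostic.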
memx(8) 						      System Manager's Manual							   memx(8)

NAME
memx - memory exerciser

SYNOPSIS
/usr/field/memx -s [-h] [-ofile] [-ti] [-mj] [-pk]

OPTIONS
The memx options are as follows:

-h      Print the help message for the memx command.
-s      Disables automatic shared memory testing.
-ofile  Save diagnostic output in file.
-ti     Run time in minutes (i). The default is to run until the process receives a CTRL-C or a kill -15 pid command.
-mj     The memory size in bytes (j) to be tested by each spawned process. Must be greater than 4095. The default is (total-memory)/20.
-pk     The number of processes to spawn (k). The default is 20. The maximum is also 20.
DESCRIPTION
The memx memory exerciser spawns processes to exercise memory by writing and reading three patterns: 1's and 0's, 0's and 1's, and a random pattern. You specify the number of processes to spawn and the size of memory to be tested by each process. If the shmx shared memory exerciser is present, it will be the first process spawned; the remaining processes are standard memory exercisers. The memx exerciser runs until the process receives a CTRL-C or a kill -15 pid command. A logfile is created in the current working directory for you to examine and then remove. If there are errors in the logfile, check the syslog file, where the driver and kernel error messages are saved.
RESTRICTIONS
The memx exerciser is restricted by the size of the available swap space. The size of the swap space and the size of internal memory available determine how many processes can run on the system. For example, if there are 16 Mbytes of swap space and 16 Mbytes of memory, all of the swap space would be used if all 20 spawned memory exercisers were running; in that event, no new processes would be able to run. On systems with large amounts of memory and small swap space, you must restrict the number of memory exercisers and/or the size of memory being tested.

There are some restrictions on running a system exerciser over an NFS link or on a diskless system. For exercisers that need to write into a file system, such as fsx(8), the target file system must be writable by root. The directory in which any of the exercisers are executed must also be writable by root, because temporary files are written into the current directory. These restrictions are sometimes difficult to overcome because NFS file systems are often mounted in a way that prevents root from writing into them. Some of the restrictions may be overcome by copying the exerciser to another directory and then executing it.

You should specify the -s option to disable automatic shared memory testing, which is not supported.
EXAMPLES
The following example tests all of memory by running 20 spawned processes until a CTRL-C or kill -15 pid command is received:

% /usr/field/memx

The following example runs 10 spawned processes, with a memory size of 500,000 bytes, for 180 minutes in the background:

% /usr/field/memx -t180 -m500000 -p10 &
SEE ALSO
Commands: cmx(8), diskx(8), fsx(8), shmx(8), tapex(8)