Large file data handling issue
Post 302731269 by Gurkamal83 in UNIX for Dummies Questions & Answers, 11-14-2012 at 03:28 PM
Hmm, yes, this is a 600K file and, as already said, it is a single record. How can I handle the file manipulation?
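One way to make a single huge record workable (a sketch only; it assumes the pipe-delimited layout the same poster describes in the "UNIX file handling issue" thread further down, and the file names are made up) is to break the line into one record per line, edit it with ordinary line-oriented tools, and then glue it back together:

Code:
# Sketch: assumes '|' separates the logical records inside the single line.
tr '|' '\n' < bigfile.txt > records.txt      # one logical record per line
# ... edit records.txt here with sed/awk or any other line-oriented tool ...
paste -s -d'|' records.txt > bigfile.new     # rejoin the records into one line

The intermediate file costs a little disk space, but every standard line-by-line tool can then handle the data.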
 

10 More Discussions You Might Find Interesting

1. HP-UX

Need to split a large data file using a Unix script

Greetings all: I am still new to the Unix environment and I need help with the following requirement. I have a large sequential file, sorted on a field (say store#), that needs to be split into several smaller files, one for each store. That means if there are 500 stores, there will be 500 files. This... (1 Reply)
Discussion started by: SAIK
1 Reply
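For a requirement like the one in discussion 1, awk can do the whole split in a single pass. This is only a sketch; it assumes the store number is the first field of a file already sorted on that field, and the file and field names are placeholders:

Code:
# Sketch: input sorted on the store number, store number assumed to be field 1.
awk 'NR > 1 && $1 != prev { close(out) }               # finished the previous store's file
     { prev = $1; out = "store_" $1 ".dat"; print > out }
' sorted_input.dat

Closing each file as soon as its store is finished matters with 500+ stores; otherwise awk can run out of file descriptors.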

2. Shell Programming and Scripting

Performance issue in UNIX while generating .dat file from large text file

Hello gurus, we are facing a performance issue in UNIX. If someone has faced this kind of issue in the past, please share your suggestions. Problem definition: a few of the load processes of our Finance Application are facing an issue in UNIX when they use a shell script having the below... (19 Replies)
Discussion started by: KRAMA
19 Replies

3. Shell Programming and Scripting

Extract data from a large file with 80+ million records

Hello, I have one file with more than 120 million records (35 GB in size). I have to extract some relevant data from the file based on some parameters and generate another output file. What will be the best and fastest way to extract the new file? Sample file format: ... (2 Replies)
Discussion started by: learner16s
2 Replies

4. Shell Programming and Scripting

UNIX file handling - issue in reading a file

I have been automating a daily check activity for a server; I have been using SQLs to retrieve the data and a while loop to read the data from the file for several activities. But I hit a show stopper with the one below, where the data is getting stored in $temp_file but is not being read by the while... (1 Reply)
Discussion started by: KuldeepSinghTCS
1 Reply
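The script in discussion 4 isn't shown, so this is only a guess at the usual culprit: piping data into a while loop runs the loop in a subshell, which is a very common reason a loop appears to "not read" the file or to lose everything it counted. Redirecting the file into the loop avoids the subshell:

Code:
# Common anti-pattern (the loop runs in a subshell, so its results are lost):
#   cat "$temp_file" | while read line; do ...; done
#
# Safer pattern: redirect the file into the loop instead.
count=0
while IFS= read -r line; do
    count=$((count + 1))
    printf 'row %d: %s\n' "$count" "$line"
done < "$temp_file"
echo "processed $count rows"    # the count survives because no subshell was used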

5. Shell Programming and Scripting

Severe performance issue while 'grep'ing on a large volume of data

Background: the Unix flavor can be any among Solaris, AIX, HP-UX and Linux. I have the below 2 flat files. File-1 contains 50,000 rows with 2 fields in each row, separated by a pipe. The row structure is Object_Id|Object_Name, as follows: 111|XXX 222|YYY 333|ZZZ ... (6 Replies)
Discussion started by: Souvik
6 Replies
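For the situation in discussion 5, the usual advice is to stop invoking grep once per key and instead load the 50,000-row file into an awk array, then stream the large file once. A sketch only; the file names, and the assumption that the large file also carries the Object_Id in its first pipe-separated field, are mine:

Code:
# Sketch: file1 = Object_Id|Object_Name (50,000 rows),
#         file2 = the large file, first |-separated field assumed to be an Object_Id.
awk -F'|' '
    NR == FNR { name[$1] = $2; next }      # pass 1: load the small lookup file into memory
    $1 in name { print $0 "|" name[$1] }   # pass 2: constant-time lookup per row
' file1 file2 > matched.out

This turns 50,000 scans of the big file into one.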

6. Red Hat

Advice regarding filesystems handling a large number of files

Hi all, I have a CentOS operating system installed. I work with a really huge number of files, which are not only huge in number but some of which are also really huge in size. The minimum number of files could be 1 to 2 million in one directory itself. Some of the files are even several gigabytes in... (2 Replies)
Discussion started by: shoaibjameel123
2 Replies

7. Shell Programming and Scripting

UNIX file handling issue

I have a huge file; fields are semicolon (;) separated and records are pipe (|) delimited, e.g. abc;def;ghi|jkl;mno;pqr|123;456;789. I need to replace the 50th (semicolon-separated) field of each record with 9006. The 50th field can be empty, e.g. ;;. Can someone help me with the appropriate command? (3 Replies)
Discussion started by: Gurkamal83
3 Replies
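Since the records in discussion 7 sit on one physical line, one option is to let awk treat '|' as the record separator so the 50th semicolon-separated field can be replaced in a single pass. A sketch only; the file names are placeholders and it assumes every record really has at least 50 fields:

Code:
# Sketch: '|' separates records, ';' separates fields, field 50 is overwritten.
awk 'BEGIN { RS = "|"; FS = OFS = ";"; ORS = "" }
     NR > 1 { printf "|" }          # put back the record separator we consumed
     { $50 = "9006"; print }        # also works when the field is empty (;;)
' infile.dat > outfile.dat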

8. UNIX for Dummies Questions & Answers

File handling issue

Hi all, I am running into an issue. I have a very big file and want to split it into smaller chunks. This file has multiple headers/trailers, and between each header and trailer there are records. The number of records in each header/trailer combination can vary. Also, headers can start with... (3 Replies)
Discussion started by: Gurkamal83
3 Replies
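The header format in discussion 8 is cut off, so the 'HDR' marker below is purely hypothetical; the general pattern, though, is to start a new output file every time a header line is seen:

Code:
# Sketch: replace /^HDR/ with whatever really marks a header line.
awk 'BEGIN { out = "chunk_0000.dat" }                    # catches any lines before the first header
     /^HDR/ { n++; out = sprintf("chunk_%04d.dat", n) }  # new header -> new chunk
     { print > out }
' bigfile.dat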

9. Shell Programming and Scripting

Output large volume of data to CSV file

I have a program that outputs the ownership and permissions of each directory and file on the server to a CSV file. I am getting an error message when I run the program, and the program is not writing to the CSV file. Error: the file access permissions do not allow the specified action; cannot... (2 Replies)
Discussion started by: dellanicholson
2 Replies

10. Shell Programming and Scripting

Large file masking happening incorrectly: Ç delimiter issue

The OS version is Red Hat Enterprise Linux Server release 6.10. I have a script to mask some columns with **** in a data file which is delimited with Ç. I am using awk for the masking. When I try to mask a small file, the awk works fine and masks the required column, but when the file is... (6 Replies)
Discussion started by: LinuxUser8092
6 Replies
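The root cause in discussion 10 isn't visible from the excerpt, but a minimal masking sketch with a Ç delimiter looks like the following (the column number and file names are placeholders). Because Ç is a multi-byte character in UTF-8, the locale the script runs under is one of the first things worth checking when behaviour differs between small and large files:

Code:
# Sketch: masks the 3rd Ç-separated column; run under a UTF-8 locale
# (use whichever UTF-8 locale your system provides) so the delimiter is
# handled as one character.
LC_ALL=en_US.UTF-8 awk 'BEGIN { FS = OFS = "Ç" } { $3 = "****"; print }' data.txt > masked.txt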