11-14-2012
Hmm, yes, this is a 600K file and, as already said, it is a single record. How can I handle the file manipulation?
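One hedged approach for a single-record file: if the 600K record is really many logical sub-records joined by a known separator, `tr` can rewrite that separator as newlines so ordinary line-oriented tools apply. The `|` separator and the file names below are invented for illustration; adjust them to the real layout.

```shell
# Hypothetical sample standing in for the real 600K single-record file;
# assume '|' separates the logical sub-records (adjust to the real layout).
printf 'rec1|rec2|rec3\n' > onerecord.dat

# Rewrite every '|' as a newline so awk/grep/split can work line by line.
tr '|' '\n' < onerecord.dat > records.dat

wc -l < records.dat   # three lines now
```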
10 More Discussions You Might Find Interesting
1. HP-UX
Greetings all:
I am still new to Unix environment and I need help with the following requirement.
I have a large sequential file sorted on a field (say store#) that is being split into several smaller files, one for each store. That means if there are 500 stores, there will be 500 files. This... (1 Reply)
Discussion started by: SAIK
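Since the file is already sorted on the store field, awk can route each line to a per-store output file in one pass. A sketch, assuming (purely for illustration) a pipe-delimited layout with the store number in field 1:

```shell
# Hypothetical input: store#|data, already sorted by store#.
printf '101|apples\n101|pears\n202|plums\n' > stores.txt

# Open a new store_<store#>.txt whenever the store changes; because the
# input is sorted, each file is opened exactly once, and close() keeps
# the open-file count low even with ~500 stores.
awk -F'|' '$1 != prev { close(out); out = "store_" $1 ".txt"; prev = $1 }
           { print > out }' stores.txt
```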
2. Shell Programming and Scripting
Hello Gurus,
We are facing a performance issue in UNIX. If someone has faced this kind of issue in the past, please share your suggestions.
Problem Definition:
A few of the load processes of our Finance application are facing issues in UNIX when they use a shell script having the below... (19 Replies)
Discussion started by: KRAMA
3. Shell Programming and Scripting
Hello,
I have one file with more than 120 million records (35 GB in size). I have to extract some relevant data from the file based on some parameter and generate another output file.
What will be the best and fastest way to extract the new file?
sample file format :--... (2 Replies)
Discussion started by: learner16s
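For pulling matching records out of a very large file, a single sequential awk (or grep) pass is usually fastest, since the file is read only once. A sketch with an invented layout where field 2 carries the selection parameter:

```shell
# Hypothetical three-line sample; the real file is ~35 GB with the same shape.
printf 'a,KEEP,1\nb,DROP,2\nc,KEEP,3\n' > big.csv

# One sequential pass: emit only records whose 2nd field matches.
awk -F',' '$2 == "KEEP"' big.csv > extracted.csv
```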
4. Shell Programming and Scripting
I have been automating a daily check activity for a server, using SQL to retrieve the data and a while loop to read the data from the file for several activities. But I hit a show-stopper with the one below, where the data is getting stored in $temp_file but is not being read by the while... (1 Reply)
Discussion started by: KuldeepSinghTCS
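A frequent cause of "the while loop never sees my data" is piping into `while`, which runs the loop body in a subshell in many shells, so its variables vanish. Redirecting the file into the loop avoids that. A minimal sketch with an invented $temp_file standing in for the SQL output:

```shell
# Hypothetical temp file standing in for the real SQL output.
temp_file=/tmp/checks.$$
printf 'row1\nrow2\n' > "$temp_file"

count=0
# Redirect the file into the loop instead of piping (cat file | while ...),
# so the loop runs in the current shell and variables like count survive.
while IFS= read -r line; do
    count=$((count + 1))
done < "$temp_file"

echo "$count"      # prints 2
rm -f "$temp_file"
```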
5. Shell Programming and Scripting
Background
-------------
The Unix flavor can be any of Solaris, AIX, HP-UX, or Linux. I have the 2 flat files below.
File-1
------
Contains 50,000 rows with 2 fields in each row, separated by pipe.
Row structure is like Object_Id|Object_Name, as following:
111|XXX
222|YYY
333|ZZZ
... (6 Replies)
Discussion started by: Souvik
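For matching 50,000 Object_Id|Object_Name rows against a second file, an awk lookup table is a portable sketch that works on all four flavors. File-2's layout is truncated above, so assume here, purely for illustration, that it also carries Object_Id in field 1:

```shell
# File-1: Object_Id|Object_Name (as described above).
printf '111|XXX\n222|YYY\n333|ZZZ\n' > file1.txt
# Hypothetical File-2: Object_Id|other data (the real layout was truncated).
printf '222|data-b\n333|data-c\n' > file2.txt

# First pass (NR==FNR) loads File-1 into an array keyed by Object_Id;
# second pass annotates each File-2 row with the matching Object_Name.
awk -F'|' 'NR==FNR { name[$1] = $2; next }
           $1 in name { print $0 "|" name[$1] }' file1.txt file2.txt
```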
6. Red Hat
Hi All,
I have a CentOS operating system installed. I work with a really huge number of files, which are not only huge in number but, in some cases, really huge in size. The minimum number of files could be 1 to 2 million in one directory itself. Some of the files are even several gigabytes in... (2 Replies)
Discussion started by: shoaibjameel123
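With one to two million entries in a single directory, a plain `ls` (which sorts its output) can be painfully slow, while `find` streams entries without sorting and `xargs` batches any per-file work instead of forking once per file. A small sketch against an invented stand-in directory:

```shell
# Hypothetical directory standing in for the 1-2 million file case.
mkdir -p demo_logs
touch demo_logs/a.log demo_logs/b.log demo_logs/c.log

# Count files without the sorting overhead of ls.
find demo_logs -type f | wc -l

# Batch per-file work (here: byte counts) instead of one fork per file.
find demo_logs -type f -name '*.log' -print | xargs wc -c > sizes.txt
```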
7. Shell Programming and Scripting
I have a huge file in which fields are semicolon (;) separated and records are pipe (|) delimited.
e.g
abc;def;ghi|jkl;mno;pqr|123;456;789
I need to replace the 50th field (semicolon-separated) of each record with 9006. The 50th field can be empty, e.g. ;;.
Can someone help me with the appropriate command? (3 Replies)
Discussion started by: Gurkamal83
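One way to sketch this in awk: split each pipe-delimited record into its semicolon fields, overwrite field n, and reassemble. The demo below shrinks the problem to 3 fields per record (set n=50 for the real file); the sample data is invented.

```shell
# Tiny stand-in: 3 semicolon fields per record instead of 50+;
# the middle record shows an empty target field (;;).
printf 'a;b;c|d;e;;|1;2;3\n' > data.txt

# For each pipe-delimited record, split on ';', replace field n with
# 9006 (empty or not), and rebuild the line.
awk -v n=3 -F'|' '{
    out = ""
    for (i = 1; i <= NF; i++) {
        m = split($i, f, ";")
        f[n] = "9006"
        rec = f[1]
        for (j = 2; j <= m; j++) rec = rec ";" f[j]
        out = out (i > 1 ? "|" : "") rec
    }
    print out
}' data.txt
```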
8. UNIX for Dummies Questions & Answers
Hi All,
I am running into an issue. I have a very big file and want to split it into smaller chunks. This file has multiple headers/trailers, and between each header/trailer there are records. The number of records in each header/trailer combination can vary. Also, headers can start with... (3 Replies)
Discussion started by: Gurkamal83
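Splitting on header lines is a job awk handles well: start a new output file every time a header pattern matches, so chunk sizes can vary freely. A sketch assuming (purely for illustration, since the real header format is truncated above) that headers start with "HDR":

```shell
# Hypothetical input: two header/records/trailer groups of different sizes.
printf 'HDR1\nrec\nrec\nTRL\nHDR2\nrec\nTRL\n' > multi.txt

# Bump the chunk number at every header line; every line (header,
# records, trailer) then goes into the current chunk_N.txt.
awk '/^HDR/ { n++ } { print > ("chunk_" n ".txt") }' multi.txt
```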
9. Shell Programming and Scripting
I have a program that outputs the ownership and permissions on each directory and file on the server to a csv file. I am getting an error message
when I run the program. The program is not outputting to the csv file.
Error:
the file access permissions do not allow the specified action
cannot... (2 Replies)
Discussion started by: dellanicholson
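That error usually means the script's effective user lacks write permission on the csv file, or on a directory in its path. A hedged pre-flight check, with invented paths (the real output location isn't shown above):

```shell
out=/tmp/perm_demo/report.csv   # hypothetical output path

mkdir -p "$(dirname "$out")"

# For a file that does not exist yet, what matters is whether the
# containing directory is writable; [ -w ] tests exactly that.
if [ -w "$(dirname "$out")" ]; then
    echo "path,owner,perms" > "$out"
else
    echo "cannot write to $(dirname "$out")" >&2
fi
```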
10. Shell Programming and Scripting
The OS version is
Red Hat Enterprise Linux Server release 6.10
I have a script to mask some columns with **** in a data file which is delimited with Ç.
I am using awk for the masking. When I try to mask a small file, awk works fine and masks the required column,
but when the file is... (6 Replies)
Discussion started by: LinuxUser8092
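One thing that bites on larger files with a non-ASCII delimiter like Ç is locale-dependent multibyte handling in some awk builds; forcing the C locale tends to behave consistently. A sketch with invented columns, masking field 2 (adjust the field number to the real file):

```shell
# Hypothetical sample using the Ç delimiter described above.
printf 'nameÇsecretÇcity\nbobÇpin123Çparis\n' > masked_in.txt

# Mask column 2 with ****; OFS keeps the Ç delimiter on output, and
# LC_ALL=C sidesteps multibyte surprises in some awk builds.
LC_ALL=C awk -F'Ç' -v OFS='Ç' '{ $2 = "****"; print }' masked_in.txt > masked_out.txt
```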
LEARN ABOUT DEBIAN
MARC::File::MicroLIF(3pm) User Contributed Perl Documentation MARC::File::MicroLIF(3pm)
NAME
MARC::File::MicroLIF - MicroLIF-specific file handling
SYNOPSIS
use MARC::File::MicroLIF;
my $file = MARC::File::MicroLIF->in( $filename );
while ( my $marc = $file->next() ) {
# Do something
}
$file->close();
undef $file;
EXPORT
None.
The buffer must be large enough to handle any valid record because we don't check for cases like a CR/LF pair or an end-of-record/CR/LF
trio being only partially in the buffer.
The max valid record is the max MARC record size (99999) plus one or two characters per tag (CR, LF, or CR/LF). It's hard to say what the
max number of tags is, so here we use 6000. (6000 tags can be squeezed into a MARC record only if every tag has only one subfield
containing a maximum of one character, or if data from multiple tags overlaps in the MARC record body. We're pretty safe.)
METHODS
in()
Opens a MicroLIF file for reading.
Gets the next chunk of data. If $want_line is true then you get the next chunk ending with any combination of \x0d and \x0a of any length.
If it is false or not passed then you get the next chunk ending with \x60 followed by any combination of \x0d and \x0a of any length.
All trailing \x0d and \x0a are stripped.
header()
If the MicroLIF file has a file header then the header is returned. If the file has no header or the file has not yet been opened then
"undef" is returned.
decode()
Decodes a MicroLIF record and returns a USMARC record.
Can be called in one of three different ways:
$object->decode( $lif )
MARC::File::MicroLIF->decode( $lif )
MARC::File::MicroLIF::decode( $lif )
TODO
RELATED MODULES
MARC::File
LICENSE
This code may be distributed under the same terms as Perl itself.
Please note that these modules are not products of or supported by the employers of the various contributors to the code.
AUTHOR
Andy Lester, "<andy@petdance.com>"
perl v5.10.1 2010-03-29 MARC::File::MicroLIF(3pm)