11-14-2012
Is this "file" actually a "stream" with no record terminators?
10 More Discussions You Might Find Interesting
1. HP-UX
Greetings all:
I am still new to the Unix environment and I need help with the following requirement.
I have a large sequential file sorted on a field (say store#) that is being split into several smaller files, one for each store. That means if there are 500 stores, there will be 500 files. This... (1 Reply)
Discussion started by: SAIK
2. Shell Programming and Scripting
Hello Gurus,
We are facing a performance issue in UNIX. If someone has faced this kind of issue in the past, please provide your suggestions.
Problem Definition:
A few of the load processes of our Finance Application are facing issues in UNIX when they use a shell script having the below... (19 Replies)
Discussion started by: KRAMA
3. Shell Programming and Scripting
Hello,
I have got one file with more than 120 million records (35 GB in size). I have to extract some relevant data from the file based on some parameters and generate another output file.
What will be the best and fastest way to extract the new file?
sample file format :--... (2 Replies)
Discussion started by: learner16s
4. Shell Programming and Scripting
I have been automating a daily check activity for a server, using SQL to retrieve the data and a while loop to read the data from the file for several activities. But I got a show stopper with the below one, where the data is getting stored in $temp_file but not being read by while... (1 Reply)
Discussion started by: KuldeepSinghTCS
5. Shell Programming and Scripting
Background
-------------
The Unix flavor can be any among Solaris, AIX, HP-UX and Linux. I have the below 2 flat files.
File-1
------
Contains 50,000 rows with 2 fields in each row, separated by pipe.
Row structure is like Object_Id|Object_Name, as following:
111|XXX
222|YYY
333|ZZZ
... (6 Replies)
Discussion started by: Souvik
6. Red Hat
Hi All,
I have a CentOS operating system installed. I work with a really huge number of files, which are not only huge in number but some of which are really huge in size. The minimum number of files could be 1 million to 2 million in one directory itself. Some of the files are even several gigabytes in... (2 Replies)
Discussion started by: shoaibjameel123
7. Shell Programming and Scripting
I have a huge file with semicolon (;) separated fields; records are pipe (|) delimited.
e.g
abc;def;ghi|jkl;mno;pqr|123;456;789
I need to replace the 50th field (semicolon separated) of each record with 9006. The 50th field can have no value, e.g. ;;
Can someone help me with the appropriate command? (3 Replies)
Discussion started by: Gurkamal83
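A short Perl sketch of one way to approach this, since the forum teaser is truncated before an answer appears. The `replace_field` helper below is illustrative (not from the thread), and the demo replaces the 3rd field so it works on the sample record shown in the post; the thread itself asks for field 50.

```perl
use strict;
use warnings;

# Replace the Nth semicolon-separated field of every
# pipe-delimited record on a line with a new value.
sub replace_field {
    my ( $line, $field_no, $new_val ) = @_;
    my @records = split /\|/, $line, -1;
    for my $rec (@records) {
        # LIMIT of -1 keeps trailing empty fields, so ";;" stays 3 fields
        my @fields = split /;/, $rec, -1;
        $fields[ $field_no - 1 ] = $new_val if @fields >= $field_no;
        $rec = join ';', @fields;
    }
    return join '|', @records;
}

# e.g. replace the 3rd field of each record in the sample line
print replace_field( "abc;def;ghi|jkl;mno;pqr|123;456;789", 3, 9006 ), "\n";
```

For the real file, the same helper would be called with field number 50 inside a `while (<>)` loop over the input.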
8. UNIX for Dummies Questions & Answers
Hi All,
I am running into an issue. I have a very big file and want to split it into smaller chunks. This file has multiple headers/trailers. Also, between each header/trailer there are records. The number of records in each header/trailer combination can vary. Also, headers can start with... (3 Replies)
Discussion started by: Gurkamal83
9. Shell Programming and Scripting
I have a program that outputs the ownership and permissions of each directory and file on the server to a csv file. I am getting an error message
when I run the program. The program is not outputting to the csv file.
Error:
the file access permissions do not allow the specified action
cannot... (2 Replies)
Discussion started by: dellanicholson
10. Shell Programming and Scripting
The OS version is
Red Hat Enterprise Linux Server release 6.10
I have a script to mask some columns with **** in a data file which is delimited with Ç.
I am using awk for the masking; when I try to mask a small file, the awk works fine and masks the required column,
but when the file is... (6 Replies)
Discussion started by: LinuxUser8092
LEARN ABOUT DEBIAN
lire::dlfstream
DlfStream(3pm) LogReport's Lire Documentation DlfStream(3pm)
NAME
Lire::DlfStream - Interface to DLF data stream
SYNOPSIS
use Lire::DlfStore;
my $store = Lire::DlfStore->open( "mystore" );
my $dlf_stream = $store->open_dlf_stream( "www", "r" );
print "Data begins on ", scalar localtime $dlf_stream->start_time(), "\n";
print "Data ends on ", scalar localtime $dlf_stream->end_time(), "\n";
while ( my $dlf = $dlf_stream->read_dlf() ) {
...
}
DESCRIPTION
This object encapsulates a DLF stream.
name()
Returns the name of the DlfStream's schema.
mode()
Returns the mode in which the DlfStream was opened.
sort_spec()
Returns the sort specification that is set on the stream.
close()
This method should be called when the Lire::DlfStream isn't needed anymore; otherwise, the same DlfStream cannot be opened again until this one is closed.
nrecords()
Returns the number of DLF records in the stream.
start_time()
Returns the timestamp of the oldest DLF record in the stream in seconds since the epoch.
end_time()
Returns the timestamp of the newest DLF record in the stream in seconds since the epoch.
read_dlf()
Returns a hash reference containing the next DLF record in the stream. It returns undef once the end of the stream is reached.
This method will throw an exception if the DlfStream isn't open in 'r' mode or if there is an error reading the DLF record.
read_dlf_aref()
Returns the next DLF record in the stream as an array reference. The fields are in the order specified by the schema.
This method will throw an exception if the DlfStream isn't open in 'r' mode or if there is an error reading the DLF record.
write_dlf( $dlf, [ $link_ids ] )
Writes the fields contained in the hash ref $dlf to the DLF stream.
This method will throw an exception if there is an error writing the DLF record or if the stream isn't opened in 'w' mode.
The $link_ids parameter is used when the stream's schema is a Lire::DerivedSchema. It should be an array reference containing the DLF ids
of the records which are linked to this record.
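The SYNOPSIS above shows the read side; a minimal write-side sketch follows, assuming a store named "mystore" and a stream schema named "www" already exist (both names are reused from the SYNOPSIS as placeholders, and the field name below is hypothetical — real keys must match the stream schema's field names). This is not runnable without the Lire library installed.

```perl
use Lire::DlfStore;

my $store  = Lire::DlfStore->open( "mystore" );
my $stream = $store->open_dlf_stream( "www", "w" );   # 'w' mode for writing

# Keys of the hash ref must match the schema's field names.
my $dlf = { some_field => "value" };   # hypothetical field name
$stream->write_dlf( $dlf );

$stream->close();   # release the stream so it can be reopened later
```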
clean( [ $time ] )
This method will remove all DLF records older than $time. If $time is omitted, all DLF records will be removed.
Lire 2.1.1 2006-07-23 DlfStream(3pm)