UNIX for Dummies Questions & Answers: Large file data handling issue
Post 302731307 by Gurkamal83 on Wednesday 14th of November 2012, 07:32:05 PM
Thanks for the reply, but your command also didn't work.
With the small files I can see the record values and the records getting trimmed of the trailing '|'.
When I use the same command on my actual file, nothing comes up on screen, and even when I echo it into a new file, nothing gets populated in the new file.

Starting bytes
001;04.5.1;2012-10-25 08:47:41;ABCDE||3;1351169231;1351169261;;;1351169256;1351169256;1351169261;;;;;;;;;;;00:00:05;

Ending bytes.
255;8;1;incoming;;;;;2;0;;;0;;10;0;0;0;0;;abc.com;x1-6-00-1c-fb-2f-8e-14.XXXX.xc.xr.com;;;;;0;000000;;Singh Gurkamal;1;;;this is a very big file;;;||1000;2012-10-25 08:49:03|
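
For what it's worth, a minimal sketch of one way to stream a file like this (assuming, as in my earlier thread listed below, that records are '|'-delimited, fields within a record are ';'-separated, and the 50th field should be set to 9006; GNU awk is assumed for the RS/RT handling, and bigfile/newfile are placeholder names). Reading with '|' as the record separator processes one record at a time, so the whole file is never treated as a single huge line:

# Sketch only -- GNU awk assumed (RT holds the record terminator).
gawk 'BEGIN { RS = "|"; FS = OFS = ";" }
      NF >= 50 { $50 = "9006" }
      { printf "%s%s", $0, RT }' bigfile > newfile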
 

10 More Discussions You Might Find Interesting

1. HP-UX

Need to split a large data file using a Unix script

Greetings all: I am still new to the Unix environment and I need help with the following requirement. I have a large sequential file sorted on a field (say store#) that is to be split into several smaller files, one for each store. That means if there are 500 stores, there will be 500 files. This... (1 Reply)
Discussion started by: SAIK
1 Reply
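
For a per-store split like this, a minimal sketch (assuming the store number is the first whitespace-separated field of each record; stores.dat is a placeholder input name):

# Sketch only: write each record to a file named after its first field.
# close() after each write keeps the number of open files small, which
# matters on systems with low per-process file-descriptor limits.
awk '{ out = $1 ".dat"; print >> out; close(out) }' stores.dat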

2. Shell Programming and Scripting

Performance issue in UNIX while generating .dat file from large text file

Hello Gurus, We are facing a performance issue in UNIX. If someone has faced this kind of issue in the past, please provide your suggestions. Problem definition: a few of the load processes of our Finance Application are facing an issue in UNIX when they use a shell script having the below... (19 Replies)
Discussion started by: KRAMA
19 Replies

3. Shell Programming and Scripting

Extract data from large file 80+ million records

Hello, I have got one file with more than 120 million records (35 GB in size). I have to extract some relevant data from the file based on some parameter and generate another output file. What will be the best and fastest way to extract the new file? Sample file format:--... (2 Replies)
Discussion started by: learner16s
2 Replies
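
For an extraction like this, a minimal sketch (the delimiter, field number, matched value and file names are all placeholders, since the real format is cut off above):

# Sketch only: stream the file once and keep the rows whose 3rd field
# matches the parameter value; adjust -F to the actual delimiter.
awk -F'|' '$3 == "SOME_VALUE"' input.dat > extracted.dat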

4. Shell Programming and Scripting

UNIX File handling -Issue in reading a file

I have been automating a daily check activity for a server; I have been using SQLs to retrieve the data and a while loop for reading the data from the file for several activities. BUT I got a show stopper with the below one, where the data is getting stored in $temp_file but not being read by the while... (1 Reply)
Discussion started by: KuldeepSinghTCS
1 Reply
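
A frequent reason a while loop never sees data that is clearly in the file is feeding it through a pipe, which in most shells runs the loop in a subshell; a sketch of reading the file directly instead ($temp_file as named in the post, the loop body is a placeholder):

# Sketch only: redirect the file into the loop rather than piping into it,
# so nothing else consumes stdin and variables set in the loop persist.
while IFS= read -r line
do
    printf '%s\n' "$line"    # replace with the real per-line processing
done < "$temp_file"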

5. Shell Programming and Scripting

Severe performance issue while 'grep'ing on large volume of data

Background ------------- The Unix flavor can be any among Solaris, AIX, HP-UX and Linux. I have the below 2 flat files. File-1 ------ Contains 50,000 rows with 2 fields in each row, separated by pipe. Row structure is like Object_Id|Object_Name, as follows: 111|XXX 222|YYY 333|ZZZ ... (6 Replies)
Discussion started by: Souvik
6 Replies
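
For lookups against a 50,000-row Object_Id|Object_Name list, a single awk pass is usually far cheaper than repeated greps; a minimal sketch (file names are placeholders, and the second file's first field is assumed to be the Object_Id):

# Sketch only: load File-1 into an in-memory hash, then stream the larger
# file and print only the rows whose first field appears in that hash.
awk -F'|' 'NR == FNR { id[$1] = 1; next } $1 in id' file1 file2 > matched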

6. Red Hat

Advice regarding filesystems handling large number of files

Hi All, I have a CentOS operating system installed. I work with a really huge number of files, some of which are also really huge in size. The minimum number of files could be 1 million to 2 million in one directory itself. Some of the files are even several Gigabytes in... (2 Replies)
Discussion started by: shoaibjameel123
2 Replies

7. Shell Programming and Scripting

UNIX file handling issue

I have a huge file in which fields are semicolon (;) separated and records are pipe (|) delimited. e.g. abc;def;ghi|jkl;mno;pqr|123;456;789 I need to replace the 50th field (semicolon separated) of each record with 9006. The 50th field can have no value, e.g. ;; Can someone help me with the appropriate command? (3 Replies)
Discussion started by: Gurkamal83
3 Replies

8. UNIX for Dummies Questions & Answers

File handling issue

Hi All, I am running into an issue. I have a very big file and want to split it into smaller chunks. This file has multiple headers/trailers. Also, between each header/trailer there are records. The number of records in each header/trailer combination can vary. Also, headers can start with... (3 Replies)
Discussion started by: Gurkamal83
3 Replies
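
One sketch for this kind of chunking, assuming each block's header line can be recognised by a leading tag (HDR and bigfile are placeholders, since the real header prefix is cut off above; gawk is assumed if the chunk count is large, as it manages many open output files):

# Sketch only: start a new chunk every time a header line is seen, so each
# chunk_N file holds one complete header/records/trailer block.
awk 'BEGIN { n = 0 } /^HDR/ { n++ } { print > ("chunk_" n) }' bigfile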

9. Shell Programming and Scripting

Output large volume of data to CSV file

I have a program that outputs the ownership and permissions of each directory and file on the server to a CSV file. I am getting an error message when I run the program. The program is not outputting to the CSV file. Error: the file access permissions do not allow the specified action cannot... (2 Replies)
Discussion started by: dellanicholson
2 Replies

10. Shell Programming and Scripting

Large file masking incorrectly happening - Ç delimiter issue

The OS version is Red Hat Enterprise Linux Server release 6.10. I have a script to mask some columns with **** in a data file which is delimited with Ç. I am using awk for the masking; when I try to mask a small file the awk works fine and masks the required column, but when the file is... (6 Replies)
Discussion started by: LinuxUser8092
6 Replies
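
For masking on a Ç-delimited file, a minimal sketch (the column number, file names and locale are placeholders; the locale matters because Ç is a multi-byte character in UTF-8, and gawk is assumed):

# Sketch only: overwrite the 4th Ç-separated column with ****; forcing a
# UTF-8 locale keeps the multi-byte delimiter handling consistent.
LC_ALL=en_US.UTF-8 gawk -F'Ç' 'BEGIN { OFS = FS } { $4 = "****"; print }' data.txt > masked.txt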
AMANDA-ARCHIVE-FOR(5)					   File formats and conventions 				     AMANDA-ARCHIVE-FOR(5)

NAME
       amanda-archive-format - Format of amanda archive streams

DESCRIPTION
       The Amanda archive format is designed to be a simple, efficient means of interleaving multiple simultaneous files, allowing an arbitrary number of data streams for a file. It is a streaming format in the sense that the writer need not know the size of files until they are completely written to the archive, and the reader can process the archive in constant space.

DATA MODEL
       The data stored in an archive consists of an unlimited number of files. Each file consists of a number of "attributes", each identified by a 16-bit ID. Each attribute can contain an unlimited amount of data. Attribute IDs less than 16 (AMAR_ATTR_APP_START) are reserved for special purposes, but the remaining IDs are available for application-specific uses.

STRUCTURE
   Records
       A record can be either a header record or a data record. A header record serves as a "checkpoint" in the file, with a magic value that can be used to recognize archive files. A header record has a fixed size of 28 bytes, as follows:

           28 bytes: magic string

       The magic string is the ASCII text "AMANDA ARCHIVE FORMAT " followed by a decimal representation of the format version number (currently '1'), padded to 28 bytes with NUL bytes.

       A data record has a variable size, as follows:

           2 bytes: file number
           2 bytes: attribute ID
           4 bytes: data size (N)
           N bytes: data

       The file number and attribute ID serve to identify the data stream to which this data belongs. The low 31 bits of the data size give the number of data bytes following, while the high bit (the EOA bit) indicates the end of the attribute, as described below. Because records are generally read into memory in their entirety, the data size must not exceed 4MB (4194304 bytes). All integers are in network byte order. A header record is distinguished from a data record by the magic string. The file number 0x414d, corresponding to the characters "AM", is forbidden and must be skipped on writing.

       Attribute ID 0 (AMAR_ATTR_FILENAME) gives the filename of a file. This attribute is mandatory for each file, must be nonempty, must fit in a single record, and must precede any other attributes for the same file in the archive. The filename should be a printable string (ASCII or UTF-8), to facilitate use of generic archive-display utilities, but the format permits any nonempty bytestring. The filename cannot span multiple records.

       Attribute ID 1 (AMAR_ATTR_EOF) signals the end of a file. This attribute must contain no data, but should have the EOA bit set.

   Connection to Data Model
       Each file in an archive is assigned a file number distinct from any other active file in the archive. The first record for a file must have attribute ID 0 (AMAR_ATTR_FILENAME), indicating a filename. A file ends with an empty record with ID 1 (AMAR_ATTR_EOF). For every file at which a reader might want to begin reading, the filename record should be preceded by a header record. How often to write header records is left to the discretion of the application.

       All data records with the same file number and attribute ID are considered a part of the same attribute. The boundaries between such records are not significant to the contents of the attribute, and both readers and writers are free to alter such boundaries as necessary. The final data record for each attribute has the high bit (the EOA bit) of its data size field set. A writer must not reuse an attribute ID within a file. An attribute may be terminated by a record containing both data and an EOA bit, or by a zero-length record with its EOA bit set.
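
       As a quick illustration of the layout above, the leading bytes of an archive can be inspected from an ordinary shell (a sketch only; archive.amar is a placeholder filename, and GNU head/dd/od are assumed):

           # First 28 bytes: the header record, i.e. the magic string
           # "AMANDA ARCHIVE FORMAT 1" padded to 28 bytes with NULs.
           head -c 28 archive.amar | od -c

           # Next 8 bytes: the first data record's header -- a 2-byte file
           # number, 2-byte attribute ID and 4-byte data size, all big-endian.
           dd if=archive.amar bs=1 skip=28 count=8 2>/dev/null | od -A n -t x1
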
SEE ALSO
       amanda(8)

       The Amanda Wiki: http://wiki.zmanda.com/

AUTHOR
       Dustin J. Mitchell <dustin@zmanda.com>
           Zmanda, Inc. (http://www.zmanda.com)

Amanda 3.3.1                         02/21/2012                    AMANDA-ARCHIVE-FOR(5)