UNIX for Dummies Questions & Answers: Large file data handling issue
Post 302731307 by Gurkamal83 on 11-14-2012 at 07:32 PM
Thanks for the reply, but your command also didn't work.
With the small files I can see the record values, and the record gets trimmed of the trailing '|'.
When I use the same command on my actual file, nothing comes up on screen, and even if I redirect the output into a new file, nothing gets populated there.

Starting bytes
001;04.5.1;2012-10-25 08:47:41;ABCDE||3;1351169231;1351169261;;;1351169256;1351169256;1351169261;;;;;;;;;;;00:00:05;

Ending bytes.
255;8;1;incoming;;;;;2;0;;;0;;10;0;0;0;0;;abc.com;x1-6-00-1c-fb-2f-8e-14.XXXX.xc.xr.com;;;;;0;000000;;Singh Gurkamal;1;;;this is a very big file;;;||1000;2012-10-25 08:49:03|
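
For reference, here is a minimal sketch of the kind of command being discussed, based on the problem as stated in discussion 7 in the list below (replace the 50th semicolon-separated field of every pipe-delimited record with 9006). The file names are placeholders and the sketch is untested against the real data. Setting RS to "|" makes awk consume one pipe-delimited record at a time rather than the whole file as a single enormous line, which is a plausible reason a command that works on small samples silently fails on the real file:

    # Sketch only: treat each pipe-delimited record as its own awk
    # "line" (RS = ORS = "|") and each semicolon-separated value as
    # a field (FS = OFS = ";"), then overwrite field 50 where present.
    awk 'BEGIN { RS = ORS = "|"; FS = OFS = ";" }
         NF >= 50 { $50 = "9006" }
         { print }' bigfile > newfile

One caveat: ORS re-appends a '|' after the final record as well, so the output may still need its trailing delimiter trimmed.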

10 More Discussions You Might Find Interesting

1. HP-UX

Need to split a large data file using a Unix script

Greetings all: I am still new to the Unix environment and I need help with the following requirement. I have a large sequential file, sorted on a field (say store#), that is to be split into several smaller files, one for each store. That means if there are 500 stores, there will be 500 files. This... (1 Reply)
Discussion started by: SAIK

2. Shell Programming and Scripting

Performance issue in UNIX while generating .dat file from large text file

Hello Gurus, We are facing a performance issue in UNIX. If someone has faced this kind of issue in the past, please share your suggestions. Problem definition: a few of the load processes of our Finance application run into the issue in UNIX when they use a shell script having the below... (19 Replies)
Discussion started by: KRAMA

3. Shell Programming and Scripting

Extract data from large file 80+ million records

Hello, I have one file with more than 120 million records (35 GB in size). I have to extract some relevant data from the file based on some parameters and generate another output file. What will be the best and fastest way to extract the new file? Sample file format:... (2 Replies)
Discussion started by: learner16s

4. Shell Programming and Scripting

UNIX File handling - Issue in reading a file

I have been automating a daily check activity for a server; I have been using SQLs to retrieve the data and a while loop for reading the data from the file for several activities. But I hit a showstopper with the one below, where the data is getting stored in $temp_file but is not being read by the while... (1 Reply)
Discussion started by: KuldeepSinghTCS

5. Shell Programming and Scripting

Severe performance issue while 'grep'ing on large volume of data

Background: the Unix flavor can be any among Solaris, AIX, HP-UX and Linux. I have the below 2 flat files. File-1 contains 50,000 rows with 2 fields in each row, separated by a pipe. The row structure is Object_Id|Object_Name, as follows: 111|XXX 222|YYY 333|ZZZ ... (6 Replies)
Discussion started by: Souvik

6. Red Hat

Advice regarding filesystems handling large number of files

Hi all, I have a CentOS operating system installed. I work with a really huge number of files, which are not only huge in number but some of them really huge in size. The minimum number of files could be 1 to 2 million in one directory itself. Some of the files are even several gigabytes in... (2 Replies)
Discussion started by: shoaibjameel123

7. Shell Programming and Scripting

UNIX file handling issue

I have a huge file with semicolon(;)-separated fields inside pipe(|)-delimited records, e.g. abc;def;ghi|jkl;mno;pqr|123;456;789. I need to replace the 50th (semicolon-separated) field of each record with 9006. The 50th field can be empty, e.g. ;;. Can someone help me with the appropriate command? (3 Replies)
Discussion started by: Gurkamal83

8. UNIX for Dummies Questions & Answers

File handling issue

Hi all, I am running into an issue. I have a very big file and want to split it into smaller chunks. This file has multiple headers/trailers, and between each header/trailer there are records. The number of records in each header/trailer combination can vary. Also, headers can start with... (3 Replies)
Discussion started by: Gurkamal83

9. Shell Programming and Scripting

Output large volume of data to CSV file

I have a program that outputs the ownership and permissions of each directory and file on the server to a CSV file. I am getting an error message when I run the program, and the program is not writing to the CSV file. Error: the file access permissions do not allow the specified action cannot... (2 Replies)
Discussion started by: dellanicholson

10. Shell Programming and Scripting

Large File masking incorrectly happening Ç delimiter issue

The OS version is Red Hat Enterprise Linux Server release 6.10. I have a script to mask some columns with **** in a data file which is delimited with Ç. I am using awk for the masking. When I try to mask a small file, the awk works fine and masks the required column, but when the file is... (6 Replies)
Discussion started by: LinuxUser8092
srec_signetics(5)						File Formats Manual						 srec_signetics(5)

NAME
    srec_signetics - Signetics file format

DESCRIPTION
    The Signetics file format is not often used.  The major disadvantage in
    modern applications is that the addressing range is limited to only
    64kb.

  Records
    All data lines are called records, and each record contains the
    following 5 fields:

        +---+------+----+----+----+----+
        | : | aaaa | cc | as | dd | ss |
        +---+------+----+----+----+----+

    The fields are defined as follows:

    :     Every record starts with this identifier.

    aaaa  The address field.  A four digit (2 byte) number representing
          the first address to be used by this record.

    cc    The byte count.  A two digit value (1 byte), counting the
          actual data bytes in the record.

    as    Address checksum.  Covers the 2 address bytes and the byte
          count.

    dd    The actual data of this record.  There can be 1 to 255 data
          bytes per record (see cc).

    ss    Data checksum.  Covers all the data bytes of this record.

  Record Begin
    Every record begins with a colon ":" character.  Records contain only
    ASCII characters.  No spaces or tabs are allowed in a record.  In
    fact, apart from the 1st colon, no characters other than 0..9 and
    A..F are allowed in a record.  Interpretation of a record should be
    case-insensitive; it does not matter if you use a..f or A..F.
    Unfortunately the colon was chosen for the Signetics file format,
    similar to the Intel format (see srec_intel(5) for more information).
    However, SRecord is able to automatically detect the difference
    between the two formats when you use the -Guess format specifier.

  Address Field
    This is the address where the first data byte of the record should be
    stored.  After storing that data byte, the address is incremented by
    1 to point to the address for the next data byte of the record, and
    so on, until all data bytes are stored.  The address is represented
    by a 4 digit hex number (2 bytes), with the MSD first.  The order of
    addresses in the records of a file is not important.  The file may
    also contain address gaps, to skip a portion of unused memory.

  Byte Count
    The byte count cc counts the actual data bytes in the current record.
    Usually records have 32 data bytes, but any number between 1 and 255
    is possible.  A value of 0x00 for cc indicates the end of the file.
    In this case not even the address checksum will follow!  The record
    (and file) are terminated immediately.  It is not recommended to send
    too many data bytes in a record, because that may increase the
    transmission time in case of errors.  Also avoid sending only a few
    data bytes per record, because the address overhead will be too heavy
    in comparison to the payload.

  Address Checksum
    This is not really a checksum anymore; it looks more like a CRC.  The
    checksum can not only detect errors in the values of the bytes, but
    can also detect bytes that are out of order.  The checksum is
    calculated by this algorithm:

        checksum = 0
        for i = 1 to 3
            checksum = checksum XOR byte
            ROL checksum
        next i

    For the Address Checksum we only need the 2 address bytes and the 1
    byte count byte to be added; that's why we count to 3 in the loop.
    Every byte is XORed with the previous result.  Then the intermediate
    result is rotated left (the carry rolls back into b0).  This results
    in a very reliable checksum, and that for only 3 bytes!

    The last record of the file does not contain any checksums!  So the
    file ends right after the byte count of 0.

  Data Field
    The payload of the record is formed by the Data field.  The number of
    data bytes expected is given by the Byte Count field.  The last
    record of the file may not contain a Data field.

  Data Checksum
    This checksum uses the same algorithm as used for the Address
    Checksum.  This time we calculate the checksum with only the data
    bytes of this record:

        checksum = 0
        for i = 1 to cc
            checksum = checksum XOR byte
            ROL checksum
        next i

    Note that we count to the byte count cc this time.

  Size Multiplier
    In general, binary data will expand in size by approximately 2.4
    times when represented with this format.  (For a typical 32-byte
    record, the 64 data characters are accompanied by 11 characters of
    overhead: the colon, 4 address digits, 2 byte-count digits and 2
    digits for each of the two checksums, plus the line terminator,
    giving roughly 76/32 = 2.4.)
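    The checksum algorithm above is easy to implement.  The following is
    a small illustrative sketch in gawk (gawk is assumed because of its
    and/xor/lshift/rshift bit functions, which are gawk extensions, not
    POSIX awk; the names rol8 and sig_checksum are made up for this
    example and are not part of SRecord):

        # Rotate an 8-bit value left by one bit; the carry (bit 7)
        # rolls back into bit 0.
        function rol8(x) {
            return and(lshift(x, 1), 255) + rshift(and(x, 255), 7)
        }

        # Signetics checksum over the n bytes stored in b[1..n]:
        # XOR each byte into the running value, then rotate left.
        function sig_checksum(b, n,    i, cs) {
            cs = 0
            for (i = 1; i <= n; i++)
                cs = rol8(xor(cs, b[i]))
            return cs
        }

        # Address checksum of the first record in the EXAMPLE section
        # below: address bytes 0xB0 0x00 plus byte count 0x10.
        BEGIN {
            b[1] = 176; b[2] = 0; b[3] = 16       # 0xB0, 0x00, 0x10
            printf "%02X\n", sig_checksum(b, 3)   # prints: A5
        }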
EXAMPLE
    Here is an example Signetics file:

        :B00010A5576F77212044696420796F75207265617B
        :B01010E56C6C7920676F207468726F756768206136
        :B02010256C6C20746861742074726F75626C652068
        :B0300D5F746F207265616420746869733FD1
        :B03D00

    In the example above you can see a piece of code in Signetics format.
    The first 3 lines have 16 bytes of data each, as can be seen from the
    byte count.  The 4th line has only 13 bytes, because the program ends
    there.  Notice that the last record of the file contains no data
    bytes, and not even an Address Checksum.

SEE ALSO
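    As a sanity check of the sketch above: the address bytes and byte
    count of the first record (0xB0, 0x00, 0x10) produce A5, and its 16
    data bytes produce 7B, matching the two checksums embedded in that
    record.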
    http://sbprojects.fol.nl/knowledge/fileformats/signetics.htm

AUTHOR
    This man page was taken from the above Web page.  It was written by
    San Bergmans <sanmail@bigfoot.com>.

Reference Manual                    SRecord                  srec_signetics(5)