Shell Programming and Scripting: Fetching record based on Uniq Key from huge file
Post 302717423 by lathigara, 10-18-2012, 06:14 AM
Hi itkamaraj, thanks for the update.

Somehow I am getting an error when I run your command:
awk: syntax error near line 1
awk: bailing out near line 1

I am not sure why this error is occurring.

Thanks,
Prashant
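For anyone hitting the same thing: "syntax error near line 1 ... bailing out near line 1" is the typical complaint of an old awk (the pre-nawk awk still shipped as /usr/bin/awk on Solaris, for example) when it is handed a one-liner written for nawk or gawk. The exact command from itkamaraj is not quoted in this post, so the lines below are only a minimal sketch under that assumption; the file name hugefile.txt and the choice of field 1 as the unique key are placeholders.

Code:
    # The same one-liner that plain awk rejects will often run unchanged
    # under nawk, /usr/xpg4/bin/awk or gawk.
    # Illustrative dedup: keep the first record seen for each unique key
    # (field 1 of a pipe-delimited file).
    nawk -F'|' '!seen[$1]++' hugefile.txt > unique_records.txt

If neither nawk nor gawk is available, the exact one-liner and the platform would be needed to say which construct the old awk is rejecting.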
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Parsing out records from one huge record

Hi, I have one huge record and know that each record in the file is 550 bytes long. How do I parse out individual records from the single huge record? Thanks. (4 Replies)
Discussion started by: bwrynz1
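If, as described above, every logical record is exactly 550 bytes and the input arrives as one unbroken chunk, a sketch along these lines would cut it into one line per record (the file names are placeholders):

Code:
    # Insert a newline after every 550 bytes, so each output line is one record.
    fold -b -w 550 single_huge_record.dat > individual_records.dat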

2. Shell Programming and Scripting

Compare 2 huge files wrt to a key using awk

Hi Folks, I need to compare two very huge files (i.e. the files would contain a minimum of 70k records each) using awk or sed. The comparison needs to be done with respect to a 'key'. For example:
File1
**********
1234|TONY|Y75634|20/07/2008
1235|TINA|XCVB56|30/07/2009... (13 Replies)
Discussion started by: Ranjani
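For a key-based comparison like the one described above, the usual awk idiom is to load the keys of the first file into an array and then test every record of the second file against it. A sketch, assuming pipe-delimited files with the key in field 1 and the placeholder names File1 and File2:

Code:
    # Records of File2 whose key (field 1) also appears in File1.
    awk -F'|' 'NR==FNR { keys[$1]; next } $1 in keys' File1 File2 > common.txt

    # Records of File2 whose key does not appear in File1.
    awk -F'|' 'NR==FNR { keys[$1]; next } !($1 in keys)' File1 File2 > only_in_file2.txt

At roughly 70k records per file the key array fits comfortably in memory.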

3. Shell Programming and Scripting

Logic for file fetching based on date

Dear friends, I receive the following files into an FTP location on a daily basis:
-rw-r----- 1 guest ftp1 5021 Aug 19 09:03 CHECK_TEST_Extracts_20080818210000.zip
-rw-r----- 1 guest ftp1 2437 Aug 20 05:15 CHECK_TEST_Extracts_20080819210000.zip
-rw-r----- 1 guest ... (2 Replies)
Discussion started by: sureshg_sampat
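The preview above is cut off, so the actual selection rule is not known; purely as an illustration, picking up yesterday's extract by the date stamp embedded in the file name could look like this (GNU date assumed for the -d option):

Code:
    # Hypothetical rule: process the file stamped with yesterday's date.
    stamp=$(date -d yesterday +%Y%m%d)   # GNU date; older systems need another way to compute yesterday
    for f in CHECK_TEST_Extracts_"${stamp}"*.zip; do
        [ -e "$f" ] && echo "would process $f"
    done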

4. Shell Programming and Scripting

filter the uniq record problem

Can anyone help filter the unique records for the example below? Thank you very much.
Input file
20090503011111|test|abc
20090503011112|tet1|abc|def
20090503011112|test1|bcd|def
20090503011131|abc|abc
20090503011131|bbc|bcd
20090503011152|bcd|abc
20090503011151|abc|abc... (8 Replies)
Discussion started by: bleach8578

5. Shell Programming and Scripting

Keep the last uniq record only

Hi folks, Below is the content of a file 'tmp.dat', and I want to keep one unique record per key (keyed by the first column). However, the record kept should be the last one.
302293022|2|744124889|744124889
302293022|3|744124889|744124889
302293022|4|744124889|744124889
302293022|5|744124889|744124889... (4 Replies)
Discussion started by: ChicagoBlues
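For the keep-only-the-last-record-per-key requirement above, a small sketch (pipe-delimited input, key in column 1; note that the first variant does not preserve the original line order):

Code:
    # Overwrite the saved record whenever the key repeats; what is left at
    # the end is the last record seen for every key.
    awk -F'|' '{ last[$1] = $0 } END { for (k in last) print last[k] }' tmp.dat

    # If the relative order of the surviving lines matters and tac (GNU
    # coreutils) is available: reverse, keep first occurrences, reverse back.
    tac tmp.dat | awk -F'|' '!seen[$1]++' | tac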

6. Shell Programming and Scripting

Need help splitting huge single record file

I was given a data file that I need to split into multiple lines/records based on a key word. The problem is that it is 2.5GB or bigger and everything I try in perl or sed causes a Segmentation fault. Can someone give me some other ideas. The data is of the form:... (5 Replies)
Discussion started by: leolson
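When a multi-gigabyte file is one single physical record, line-oriented tools such as sed tend to fail exactly as described above. gawk can sidestep that by using the keyword itself as the record separator, so it never holds the whole file in memory at once (each chunk between markers still has to fit). A sketch, with KEYWORD standing in for the real marker:

Code:
    # Break the single huge record into one line per KEYWORD occurrence.
    gawk 'BEGIN { RS = "KEYWORD" }
          NR == 1 { if ($0 != "") print; next }   # anything before the first marker
          { print "KEYWORD" $0 }' huge_single_record.dat > split_records.dat

Multi-character RS is a gawk (and mawk) extension, so this will not work with a plain POSIX awk.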

7. Shell Programming and Scripting

Removing Dupes from huge file- awk/perl/uniq

Hi, I have the following command in place:
nawk -F, '!a++' file > file.uniq
It has been working perfectly as per requirements, removing duplicates by taking into consideration only the first 3 fields. Recently it has started giving the below error:
bash-3.2$ nawk -F, '!a++'... (17 Replies)
Discussion started by: makn
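The command in the preview above has lost its array subscript somewhere between the original post and this page (the square brackets were evidently stripped); given the stated requirement of deduplicating on the first three comma-separated fields, the intended one-liner was presumably of this shape:

Code:
    # Keep the first line seen for each combination of fields 1-3.
    nawk -F, '!a[$1,$2,$3]++' file > file.uniq

The error message that thread goes on to discuss is truncated here, so no guess is made about its cause.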

8. UNIX for Dummies Questions & Answers

Split a huge 7 GB File Based on Pattern into 4 files

Hi, I have a huge 7 GB file which has around 1 million records. I want to split this file into 4 files containing around 250k messages each. Please help me, as the split command cannot work here since it might miss tags. The format of the file is as below:
<!--###### ###### START-->... (6 Replies)
Discussion started by: KishM
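For a marker-based split like the one above, plain split(1) cannot help because it counts lines or bytes rather than messages. A sketch that starts a new output file every 250,000 messages, assuming each message begins at a line matching the START marker shown in the (truncated) format:

Code:
    awk '
        BEGIN { out = "part_0.dat" }            # catches any preamble before the first marker
        /START-->/ {
            if (count % 250000 == 0) {          # first message of a new chunk
                if (part) close(out)
                part++
                out = "part_" part ".dat"
            }
            count++
        }
        { print > out }
    ' huge_7gb_file.dat

The marker regex and the output names are assumptions; adjust them to the real tag layout.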

9. Shell Programming and Scripting

Fetching values in CSV file based on column name

input.csv:
Field1,Field2,Field3,Field4,Field4
abc ,123 ,xyz ,000 ,pqr
mno ,123 ,dfr ,111 ,bbb

output:
Field2,Field4
123 ,000
123 ,111

How do I fetch the values of Field4 where Field2='123'? I don't want to fetch the values based on column position. Instead want to... (10 Replies)
Discussion started by: bharathbangalor
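To select columns by header name rather than by position, as asked above, the usual trick is to read the header line once, remember which column each wanted name lives in, and then print those columns for every matching row. A sketch against the sample shown (any POSIX awk should do; note the sample header lists Field4 twice, so the first occurrence is used):

Code:
    awk -F, '
        NR == 1 {
            for (i = 1; i <= NF; i++)
                if (!($i in col)) col[$i] = i           # map header name -> column, first occurrence wins
            print $(col["Field2"]) "," $(col["Field4"])
            next
        }
        {
            key = $(col["Field2"]); gsub(/ /, "", key)  # the sample data carries trailing blanks
            if (key == "123") print $(col["Field2"]) "," $(col["Field4"])
        }
    ' input.csv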

10. Shell Programming and Scripting

EBCDIC File Split Based On Record Key

I was wondering if anyone could explain to me how to split a variable length EBCDIC file into separate files based on the record key. I have the COBOL layout, and so I need to split the file into 13 different EBCDIC files so that I can run each one through a C++ converter I have, and get the... (11 Replies)
Discussion started by: hanshot1stx