UNIX for Dummies Questions & Answers: comparing Huge Files - Performance is very bad
Post 302092551 by BOFH, Tuesday 10 October 2006, 03:10:50 PM
I've noticed that for simple things, I can shell script something and it works fine. When things start getting complicated or there's a performance issue, I'll break out perl (or python if you like).

I'd take what you have and see if it could be done better in perl. I'm sure it'd be a lot faster and probably easier to write.
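Something along these lines is where I'd start - a rough, untested sketch that assumes both files are plain text and that the first whitespace-separated field is the comparison key (adjust to your record layout). It loads only the smaller file into a hash and streams the big one past it, which is usually much faster than re-scanning files from a shell loop.

Code:
#!/usr/bin/perl
# Rough sketch (untested): print lines of BIG whose first field also appears in SMALL.
# Assumes plain-text files keyed on the first whitespace-separated field;
# only the smaller file is held in memory.
use strict;
use warnings;

my ($small, $big) = @ARGV;
die "usage: $0 small_file big_file\n" unless defined $big;

my %seen;
open my $fh, '<', $small or die "cannot open $small: $!";
while (<$fh>) {
    my ($key) = split ' ', $_, 2;          # first field is the comparison key
    $seen{$key} = 1 if defined $key;
}
close $fh;

open $fh, '<', $big or die "cannot open $big: $!";
while (<$fh>) {
    my ($key) = split ' ', $_, 2;
    print if defined $key && $seen{$key};  # line present in both files
}
close $fh;

Run it as perl compare.pl small_file big_file > common.txt; if you want the lines that are only in the big file instead, flip the final test to "unless $seen{$key}".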

Carl
 

9 More Discussions You Might Find Interesting

1. AIX

Bad performance when log in with putty

Hello guys! I'm a n00b in AIX and I'm stuck on a problem (my English is poor, but I hope you can understand me :P). I'm trying to connect to an AIX machine with putty, and 'using username xxx' appears after 2 sec (OK), but 'xxx@ip's password' appears after 1:15 min. After... (6 Replies)
Discussion started by: combat2k
6 Replies

2. Shell Programming and Scripting

Comparing two huge files

Hi, I have two files, file A and file B. File A is an error file and file B is a source file. In the error file, the first line is the actual error and the second line gives information about the record (client ID) that throws the error. I need to compare the first field (which doesn't start with '//') of... (11 Replies)
Discussion started by: kmkbuddy_1983
11 Replies

3. HP-UX

Bad performance but Low CPU loading?

There might be some problem with my server, because every morning at 7 its performance becomes bad with no extra DB deadlock, but I just couldn't figure it out. Please give me some advice, thanks a lot... According to the CPU performance chart, daily CPU loading Maximum: 42%, Average: 36%. ... (8 Replies)
Discussion started by: GreenShery
8 Replies

4. Shell Programming and Scripting

Comparing two huge files on field basis.

Hi all, I have two large files and I want a field-by-field comparison for each record in them. All fields are tab separated. file1: Email SELVAKUMAR RAMACHANDRAN Email SHILPA SAHU Web NIYATI SONI Web NIYATI SONI Email VIINII DOSHI Web RAJNISH KUMAR Web ... (4 Replies)
Discussion started by: Suman Singh
4 Replies

5. Shell Programming and Scripting

Comparing 2 huge text files

I have these 2 files: k5login sanwar@systems.nyfix.com jjamnik@systems.nyfix.com nisha@SYSTEMS.NYFIX.COM rdpena@SYSTEMS.NYFIX.COM service/backups-ora@SYSTEMS.NYFIX.COM ivanr@SYSTEMS.NYFIX.COM nasapova@SYSTEMS.NYFIX.COM tpulay@SYSTEMS.NYFIX.COM rsueno@SYSTEMS.NYFIX.COM... (11 Replies)
Discussion started by: linuxgeek
11 Replies

6. Solaris

Performance (iops) becomes bad, what is the reason?

I have written a virtual HBA driver named "xmp_vhba". A SCSI disk is attached to it, as shown below: xmp_vhba, instance #0; disk, instance #11. But the performance becomes very bad when we read/write the SCSI disk using vdbench (a read/write I/O tool). What is the reason? ... (7 Replies)
Discussion started by: ForgetChen
7 Replies

7. HP-UX

Performance issue with 'grep' command for huge file size

I have 2 files; one file (say, details.txt) contains the details of employees and another file (say, emp.txt) has some selected employee names. I am extracting employee details from details.txt by using emp.txt and the corresponding code is: while read line do emp_name=`echo $line` grep -e... (7 Replies)
Discussion started by: arb_1984
7 Replies

8. Shell Programming and Scripting

Perl: Need help comparing huge files

What do I need to do to have the below perl program load 205-million-record files into the hash? It currently works on smaller files, but not on huge files. Any idea what I need to modify to make it work with huge files: #!/usr/bin/perl $ot1=$ARGV; $ot2=$ARGV; open(mfileot1,... (12 Replies)
Discussion started by: mrn6430
12 Replies

9. UNIX for Advanced & Expert Users

Performance problem with removing duplicates in a huge file (50+ GB)

I'm trying to remove duplicate data from an unsorted input file of size >50GB and write the unique records to a new file. I have already tried a variety of options posted in similar threads/forums, but no luck so far. Any suggestions please? Thanks!! (9 Replies)
Discussion started by: Kannan K
9 Replies