Performance issue in Grepping large files
Posted by millan on 06-11-2013 at 12:34 PM
Yes RudiC, you have pointed these out correctly.

I can see that these are the problems in this code, but I am not able to improve the code to avoid them.

Can you please suggest what kind of code changes I could make?
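The script under discussion is not quoted in this post, so the following is only a sketch of the change RudiC's comments usually point to: instead of invoking grep once per search key inside a shell loop, load the keys into an awk array and make a single pass over the large file. The file names and the pipe delimiter are assumptions, not taken from the thread.

    # keys.txt : one key per line          (hypothetical name)
    # data.txt : the large file being read (hypothetical name)

    # Slow pattern - one grep process and one full file scan per key:
    #   while read k; do grep "$k" data.txt; done < keys.txt

    # Faster - one process, one pass, constant-time hash lookups:
    awk -F'|' '
        NR == FNR { keys[$1]; next }   # first file: remember every key
        $1 in keys                     # second file: print rows whose first field is a key
    ' keys.txt data.txt > matched.txt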
 

9 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Unix File System performance with large directories

Hi, how does the Unix File System perform with large directories (containing ~30,000 files)? What kind of structure is used to organize a directory's contents: linear lists, (binary) trees? I hope the description 'Unix File System' is exact enough; I don't know more about the file... (3 Replies)
Discussion started by: dive

2. Shell Programming and Scripting

Grepping issue..

I found another problem with my disk-adding script today. When looking for disks, I use grep. When I grep for the following disk size: 5242880 I also pick up these: 524288000 How do I specifically pick out one or the other, using grep, without resorting to the -v option? ... (9 Replies)
Discussion started by: LinuxRacr
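A sketch of the usual answer, assuming the sizes appear as whole words in the disk listing (disklist.txt is a hypothetical file name): anchor the number so 5242880 can no longer match inside 524288000.

    # -w matches whole words only, so 524288000 is not picked up:
    grep -w '5242880' disklist.txt

    # Explicit anchoring, for greps without -w:
    grep -E '(^|[^0-9])5242880([^0-9]|$)' disklist.txt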

3. Shell Programming and Scripting

Performance issue in UNIX while generating .dat file from large text file

Hello Gurus, We are facing a performance issue in UNIX. If someone has faced this kind of issue in the past, please provide your suggestions. Problem definition: a few of the load processes of our Finance Application are facing an issue in UNIX when they use a shell script having the below... (19 Replies)
Discussion started by: KRAMA

4. Shell Programming and Scripting

replace issue with large files

I have the following problem: I have two files: S containing sentences (one in each row) and W containing words (one in each row). It might look like this: S: a b c apple d. e f orange g. h banana i j. W: orange banana apple My task is to replace in S all words that appear in W... (2 Replies)
Discussion started by: tootles564
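The teaser is cut off before it says what the matched words become, so the sketch below substitutes a placeholder token XXX; W and S are the file names from the post. Words glued to punctuation would need extra handling.

    awk '
        NR == FNR { w[$1]; next }         # W: one word per line
        {
            for (i = 1; i <= NF; i++)
                if ($i in w) $i = "XXX"   # hypothetical replacement token
            print
        }
    ' W S > S.new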

5. Shell Programming and Scripting

Severe performance issue while 'grep'ing on large volume of data

Background ------------- The Unix flavor can be any amongst Solaris, AIX, HP-UX and Linux. I have the below 2 flat files. File-1 ------ Contains 50,000 rows with 2 fields in each row, separated by pipe. Row structure is like Object_Id|Object_Name, as follows: 111|XXX 222|YYY 333|ZZZ ... (6 Replies)
Discussion started by: Souvik
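The excerpt stops before File-2 is described, but with 50,000 keys the standard cure on any of those platforms is to avoid running grep once per key and make one pass instead. A sketch, using File-1/File-2 as named in the post:

    # One awk pass, keyed on the first pipe-delimited field of File-1:
    awk -F'|' 'NR == FNR { id[$1]; next } $1 in id' File-1 File-2

    # Or feed the ids to grep as fixed strings (GNU grep accepts -f -):
    cut -d'|' -f1 File-1 | grep -F -f - File-2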

6. Red Hat

Empty directory, large size and performance

Hi, I have a directory that I use as a working directory for a program. At the end of the procedure, its content is deleted. This directory, when I do an ls -l, appears to still take up some space. After a little research, I've seen on another board of this forum that it's not really taking... (5 Replies)
Discussion started by: bdx
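For context, this is expected ext2/ext3 behaviour: a directory grows extra index blocks as entries are added and does not give them back when the entries are deleted, so ls -ld keeps reporting the peak size. A sketch of the usual workaround (workdir is a hypothetical name):

    ls -ld workdir                            # still large, e.g. 1048576
    mv workdir workdir.old && mkdir workdir   # recreate a fresh directory
    rmdir workdir.old                         # old directory is empty, so rmdir works
    ls -ld workdir                            # back to a single block (4096)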

7. Shell Programming and Scripting

Grepping large list of files

Hi All, I need help to know the exact command to use when I grep a large list of files, using either ls or find. However, I do not want to search the subdirectories, as the number of subdirectories is not fixed. How do I achieve that? I want something like this: find ./ -name "MYFILE*.txt"... (2 Replies)
Discussion started by: angshuman
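A sketch of the usual answer ('pattern' is a placeholder): tell find not to descend.

    # GNU find: -maxdepth 1 stays in the current directory only
    find . -maxdepth 1 -type f -name 'MYFILE*.txt' -exec grep -l 'pattern' {} +

    # Portable equivalent: prune everything below the start directory
    find . ! -name . -prune -type f -name 'MYFILE*.txt' -exec grep -l 'pattern' {} +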

8. Shell Programming and Scripting

Grepping verbal forms from a large corpus

I want to extract verbal forms from a large corpus of English. I have identified a certain number of patterns. Each pattern has the following structure: SPACE word_CATEGORY, where word refers to the verbal form and CATEGORY refers to the class of the verb. The categories are identified as per the... (4 Replies)
Discussion started by: gimley
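A sketch of how such patterns are typically pulled out; the tag names VBD/VBG/VBN are placeholders, since the real category set is cut off above (GNU grep's -o prints only the matched part):

    # Extract tagged verbal forms and count the distinct ones:
    grep -oE ' [[:alpha:]]+_(VBD|VBG|VBN)' corpus.txt | sort | uniq -c | sort -rn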

9. Shell Programming and Scripting

Bash script search, improve performance with large files

Hello, For several of our scripts we are using awk to search patterns in files with data from other files. This works almost perfectly except that it takes ages to run on larger files. I am wondering if there is a way to speed up this process or have something else that is quicker with the... (15 Replies)
Discussion started by: SDohmen
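The replies to that thread aren't shown here, but two fixes usually help: keep the lookup values in an awk hash tested with 'in' (as sketched under discussion 5 above) rather than looping match() over every pattern per line, and, when the patterns are fixed strings, let grep -F do the scan in the C locale:

    # Byte-oriented fixed-string search; often far faster than the default locale:
    LC_ALL=C grep -F -f patterns.txt bigfile > hits.txt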
clfmerge(1)                          logtools                          clfmerge(1)

NAME
       clfmerge - merge Common-Log Format web logs based on time-stamps

SYNOPSIS
       clfmerge [--help | -h] [-b size] [-d] [file names]

DESCRIPTION
       The clfmerge program is designed to avoid using sort to merge multiple
       web log files. Web logs for big sites consist of multiple files in the
       >100M size range from a number of machines. For such files it is not
       practical to use a program such as gnusort to merge the files, because
       the data is not always entirely in order (so the merge option of
       gnusort doesn't work so well), but it is not in random order either
       (so doing a complete sort would be a waste). Also, the date field that
       is being sorted on is not particularly easy to specify for gnusort (I
       have seen it done but it was messy).

       This program is designed to simply and quickly sort multiple large log
       files with no need for temporary storage space or overly large buffers
       in memory (the memory footprint is generally only a few megs).

OVERVIEW
       It will take a number (from 0 to n) of file names on the command line;
       it will open them for reading and read CLF format web log data from
       them all. Lines which don't appear to be in CLF format (NB: they
       aren't parsed fully, only minimal parsing to determine the date is
       performed) will be rejected and displayed on standard error.

       If zero files are specified then there will be no error; it will just
       silently output nothing. This is for scripts which use the find
       command to locate log files and which can't be counted on to find any;
       it saves doing an extra check in your shell scripts.

       If one file is specified then the data will be read into a 1000 line
       buffer and removed from the buffer (and displayed on standard output)
       in date order. This is to handle web servers which date entries on the
       connection time but write them to the log at completion time, and thus
       generate log files that aren't in order (the Netscape web server does
       this - I haven't checked what other web servers do).

       If more than one file is specified then a line will be read from each
       file, and the file with the earliest time stamp will be read from
       until it returns a time stamp later than one of the other files; then
       the file with the earlier time stamp will be read. With multiple files
       the buffer size is 1000 lines or 100 times the number of files
       (whichever is larger). When the buffer becomes full, the first line is
       removed and displayed on standard output.

OPTIONS
       -b buffer-size
              Specify the buffer size to use. If 0 is specified, the
              sliding-window sorting of the data is disabled, which improves
              the speed.

       -d     Set domain-name mangling to on. This means that if a line
              starts with http://www.company.com/ as the name of the site
              that was requested, that prefix is removed from the start of
              the line and the GET / is changed to GET
              http://www.company.com/, which allows programs like Webalizer
              to produce good graphs for large hosting sites. It also
              converts the domain name to lower case.

EXIT STATUS
       0      No errors
       1      Bad parameters
       2      Can't open one of the specified files
       3      Can't write to output

AUTHOR
       This program, its manual page, and the Debian package were written by
       Russell Coker <russell@coker.com.au>.

SEE ALSO
       clfsplit(1), clfdomainsplit(1)

Russell Coker <russell@coker.com.au>     0.06                       clfmerge(1)
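A usage sketch based on the behaviour described above (the log paths are hypothetical; the documented zero-file case is exactly why the find pipeline is safe even when nothing matches):

    # Merge rotated access logs from several servers in time-stamp order:
    find /var/log/web -name 'access.log*' -print0 | xargs -0 clfmerge > merged.log

    # A larger sliding window for badly interleaved logs:
    clfmerge -b 5000 server1.log server2.log server3.log > merged.log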