Operating Systems > HP-UX: Performance issue with 'grep' command for huge file size
Post 302574570 by arb_1984, Thursday 17th of November 2011, 03:07:31 PM
Thank you all for your quick responses! Thanks a lot, rwuertn; the '-F' option is working and I can now extract the required data in much less time.

For reference, the files look like this:
Code:
emp.txt
------------
John
Kevin
Prakash
Susan
Ken

details.txt
-------------
HDR|Prakash D
DTL|Prakash|EMP0000010|Sr Associate|FL
HDR|Kevin T
DTL|Kevin|EMP0000004|Analyst|IL
HDR|John M
DTL|John|EMP0000184|Manager|CA
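
For anyone finding this thread later: the exact command rwuertn suggested is not quoted above, so take the following as a minimal sketch of how '-F' is typically applied to this pair of files, treating each name in emp.txt as a literal (fixed) string rather than a regular expression:

Code:
# Read the name list as fixed-string patterns (-F) from emp.txt (-f)
# and pull every matching record out of details.txt in a single pass.
# "matched.txt" is only an example output name.
grep -F -f emp.txt details.txt > matched.txt

One thing to keep in mind: -F still matches anywhere on the line, so a short name can also hit inside a longer one (for example "Ken" inside "Kevin"). If that ever matters, -w (match whole words, where the grep implementation supports it) or an awk comparison against the second field tightens things up.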

Thanks again!
