Don't use a loop to get this done: you're processing the 2.5GB details.txt file once for each name in emp.txt. So if you had 2 names in emp.txt you'd be processing 5GB of details.txt; 10 names = 25GB. It doesn't scale well that way.
Try this:
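(The original command didn't survive in this quote; under the thread's file names -- emp.txt holding the names, details.txt holding the records -- the single-pass version presumably looked something like this sketch, with invented sample data:)

```shell
# Stand-ins for the real files from the thread (contents invented):
cat > emp.txt <<'EOF'
alice
bob
EOF
cat > details.txt <<'EOF'
alice,100,sales
carol,200,hr
bob,300,it
EOF

# Single pass: grep reads every name from emp.txt, then scans
# details.txt once. -F treats the names as fixed strings, not regexes.
grep -F -f emp.txt details.txt > matched.txt
```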
Then you are only processing details.txt once (plus emp.txt, however big that is).
Using -F might also save some time. If you don't have the -F option, look for fgrep.
But on HP-UX the standard grep should have the -F option available.
Last edited by rwuerth; 11-17-2011 at 01:59 PM..
111111111100000000001111111111
123232323200000010001114545454
232435424200000000001232131212
342354234301000000002323423443
232435424200000000001232131212
2390898994200000000001238908092
This is the record format.
From the 11th position to the 20th position in a record there are 0's occurring, and... (6 Replies)
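(The ask above is cut off, so the exact condition is a guess; assuming the task is to pick out records whose 11th-20th positions are all zeros, a fixed-position test with awk's substr is the usual sketch:)

```shell
# A few of the sample records quoted in the post
cat > records.txt <<'EOF'
111111111100000000001111111111
123232323200000010001114545454
232435424200000000001232131212
EOF

# substr is 1-based: substr($0, 11, 10) is positions 11 through 20.
awk 'substr($0, 11, 10) == "0000000000"' records.txt > zeros.txt
```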
Hi,
I have to find out the run time for 40-45 different components. These components write into a generic log file in a single directory.
eg.
directory is LOG and the log file name format is generic_log_<process_id>_<date YY_MM_DD_HH_MM_SS>.log
I am taking the run time using the time... (3 Replies)
I have file which contains around 5000 lines.
The lines are fixed length with no delimiter. Each line contains nearly 3000 characters.
I want to delete the lines
a> if it starts with 1 and the 576th position is a digit, i.e. 0-9
or
b> if it starts with 0 or 9 (i.e. header and footer)
... (4 Replies)
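(One way to express rules a> and b> in a single pass is an awk filter. The sample lines below are shortened stand-ins for the ~3000-character originals; only the first character and position 576 matter for the rules:)

```shell
# Build three stand-in lines padded with spaces:
{
  printf '1%574s5%4s\n' '' ''   # rule a: starts with 1, digit at 576 -> delete
  printf '1%574s %4s\n' '' ''   # starts with 1, blank at 576         -> keep
  printf '9%578s\n' ''          # rule b: header/footer record        -> delete
} > data.txt

# Print only the lines that match neither deletion rule.
awk '
  substr($0, 1, 1) == "1" && substr($0, 576, 1) ~ /[0-9]/ { next }  # rule a
  substr($0, 1, 1) ~ /[09]/                               { next }  # rule b
  { print }
' data.txt > kept.txt
```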
Background
-------------
The Unix flavor can be any of Solaris, AIX, HP-UX and Linux. I have the 2 flat files below.
File-1
------
Contains 50,000 rows with 2 fields in each row, separated by pipe.
Row structure is like Object_Id|Object_Name, as following:
111|XXX
222|YYY
333|ZZZ
... (6 Replies)
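(The question is truncated, so this is only a sketch of the usual single-pass pattern for this kind of setup: load File-1 into an awk lookup array keyed on Object_Id. The File-2 layout below is invented:)

```shell
# File-1 as described (Object_Id|Object_Name)
cat > file1 <<'EOF'
111|XXX
222|YYY
333|ZZZ
EOF
# A hypothetical File-2 that references the same Object_Ids
cat > file2 <<'EOF'
111|2011-11-17
999|2011-11-18
EOF

# NR==FNR is true only while reading the first file: build the lookup
# array once, then resolve each File-2 row in a single pass.
awk -F'|' '
  NR == FNR  { name[$1] = $2; next }
  $1 in name { print $1 "|" name[$1] }
' file1 file2 > resolved.txt
```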
Dear All,
Good Evening!!
I have a requirement to ftp a 220GB backup file to a remote backup server.
I wrote a script for this purpose.
But it takes more than 8 hours to transfer this file.
Is there any other method to do it in less time???
Thanks in Advance!!!
---------- Post updated... (5 Replies)
Hi Experts,
I had to edit a particular value in the header line of a very huge file, so I wanted to search & replace a particular value in a file which was 24 GB in size. I managed to do it, but it took a long time to complete. Can anyone please tell me how we can do it in an optimised... (7 Replies)
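(One technique worth knowing here: any sed/awk rewrite of line 1 still copies all 24 GB. If the replacement value has exactly the same length as the original, the header bytes can instead be patched in place with dd, touching only those bytes. A small sketch -- file name, field, and offset are all invented:)

```shell
# Small stand-in for the 24 GB file
printf 'HDR 20111116\nrow1\nrow2\n' > big.txt

# The date field starts at byte offset 4 (0-based) on line 1; write the
# same-length replacement directly. conv=notrunc keeps the rest intact.
printf '20111117' | dd of=big.txt bs=1 seek=4 conv=notrunc 2>/dev/null
```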
Hi,
I have created a shell script for a server log automation process. I have used a
find | xargs grep command to search for the string.
For example:
find -name | xargs grep "816995225" > test.txt
Here my problem is: we have a lot of records and we want to grep the string... (4 Replies)
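(The quoted command lost its -name pattern; a more complete sketch, assuming *.log files under a LOG directory -- directory name and sample data invented here -- might be:)

```shell
# Invented stand-ins for the log directory and files
mkdir -p LOG
printf 'a 816995225 b\n' > LOG/generic_log_1.log
printf 'nothing here\n'  > LOG/generic_log_2.log

# -print0 / xargs -0 keeps odd file names safe; grep -F searches the
# ID as a fixed string instead of compiling it as a regex.
find LOG -name '*.log' -print0 | xargs -0 grep -F '816995225' > test.txt
```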
I'm trying to remove duplicate data from an input file with unsorted data which is of size >50GB and write the unique records to a new file.
I have already tried a variety of options posted in similar threads/forums, but no luck so far.
Any suggestions please ?
Thanks !! (9 Replies)
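(One commonly suggested order-preserving approach is the awk "first occurrence wins" idiom. Whether it fits depends on how many unique lines the 50 GB file has, since the array lives in memory; with few uniques it avoids the temp-disk cost of sort -u. A small sketch with invented data:)

```shell
# Small stand-in for the unsorted input
cat > input.txt <<'EOF'
b
a
b
c
a
EOF

# seen[$0]++ is 0 the first time a line appears, so !seen[$0]++ prints
# only first occurrences, preserving the original order.
awk '!seen[$0]++' input.txt > unique.txt
```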
Hi All,
I am new to this forum and this is my first post.
My requirement is to optimize the time taken to grep a file with 40000 lines.
There are two files, FILEA (40000 lines) and FILEB (40000 lines).
The requirement is like this: both files will be in the format below... (11 Replies)
Hi All,
This query is regarding performance improvement of a command.
I have a list of IDs in a file (say file1, with a single ID column) and file2 has the data rows.
I need to take the IDs from file1 and search file2; matching rows from file2 should be written to file3.
For this... (4 Replies)
Discussion started by: Tanu
4 Replies
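(A sketch of the usual non-loop answer to the file1/file2/file3 question above, with invented sample IDs and rows: feed the whole ID list to one grep invocation so file2 is scanned only once:)

```shell
# Invented sample data: file1 holds the ids, file2 the data rows
cat > file1 <<'EOF'
1002
1005
EOF
cat > file2 <<'EOF'
1001 rowA
1002 rowB
1005 rowC
EOF

# One grep invocation, one scan of file2. -F: ids are fixed strings;
# -w: whole-word match, so an id like 100 would not also hit 1002.
grep -F -w -f file1 file2 > file3
```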
LEARN ABOUT CENTOS
gensprep
gensprep(8) ICU 50.1.2 Manual gensprep(8)
NAME
gensprep - compile StringPrep data from files filtered by filterRFC3454.pl
SYNOPSIS
gensprep [ -h, -?, --help ] [ -v, --verbose ] [ -c, --copyright ] [ -s, --sourcedir source ] [ -d, --destdir destination ]
DESCRIPTION
gensprep reads filtered RFC 3454 files and compiles their information into a binary form. The resulting file, <name>.icu, can then be read
directly by ICU, or used by pkgdata(8) for incorporation into a larger archive or library.
The files read by gensprep are described in the FILES section.
OPTIONS
-h, -?, --help
Print help about usage and exit.
-v, --verbose
Display extra informative messages during execution.
-c, --copyright
Include a copyright notice into the binary data.
-s, --sourcedir source
Set the source directory to source. The default source directory is specified by the environment variable ICU_DATA.
-d, --destdir destination
Set the destination directory to destination. The default destination directory is specified by the environment variable ICU_DATA.
ENVIRONMENT
ICU_DATA Specifies the directory containing ICU data. Defaults to /usr/share/icu/50.1.2/. Some tools in ICU depend on the presence of the
trailing slash. It is thus important to make sure that it is present if ICU_DATA is set.
FILES
The following files are read by gensprep and are looked for in the source /misc for rfc3454_*.txt files and in source /unidata for NormalizationCorrections.txt.
rfc3454_A_1.txt Contains the list of unassigned code points in Unicode version 3.2.0....
rfc3454_B_1.txt Contains the list of code points that are commonly mapped to nothing....
rfc3454_B_2.txt Contains the list of mappings for casefolding of code points when Normalization form NFKC is specified....
rfc3454_C_X.txt Contains the list of code points that are prohibited for IDNA.
NormalizationCorrections.txt
Contains the list of code points whose normalization has changed since Unicode Version 3.2.0.
VERSION
50.1.2
COPYRIGHT
Copyright (C) 2000-2002 IBM, Inc. and others.
SEE ALSO
pkgdata(8)
ICU MANPAGE 18 March 2003 gensprep(8)