03-27-2014
Still no reply -- is this 32-bit Windows running Cygwin running whatever? If so, the 4G address space can make hash tools fail, often not gracefully, and often well before the nominal 4G limit (e.g., around 1.7G), stumbling over some signed 32-bit int along the way, on top of the address space already consumed by code and other data. It sounds like you need a 64-bit CPU and OS.
Once you run past the RAM, the sequential reads and writes of sort may outperform the random access pattern of a hash. Also, not all hash implementations are written for dynamic expansion of the bucket count, so the amount of linear searching inside each bucket may grow. In RogueWave, for instance, you should set the bucket count according to the size of the set at the start.
Extendible hashing - Wikipedia, the free encyclopedia
HASH(3) BSD Library Functions Manual HASH(3)
NAME
hash -- hash database access method
SYNOPSIS
#include <sys/types.h>
#include <db.h>
DESCRIPTION
The routine dbopen() is the library interface to database files. One of the supported file formats is hash files. The general description
of the database access methods is in dbopen(3); this manual page describes only the hash-specific information.
The hash data structure is an extensible, dynamic hashing scheme.
The access method specific data structure provided to dbopen() is defined in the <db.h> header as follows:
typedef struct {
u_int bsize;
u_int ffactor;
u_int nelem;
u_int cachesize;
uint32_t (*hash)(const void *, size_t);
int lorder;
} HASHINFO;
The elements of this structure are as follows:
bsize bsize defines the hash table bucket size, and defaults to 4096 for in-memory tables. If bsize is 0 (no bucket size is specified),
a bucket size is chosen based on the underlying file system I/O block size. It may be preferable to increase the page size for
disk-resident tables and tables with large data items.
ffactor ffactor indicates a desired density within the hash table. It is an approximation of the number of keys allowed to accumulate in
any one bucket, determining when the hash table grows or shrinks. The default value is 8.
nelem nelem is an estimate of the final size of the hash table. If not set or set too low, hash tables will expand gracefully as keys
are entered, although a slight performance degradation may be noticed. The default value is 1.
cachesize A suggested maximum size, in bytes, of the memory cache. This value is only advisory, and the access method will allocate more
memory rather than fail.
hash hash is a user-defined hash function. Since no hash function performs equally well on all possible data, the user may find that
the built-in hash function does poorly on a particular data set. User-specified hash functions must take two arguments (a
pointer to a byte string and a length) and return a 32-bit quantity to be used as the hash value.
lorder The byte order for integers in the stored database metadata. The number should represent the order as an integer; for example,
big endian order would be the number 4,321. If lorder is 0 (no order is specified), the current host order is used. If the file
already exists, the specified value is ignored and the value specified when the hash table was created is used.
If the file already exists (and the O_TRUNC flag is not specified), the values specified for the parameters bsize, ffactor, lorder, and nelem
are ignored and the values specified when the hash table was created are used.
If a hash function is specified, hash_open() will attempt to determine if the hash function specified is the same as the one with which the
database was created, and will fail if it is not.
ERRORS
The hash access method routines may fail and set errno for any of the errors specified for the library routine dbopen(3).
SEE ALSO
btree(3), dbopen(3), mpool(3), recno(3)
Per-Ake Larson, "Dynamic Hash Tables", Communications of the ACM, Issue 4, Volume 31, April 1988.
Margo Seltzer, "A New Hash Package for UNIX", Proceedings of the 1991 Winter USENIX Technical Conference, USENIX Association,
http://www.usenix.org/publications/library/proceedings/seltzer2.pdf, 173-184, January 1991.
BUGS
Only big and little endian byte orders are supported.
BSD                             December 16, 2010                             BSD