Is there a 'fuzzy search' facility in Linux?


 
# 1  
Old 11-03-2010
Is there a 'fuzzy search' facility in Linux?

I have over 10m documents that I want to search against a list of known keywords; however, the documents were produced using a technique that isn't perfect in how the text was captured.

Is there a fuzzy keyword search available on Linux, or can anyone think of a way of doing it that isn't horrendously time-consuming?

Example Keyword

Banana

Search, therefore, case-insensitively for...

Banana
Banan*
Bana*a
Ban*na
Ba*ana
B*nana
*anana

Bana**
Ban*n*
Ba*an*
B*nan*
*anan*
Ban**a
Ba*a*a
B*na*a
*ana*a

and so on.....

With 500 keywords and an average of 10 characters per word, that's over 50k 'fuzzy searches' per page to cover all the permutations. For words above 9 characters you'd probably want more than 2 wildcard characters per word, which ramps up the number of searches even more.
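For what it's worth, here's a rough bash sketch of the brute-force route I mean (the keyword and file name are just placeholders): generate every single-wildcard variant of one keyword and grep for each. Nest a second loop for the two-wildcard variants and you can see how quickly the pattern count explodes.

Code:
#!/bin/bash
# Sketch only: single-wildcard variants of one keyword, searched case-insensitively.
keyword="banana"
page="page.txt"

for (( i=0; i<${#keyword}; i++ )); do
    pattern="${keyword:0:i}.${keyword:i+1}"   # '.' = any single character
    grep -iE -- "$pattern" "$page"
done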

Ideas please?
# 2  
Old 11-03-2010
10m documents = 10,000,000 files?

500 keywords of ~10 characters?

Well, grep -E would be a bit challenged. If you have the resources, you could, for every file, emit every word as a short line (word file# line#), sort them all while eliminating duplicates, then merge with a sorted keyword list using join, and now you have an index.

The intermediate list is a very big sort, but the join efficiently trims the list. Perhaps it would help to remove obvious nuisance words like 'a', 'the', and 'and'. The join command needs a flat file, as it likes to seek back to where it started; that is necessary when doing a Cartesian product, but not when one list is unique. I have a streaming join, m1join.c, that can do this merge on a pipe from the sort.
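Something along these lines, perhaps (a sketch only; the paths, file names and field layout are placeholders, and for this volume you would want to tune sort's memory and temp-space options):

Code:
#!/bin/bash
# Sketch: build a "word file# line#" index for every document, then keep
# only the lines whose word appears in a sorted, lowercase keyword list.
docdir="/data/docs"
keywords="keywords.sorted"      # one lowercase keyword per line, sorted

n=0
for f in "$docdir"/*; do
    n=$((n + 1))
    awk -v fn="$n" '{ gsub(/[^[:alnum:] ]/, " ")
                      for (i = 1; i <= NF; i++) print tolower($i), fn, NR }' "$f"
done | sort -u > wordindex.txt

# join on the first field (the word); output is the surviving index lines
join wordindex.txt "$keywords" > hits.txt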
# 3  
Old 11-03-2010
Quote:
Originally Posted by DGPickett
10m documents = 10,000,000 files?

500 keywords of ~10 characters?

Well, grep -E would be a bit challenged. If you have the resources, you could, for every file, emit every word as a short line (word file# line#), sort them all while eliminating duplicates, then merge with a sorted keyword list using join, and now you have an index.

The intermediate list is a very big sort, but the join efficiently trims the list. Perhaps it would help to remove obvious nuisance words like 'a', 'the', and 'and'. The join command needs a flat file, as it likes to seek back to where it started; that is necessary when doing a Cartesian product, but not when one list is unique. I have a streaming join, m1join.c, that can do this merge on a pipe from the sort.
It's about 30m pages of text (the average appears to be 3 pages per document)

I'm not sure that producing an intermediate list per page would help. Assuming 2,000 words per 3 pages (the density is quite high), doing that processing would still be horrendously time-intensive for 10m documents, surely?

It's an interesting idea though and I'll try to throw something together to do some time tests.

I have a reasonable amount of processing resources in the form of a few multicore, hyperthreaded machines, so I could allocate about 34 'virtual' machines to this, but even so it's a fair amount of processing! I've just calculated that even at only 1 second per document processed (likely very optimistic) it would require about 3.5 days with all the virtual machines running 24x7. If, as is more likely, each document takes say 10 seconds to process, we're into 35 days of 24x7... Yikes!
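For reference, the back-of-the-envelope arithmetic behind those estimates:

Code:
# 10,000,000 documents / 34 workers / 86,400 seconds per day
echo 'scale=1; 10000000 / 34 / 86400' | bc        # ~3.4 days at 1 s per document
echo 'scale=1; 10000000 * 10 / 34 / 86400' | bc   # ~34 days at 10 s per document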

I was hoping there would be a standard function or program that I could use: just pump the keywords in and point it at the pages. Ho hum, back to the drawing board!

Last edited by Bashingaway; 11-03-2010 at 01:03 PM.. Reason: added estimated times
# 4  
Old 11-03-2010
Is there a Google Desktop for LINUX Xwindows yet? It's a Google world: you have to look to know, and imagine to look. Why, yes:
http://www.google.com/search?q=googl...x=&startPage=1
# 5  
Old 11-03-2010
Quote:
Originally Posted by DGPickett
Is there a Google Desktop for LINUX Xwindows yet? It's a Google world: you have to look to know, and imagine to look. Why, yes:
google desktop linux - Google Search
Google does not index using a "fuzzy" algorithm, as I recall.

Google indexes, as I recall, using a Bayesian classifier.

There is a difference (quite a difference) between indexing and retrieval with a fuzzy algorithm versus indexing with a Bayesian classifier.

---------- Post updated at 17:18 ---------- Previous update was at 17:14 ----------

OBTW, on fuzzy search, read this reference:

Quote:
Fuzzy search

Fuzzy search searches for words that are spelled in a similar way to the search term.

Example

SELECT AUTHOR, TITLE
FROM DB2EXT.TEXTTAB
WHERE CONTAINS(COMMENT,
'fuzzy form of 80 "pullitzer"') =1

In this example, the search could find an occurrence of the misspelled word pulitzer.

The match level, in the example “80”, specifies the desired degree of accuracy. Use fuzzy search when misspellings are possible in the document. This is often the case when an Optical Character Recognition device, or phonetic input creates the document. Use values between 1 and 100 to show the degree of fuzziness, where 100 is an exact match and anything below 80 is increasingly "fuzzy".
Note: If the fuzzy search does not provide the appropriate degree of accuracy, search for parts of a term using character masking.
I think you can easily find a "fuzzy" indexer to run on Linux.

If you find one (in PHP), let me know. I may implement fuzzy search as an additional capability on this site.

---------- Post updated at 17:21 ---------- Previous update was at 17:18 ----------

OBTW, as a side note, you could probably use a Bayesian classifier to assist in building a fuzzy searcher or indexer. I've not looked into this, but a bit of Googling around might yield some useful peach fuzz.

---------- Post updated at 17:25 ---------- Previous update was at 17:21 ----------

Here is something interesting.....


Approximate/fuzzy string search in PHP


Quote:
This PHP class, approximate-search.php, provides non-exact text search (often called fuzzy search or approximate matching).

It allows you to specify a Levenshtein edit distance threshold, i.e. an error limit for a match. For example, a search for kamari with a threshold of 1 error would match kamari, kammari, kaNari and kamar but not kaNar.

The code is optimized for repeated searching of the same string, e.g. walking through rows of a database.
# 6  
Old 11-03-2010
Hi.

I've been using glimpse and agrep for a number of years. I index my files overnight, every night. See

Webglimpse and Glimpse: advanced site search software for Unix : index websites or intranets

However, it sounds like you need only the glimpse package. If I recall correctly, it includes glimpseindex and glimpse. The index files are not small, but storage is cheap these days.

An example from man glimpse:
Code:
       glimpse -1 'Tuson;Arezona'

       will output all lines containing both patterns, allowing one spelling
       error in any of the patterns (either insertion, deletion, or
       substitution), which in this case is definitely needed.

There are lots of options for the indexing and searching.
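A possible workflow might look something like the sketch below; this is from memory and the directory names are placeholders, so check man glimpseindex and man glimpse for the exact options in your build.

Code:
# build (or rebuild) the index, storing the index files under ~/gindex
mkdir -p ~/gindex
glimpseindex -H ~/gindex /data/docs

# search the indexed collection: case-insensitive, allowing up to 2 errors
glimpse -H ~/gindex -i -2 'banana'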

You could test how you like the fuzzy search by installing agrep and using that on a few text files without doing the indexing. The agrep package I use is available in my Debian repository.
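For such a quick test, something like this (again only a sketch; the file name is a placeholder, and note that the classic Wu-Manber agrep takes -2 for two errors, while other implementations such as TRE agrep use -E 2 instead):

Code:
# lines approximately matching "banana": up to 2 insertions, deletions,
# or substitutions, case-insensitive
agrep -2 -i 'banana' sample_page.txt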

Good luck ... cheers, drl