I have two files:
1. A huge 8 GB text file (big_file.txt).
2. A huge list of words, approximately 8 million of them (words_file.txt), one word per line.
What I intend to do is read each word "w" from words_file.txt, search for it in big_file.txt, and extract the two words before and after each occurrence of "w".
A naive way is to simply run the command below for each word.
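A sketch of such a loop (GNU grep and the exact context pattern are assumptions, the original command is not preserved):

    # for every word, scan the whole 8 GB file and print each match
    # together with up to two words of context on either side
    while read -r w; do
        grep -oE "(\w+ ){0,2}$w( \w+){0,2}" big_file.txt
    done < words_file.txt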
But the above code is far too slow, as big_file.txt is read from disk once for every word.
I then tried to keep the entire big_file.txt in memory by modifying the above code (shown below), but this is still slow. Using the "top" command, I can see memory usage rising and falling as if big_file.txt were being read again and again for each "w". I want big_file.txt to be read just once.
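A sketch of that variant (the here-string approach is an assumption about what the modified code looked like):

    # read big_file.txt into a shell variable once
    contents=$(<big_file.txt)
    while read -r w; do
        # each here-string hands grep a fresh copy of the data, which
        # would explain the memory usage going up and down in top
        grep -oE "(\w+ ){0,2}$w( \w+){0,2}" <<< "$contents"
    done < words_file.txt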
The big_file.txt looks something like this (posting just a small sample of the file):
The words_file.txt looks like this (just a sample):
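For example (using the three test words mentioned below):

    obama
    primaries
    water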
The output that the code gives:
Any suggestions on how I can speed up the search and extraction? I am using bash on Linux.
Wouldn't it be easiest to put big_file.txt on a RAM disk and then read it from there? How to create a RAM disk depends on your system, but I'm sure there is one. It might require a reboot, though, to create one.
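A minimal sketch for Linux using tmpfs (the mount point and size are illustrative; requires root):

    # mount a RAM-backed filesystem and copy the big file into it
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=10g tmpfs /mnt/ramdisk
    cp big_file.txt /mnt/ramdisk/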
Thanks for your responses. When I ran RudiC's script, I saw no output for five minutes and memory usage crossed 60%, so I had to stop it, but I thank RudiC for his script. I have not tried a RAMdisk yet. I did, however, stumble upon a grep regular expression that can search for multiple words in one invocation. I am sure this will help speed things up, because logically it requires only one read of my large file. A simple grep command for just three words goes like this:
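Something like this (ERE alternation; the three words are the test words used below):

    grep -E 'obama|primaries|water' big_file.txt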
But when I try this in my command:
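Roughly this, combining the alternation with the context pattern (I may have the quoting or escaping slightly wrong, which could be part of the problem):

    grep -oE '(\w+ ){0,2}(obama|primaries|water)( \w+){0,2}' big_file.txt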
I do not see anything happening, i.e. no output and also no error message. I only want to reduce the number of file reads, and I am sure that will speed up my script considerably.
Hi.
I'm not going to post everything because I'm still thinking about it, but this version of the grep pattern seems to produce the expected output:
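Along these lines (a reconstruction; the surrounding context pattern is an assumption based on the replies below):

    grep -oE '([[:alnum:]]+ ){0,2}(obama|primaries|water)( [[:alnum:]]+){0,2}' big_file.txt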
producing:
Best wishes ... cheers, drl
Thanks, drl. I can see some improvement using your command, since all the words in "obama|primaries|water" are searched for at the same time. This will surely help reduce the number of iterations needed.
I can also improve upon your script a little bit by doing this:
I also found another way using GNU Parallel. I have tried several variations, but the one I am interested in is this (given here: GNU Parallel):
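Something like the documented pattern for grepping a big file in parallel (the block size is illustrative):

    # split big_file.txt into chunks and grep each chunk on its own core,
    # keeping the output in order
    parallel --pipepart --block 100M -a big_file.txt -k grep -f words_file.txt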
But the above code does not do what I really want, so I tried modifying it to this:
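My modification (a sketch; the hard-coded word list here stands in for what should really come from words_file.txt):

    # same chunked pass, but extract matches with two words of context
    parallel --pipepart --block 100M -a big_file.txt -k \
        grep -oE '(\w+ ){0,2}(obama|primaries|water)( \w+){0,2}'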
There are still some issues with how to include the word patterns from words_file.txt in this command.
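One possible way to wire the two together (a sketch; patterns.txt is a hypothetical intermediate file, and whether grep copes with 8 million patterns at once is a separate question):

    # hypothetical: turn each word into a context-capturing ERE, once up front
    sed 's/.*/(\\w+ ){0,2}&( \\w+){0,2}/' words_file.txt > patterns.txt
    # one chunked pass over big_file.txt against the whole pattern file
    parallel --pipepart --block 100M -a big_file.txt -k grep -oE -f patterns.txt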