I have a huge collection of files in a directory, about 200,000 of them. I have the command below, but it only uses one core of the machine; I want it to run the task in parallel.
This is the command that I want to run in parallel:
sort testfile | uniq -c | sort -nr
I know how to run the sort command by itself in parallel:
cat testfile | parallel --pipe --files sort | parallel -Xj1 sort -m {} ';' rm {}
Can anyone please help me run the whole pipeline in parallel? I am using Linux with GNU parallel installed.
You cannot parallelize that pipeline. Think about it.
sort testfile has to read and process the entire contents of 'testfile' before it can even output a single line. Otherwise, it's possible that a line not yet read from 'testfile' would need to precede an already printed line.
The same goes for sort -nr.
In theory, the only aspect of this pipeline that can be parallelized is uniq -c. However, for that to work, the first sort's output would need to be carefully chopped at the boundaries between runs of identical lines, and the output from the counting tasks would have to be recombined before being fed to the numeric sort. Furthermore, you cannot count on this working with pipes if line lengths may exceed PIPE_BUF (non-atomic write()s can interleave). Temporary files would be required, plus some mechanism for knowing when a temp file is complete (renaming, moving, locking, etc.).
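To make that recombination step concrete, here is a minimal sketch of summing per-chunk uniq -c counts with awk. The chunk files are hypothetical stand-ins, not part of the original pipeline:

```shell
# Hypothetical per-chunk `uniq -c` outputs where a run of identical "aaa"
# lines was split across a chunk boundary:
printf '      3 aaa\n      1 bbb\n' > chunk1.counts
printf '      2 aaa\n      5 ccc\n' > chunk2.counts

# Recombine: sum the counts per line across all chunks, then feed the
# totals to the final numeric sort as the original pipeline would.
cat chunk1.counts chunk2.counts \
  | awk '{ n[$2] += $1 } END { for (l in n) printf "%7d %s\n", n[l], l }' \
  | sort -nr
```

This is exactly the bookkeeping the pipeline gets for free by using a single uniq -c.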
In short, you cannot do better than the original pipeline. If you have multiple cores, your system is probably running the processes on different cores. The reason you're probably only seeing one of them utilized is because a sort is running and hasn't yet completed. While that's happening, everything downstream must sleep.
Hmmm. I may have leapt before I looked, since I don't know anything about GNU Parallel.
I will have to study that tool's documentation and your solution. Thank you for giving me something interesting to do while a storm runs its course outside.
I skimmed GNU Parallel's documentation, focusing on the options you've used.
As I understand it, this will spawn one instance of sort per CPU. parallel reads roughly 1 megabyte of data (give or take the length of a line) from its standard input and feeds those chunks to the sorts in turn. parallel writes each sorted chunk to a temporary file and writes that filename to its standard output.
Nifty. Saves us the trouble of manually splitting the original file, spawning multiple sorts, and managing temp files.
Not as nifty. Runs one instance of sort to merge the sorted files whose names were generated by the previous parallel. Then deletes those files.
We can accomplish that with plain xargs: xargs sh -c 'sort -m "$@" && rm "$@"' sh
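A minimal sketch of that xargs idiom, using hand-made sorted pieces in place of the temp files parallel would print (the file names here are made up):

```shell
# Stand-ins for the sorted temp files whose names parallel prints:
printf 'a\nc\ne\n' > piece1
printf 'b\nd\nf\n' > piece2

# xargs passes the filenames from stdin as "$@" to the inline script,
# which merges the already-sorted pieces and then deletes them.
printf '%s\n' piece1 piece2 \
  | xargs sh -c 'sort -m "$@" && rm "$@"' sh > merged

cat merged   # prints a b c d e f, one per line
```

The trailing `sh` is the inline script's $0, so "$@" holds only the filenames.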
DANGER AHEAD!!!
This component is broken (even if it usually gives the correct result). parallel's --pipe will by default decompose the input into line-oriented chunks of approximately 1 megabyte in size. It has no knowledge of the contents of that input. If a sequence of identical lines spans more than one chunk, the output will show multiple, consecutive counts for the same line (whose values should sum to the correct value), because multiple instances of uniq see those identical lines.
Without a smarter way to distribute the data, you'll have to use a single instance of uniq to achieve reliably correct results.
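The failure mode is easy to reproduce without parallel at all; in this sketch, split stands in for --pipe's line-oriented chunking:

```shell
# Six identical lines; split them into two 3-line "chunks" the way
# parallel --pipe would at a block boundary.
printf 'aaa\n%.0s' 1 2 3 4 5 6 > data
split -l 3 data chunk.

# Per-chunk uniq -c double-counts the run that straddles the boundary...
for c in chunk.*; do uniq -c "$c"; done   # two lines, each counting 3

# ...while a single uniq -c over the whole stream is correct.
uniq -c data                              # one line counting 6
```

The per-chunk counts do sum to the right total, but the downstream sort -nr would rank the split entries by their partial counts, which is what makes the result unreliable.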
I did not test as the documentation seemed sufficiently clear on the workings of --pipe and I don't have parallel installed on any machine.
If my analysis is incorrect, I look forward to learning some more.
I've tested uniq -c with parallel. As of now it gives correct results on some test files I generated myself. But if it's not reliable, I am removing it.