Quote:
Originally Posted by Corona688
It doesn't work that way. It doesn't run 16384 individual merges simultaneously, then 4096 merges simultaneously, then 1024, etc etc. It always does the number you tell it to, as many times as it takes to process the list of things to do.
That makes no sense to me. If I tell it to make 1 GB temporaries, my 13 GB test file will produce 13 of them and probably need just one merge pass. If I tell it to make 1 MB temporaries, it will produce about 13,000 of them, and will likely have to merge the outputs of the first round of merges at least one extra time, handling the same data more times with more disk I/O.
My real TB-sized inputs will see this effect even more strongly.
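To put numbers on that, here's a toy model (not sort's actual code) of how the temporary-file count and merge-pass count fall out of the buffer size, assuming GNU sort's default 16-way merge fan-in (--batch-size); everything else is just arithmetic:

Code:
import math

def merge_passes(file_size, buf_size, batch_size=16):
    """Toy model: count merge passes for an external merge sort,
    assuming GNU sort's default 16-way --batch-size. Real behavior
    also depends on record boundaries and compression of temps."""
    temps = math.ceil(file_size / buf_size)
    passes = 0
    while temps > 1:
        temps = math.ceil(temps / batch_size)
        passes += 1
    return passes

GiB = 1 << 30
MiB = 1 << 20

for buf in (1 * GiB, 1 * MiB):
    p = merge_passes(13 * GiB, buf)
    # Each pass rereads and rewrites the whole data set, so disk
    # traffic grows linearly with the number of passes.
    print(f"buffer {buf >> 20:>5} MiB: {p} merge pass(es), "
          f"{p + 1} full trips over the data on disk")

With 1 GiB buffers the 13 temporaries fit in a single 13-way merge; with 1 MiB buffers the ~13,000 temporaries need four 16-way rounds, so the same bytes cross the disk several extra times.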
BTW, on skimming the sort source code, I found that by default it tries to make the buffer-size parameter 1/8 of physical memory: 4 GB on my machine. That was so slow that I started this investigation. Smaller turned out to be better, possibly because neither the TLB nor the CPU caches could cope with such a huge working set during the initial in-RAM merges.
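For reference, here's a quick way to see what that one-eighth rule works out to on a given box; this just mirrors the arithmetic described above using Linux's sysconf values, not a call into sort itself:

Code:
import os

# Physical RAM in bytes, via POSIX sysconf (works on Linux).
phys = os.sysconf("SC_PHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")

# The 1/8-of-RAM default described above; on a 32 GiB machine
# this is the 4 GiB buffer that proved so slow.
default_buf = phys // 8
print(f"RAM: {phys / 2**30:.1f} GiB, "
      f"default sort buffer: {default_buf / 2**30:.1f} GiB")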
Quote:
I don't think small sorts hurt you, especially since they're small enough to be cached. What hurts you are merges on too many files at once for the disk to seek between, reducing its I/O throughput.
Remember you are trying to find a "sweet spot" where CPU use and disk throughput are both at peak -- where the system can sustain full disk and cpu use.
Not sure what you mean about small sorts; I can't have them. I have large files, requiring large sorts. The files are large enough that I do not expect anything from a previous merge pass to still be in the page cache if the data has to be merged again, and I expect the data from different inputs to be far enough apart on disk to require seeks no matter what I do. Accordingly, I'd like to minimize the number of merge passes.
Even with these large sizes, sweet spots are exactly what I'm looking for.
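Since the sweet spot is empirical, one approach is to sweep --buffer-size and --batch-size and time each combination. A minimal sketch; the ./testdata path is a stand-in for whatever test file you use, and it relies on GNU sort's -S, --batch-size, -T, and -o options:

Code:
import subprocess
import time

# Hypothetical paths; adjust for your setup.
INPUT, OUTPUT, TMPDIR = "./testdata", "/dev/null", "/tmp"

for bufsize in ("64M", "256M", "1G"):      # -S / --buffer-size values to try
    for batch in ("16", "64"):             # --batch-size (merge fan-in)
        t0 = time.monotonic()
        subprocess.run(
            ["sort", "-S", bufsize, f"--batch-size={batch}",
             "-T", TMPDIR, "-o", OUTPUT, INPUT],
            check=True)
        print(f"-S {bufsize:>4} --batch-size {batch:>2}: "
              f"{time.monotonic() - t0:7.1f} s")

For honest comparisons you'd also want each run to start with a cold page cache (e.g. by writing 3 to /proc/sys/vm/drop_caches between runs), or later runs will be flattered by cached data.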
Quote:
Check if you're eating into swap sometimes. Hitting swap could have severe performance penalties that throw off your tests.
I'm looking. But with the modest parameters I'm giving it, I hardly expect my 32 GB of RAM to need to swap. The largest buffer-size I've ever tried is 4g, and that was already a bad idea that I don't repeat any more. I'm distinctly sub-gigabyte now.
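One cheap way to confirm swap isn't involved is to compare the kernel's cumulative swap counters before and after a run; a Linux-only sketch reading pswpin/pswpout from /proc/vmstat:

Code:
def swap_counters():
    """Read cumulative swap-in/out page counts from /proc/vmstat (Linux)."""
    counters = {}
    with open("/proc/vmstat") as f:
        for line in f:
            key, value = line.split()
            if key in ("pswpin", "pswpout"):
                counters[key] = int(value)
    return counters

before = swap_counters()
# ... run the sort under test here ...
after = swap_counters()
for key in ("pswpin", "pswpout"):
    delta = after[key] - before[key]
    print(f"{key}: {delta} pages")  # nonzero deltas mean the run hit swap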