Quote:
Originally Posted by kogorman3
Nice to know. I tried e4defrag and it showed a fragmentation score of 0 on all directories.
I'm still a bit new to this, even after peeking at the source code. But it seems to me that there are two distinct phases to GNU sort. Both are merge sorts, but there's a big difference between merging in RAM and merging disk files.
Not really.
If your files can fit in memory, whichever way you do it -- page cache or sort's own buffers -- the data ends up held in RAM.
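For what it's worth, here's a rough Python sketch of the external merge sort shape being described -- not GNU sort's actual C code, and the chunk size is arbitrary -- just to show that phase 1 sorts chunks in RAM and phase 2 merges the resulting temp files, and that both phases read from cache if the data fits:

Code:
import heapq
import os
import tempfile

def sort_large_file(path, chunk_lines=100_000):
    """Phase 1: sort fixed-size chunks in RAM and spill them to temp
    files. Phase 2: k-way merge the sorted runs. A sketch, not sort(1)."""
    run_paths = []
    with open(path) as f:
        while True:
            chunk = [line for _, line in zip(range(chunk_lines), f)]
            if not chunk:
                break
            chunk.sort()  # the in-RAM "sort" phase
            tmp = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
            tmp.writelines(chunk)
            tmp.close()
            run_paths.append(tmp.name)
    # the "merge" phase: heapq.merge pulls one line at a time from each
    # run, so every temp file is read sequentially
    runs = [open(p) for p in run_paths]
    try:
        yield from heapq.merge(*runs)
    finally:
        for r in runs:
            r.close()
        for p in run_paths:
            os.remove(p)

Use it like any generator: for line in sort_large_file("big.txt"): ... -- the merge phase only starts reading once you iterate.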
Quote:
The main point of my optimizing effort turned out to be minimizing the number of file merges.
Yes, merging too many files at once is bad, since the disk has to seek to each of them individually.
I don't think this has as much to do with the size of the temp files as with their number. Doing a lot of seeking on a non-SSD disk makes its performance really bad. I once measured a disk's random read performance with caching disabled -- hopping from one sector to another, reading, then moving on -- and got fifty kilobytes per second on a disk good for 100 MB/s. (That's roughly what you'd expect: at about 10 ms per seek and one 512-byte sector per hop, you get around 50 KB/s.) Using all available RAM is about as bad as turning off caching, by the way. Worse, actually: memory pressure will start pushing out useful things, making programs stall pointlessly while pages of their own code are read back in when needed.
Cache could also explain the differing times your runs take. If significant parts of your file are still in the page cache from a previous run, the next run can go faster.
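If you want timings you can actually compare, one option (Linux-specific; a sketch, assuming Python 3.3+ for os.posix_fadvise) is to evict the input file from the page cache before each run so they all start cold:

Code:
import os

def drop_file_from_cache(path):
    """Ask the kernel to evict this file's pages from the page cache
    (advisory only), so each timing run starts from a cold cache."""
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)  # flush any dirty pages first so they can be dropped
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)  # length 0 = whole file
    finally:
        os.close(fd)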