Quote:
Originally Posted by amro1
We DO need from time to time to defrag UNIX-related filesystems. The problem is that unlike NTFS (which is a VERY GOOD, well-designed file system, originating in the absolutely terrific OpenVMS)
MS must have made an absolute hash of it, then, because, as I've said, I've seen it fragment terribly on drives only 50% full. It certainly doesn't seem to make any effort to obey your ideal of squashing everything at the head of the drive, either.
Quote:
However, if you're on a single-drive PC with Linux on it and do something that makes a lot of small files, then remove them, and do it over again, your system will be fragmented as hell.
I use a distro that keeps 1.9 gigs of metadata in a tree of 100,000 tiny files with frequent replacement. I've seen ReiserFS fragment badly on that (the files didn't fragment, but the directories themselves did, leading to very slow ls), but not the more common Linux filesystems.
Quote:
By doing that, the restored files fill the drive one after another, no gaps and no interleaving.
You've got an odd idea of fragmentation. It doesn't mean "all files in one giant clump at the start of the drive" -- that's a recipe
for fragmentation: growing files have no room to expand, and get scattered into pieces when they do.
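The effect is easy to see with a toy block allocator (a hypothetical simulation I made up for illustration -- real filesystems like ext4 use far more sophisticated policies, but the principle is the same): pack files back to back and the first one to grow gets split; leave slack between files and growth stays contiguous.

```python
def allocate(disk, name, nblocks):
    """First-fit allocator: grab the lowest free blocks, left to right."""
    used = []
    for i, owner in enumerate(disk):
        if owner is None:
            disk[i] = name
            used.append(i)
            if len(used) == nblocks:
                break
    return used

def extents(blocks):
    """Number of contiguous runs a file occupies (1 = unfragmented)."""
    return 1 + sum(1 for a, b in zip(blocks, blocks[1:]) if b != a + 1)

# Case 1: files squashed together at the head of the disk, no gaps.
disk = [None] * 24
a = allocate(disk, "a", 4)   # blocks 0-3
allocate(disk, "b", 4)       # blocks 4-7
allocate(disk, "c", 4)       # blocks 8-11
a += allocate(disk, "a", 3)  # "a" grows; the next free block is 12
print(extents(a))            # → 2: the new blocks can't adjoin the old ones

# Case 2: the allocator leaves slack after each file instead.
disk = [None] * 24
for start, name in ((0, "a"), (8, "b"), (16, "c")):
    for i in range(start, start + 4):
        disk[i] = name
a2 = [0, 1, 2, 3] + allocate(disk, "a", 3)  # growth lands in blocks 4-6
print(extents(a2))           # → 1: still one contiguous extent
```

Case 2 is roughly why the ext family spreads files out rather than squashing them at the head of the drive.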
And there are certainly better alternatives to dumping and restoring the entire filesystem, like
shake.