Quote:
Originally Posted by aimy
Sorry to make you confused Sir.
Of course the neat output of your script solved my problem.
It isn't a problem; I just wanted to be sure your problem had been fixed. I'm glad my suggestion worked.
Quote:
The one that I showed you is from the original script output. But what is wrong with the script I posted if you can detect it?
The size reported by the lstat() system call for a directory varies with the filesystem type. Some filesystems report the number of files contained in the directory; some report the space needed to hold the i-node numbers and the names of the files contained in the directory; some report the accumulated sizes of the files contained in the directory (which I would guess is what happened in your case); and some report other values. Furthermore, when a file is unlinked from a directory, the size of the directory might or might not shrink. By adding the -d option to ls, the output reports just the directories larger than the size you specified instead of those directories AND the contents of those directories.
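For illustration, the two commands below show the difference; /data is just a placeholder search root (substitute your own directory), and the size limit is the 10,000-block limit from your script:

Code:
# Without -d, ls lists the CONTENTS of each matching directory:
find /data -type d -size +10000 -exec ls -la {} +

# With -d, ls lists the matching directory entries themselves:
find /data -type d -size +10000 -exec ls -lad {} +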
Quote:
And by the way, doesn't that +10000 indicate the byte size? How could it be equivalent to the 512000 bytes?
Sorry, the 512000 was a typo; it should have been 5120000. If you look at the find man page's description of the -size primary, you'll see that the size specified is a number of 512-byte blocks, not a number of bytes. If you want files larger than 10,000 bytes (instead of larger than 10,000 512-byte blocks), use -size +10000c.
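To see the difference between the two units, compare these (placeholder path again):

Code:
# Matches files larger than 10,000 512-byte blocks, i.e. > 5,120,000 bytes:
find /data -type f -size +10000

# The c suffix counts bytes, so this matches files larger than 10,000 bytes:
find /data -type f -size +10000c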
The change from -exec ls -lad {} \; (which causes find to invoke ls once for each file found that meets your size limits) to -exec ls -lad {} + (which causes find to invoke ls with as many arguments as it can without overflowing ARG_MAX limits) just makes your script run a little faster (or, if there are a lot of files meeting your size limits, a lot faster).
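Side by side, the two forms look like this (same placeholder path and a byte-based limit):

Code:
# One ls invocation per matching file -- slow when many files match:
find /data -size +10000c -exec ls -lad {} \;

# Batches as many matching files as ARG_MAX allows into each ls invocation:
find /data -size +10000c -exec ls -lad {} +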