Is it better/possible to pause the rsyncing of a very large directory?
Possibly a dumb question, but I'm deciding how I'm going to do this. I'm currently rsyncing a 25TB directory (with several layers of subdirectories, most of which hold video files ranging from 500 megs to 4-5 gigs) from one NAS to another using rsync -av. By the time I need to act, ~15TB should have been moved. I need to stop the transfer for ~12 hours. Can I just ^Z the process and come back and fg it (this is running in a screen session), or should I just ^C it, kick it back off, and let rsync figure out what's already been transferred on its own?
Awesome. And there shouldn't be a significant amount of time lost (given the scale of what's happening already) from rsync having to rebuild the file list? That took quite a while the first time, and I assume it'll now have to redo that and also compare against what's already been transferred.
On restart, rsync builds the list of source files and runs its quick check: it compares each file's size and modification time against the copy on the destination. If you already copied 8000 files and those 8000 match what's in the new directory, rsync can figure that out in a few minutes, tops. It then goes on to copy the files it has not already done. That's the part that takes time: rsync has to read each file, send it, write it, and verify the transfer using a checksum.
It really sounds like you need to segment your operation if you want to max out I/O throughput. Of course, if this is production, then you cannot eat the box alive just for rsync.