This would take a while, but: tar the contents of the mounted directory and then compress the archive (with your favorite compression). Run a digest against it to get your hash, copy the archive to the other drive, then run the digest again to verify your copy.
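The workflow above can be sketched in shell. The paths here are throwaway temp directories standing in for the real mount points, and sha256sum stands in for whichever digest you prefer:

```shell
# Throwaway temp dirs stand in for the real mount points.
SRC=$(mktemp -d)    # the NFS-mounted source directory
DRIVE=$(mktemp -d)  # the other drive
WORK=$(mktemp -d)   # scratch space for the archive
echo "sample data" > "$SRC/file.txt"

# tar the contents of the mounted directory, then compress
tar -cf - -C "$SRC" . | gzip > "$WORK/backup.tar.gz"

# run a digest to get the hash of the archive
ORIG=$(sha256sum "$WORK/backup.tar.gz" | awk '{print $1}')

# copy to the other drive
cp "$WORK/backup.tar.gz" "$DRIVE/"

# run the digest again on the copy to verify it
COPY=$(sha256sum "$DRIVE/backup.tar.gz" | awk '{print $1}')
[ "$ORIG" = "$COPY" ] && echo "copy verified"
```

Matching digests only prove the copy equals the archive; if you want end-to-end assurance, verify the archive extracts cleanly before trusting either copy.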
More importantly, where are these NFS mounts? Are they servers or filers? There is no need to take on the overhead of NFS and another network stack just to copy files between two servers/filers.
I have thought of different ways to integrity-check the backup and am looking for the fastest approach before I start programming.
All of these approaches use randomness.
I would appreciate it if someone could give more suggestions or correct me.
1- Machine Name Check: We can check whether the machines were... (5 Replies)
How can I ensure the folder that I tar and compress is good to be archived to DVD or tape? Must I uncompress and untar the file, or is there a way to tell the integrity of the compressed file before sending it to archive? I have had a bad experience with this, where the archived compressed file could not be... (2 Replies)
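One common answer: you do not need to fully extract. `gzip -t` tests the integrity of the compressed stream, and listing the tar with `tar -tzf` walks every member header end to end. A self-contained sketch using a throwaway archive:

```shell
# Build a small throwaway archive to demonstrate on.
TMP=$(mktemp -d)
echo "payload" > "$TMP/f.txt"
tar -czf "$TMP/archive.tar.gz" -C "$TMP" f.txt

# Test the gzip layer without writing any extracted output
gzip -t "$TMP/archive.tar.gz" && echo "gzip stream OK"

# List the tar contents; this reads through the whole archive
tar -tzf "$TMP/archive.tar.gz" > /dev/null && echo "tar structure OK"
```

Note that this only proves the archive itself is intact; recording a checksum of the archive before writing it to DVD or tape lets you re-verify the media copy later.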
I need to copy 100 GB of data to another Solaris server. Could anyone help me with an appropriate way of checking data integrity at source and destination, so that I can then delete the data at the source location? How can I print/check the cksum of each individual file in every folder and match it with... (7 Replies)
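A sketch of the per-file approach usually suggested for this: walk each tree with find, cksum every file, and diff the sorted manifests. Temp dirs stand in for the two servers here; across real machines you would run each half locally and compare the two manifest files:

```shell
# Temp dirs stand in for the source and destination trees.
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir "$SRC/sub"
echo "abc" > "$SRC/a.txt"
echo "def" > "$SRC/sub/b.txt"
cp -r "$SRC/." "$DST/"   # stands in for the 100 GB transfer

# cksum every regular file, with paths relative to the tree root
manifest() {
  ( cd "$1" && find . -type f -exec cksum {} \; | sort )
}

manifest "$SRC" > "$SRC.sums"
manifest "$DST" > "$DST.sums"

# identical manifests mean the source is safe to delete
diff "$SRC.sums" "$DST.sums" && echo "trees match"
```

cksum is a POSIX utility, so the same manifest command works on both Solaris and Linux; only compare the manifests after sorting, since find's traversal order is not guaranteed.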
I have a CentOS 5 box running Apache, and I want to install a powerful file integrity checker with a recovery option, to track any changes that may happen without my involvement.
Could you help me by recommending such a solution?
Thanks (3 Replies)
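The usual recommendations here are AIDE or Tripwire. As a minimal illustration of the idea they implement (not a replacement for them): record a checksum baseline of the watched tree, then re-scan and diff against it later. The watched directory below is a throwaway temp dir standing in for something like the Apache docroot:

```shell
WATCH=$(mktemp -d)   # stands in for e.g. /var/www/html
echo "index" > "$WATCH/index.html"

scan() {
  # checksum every file under the watched tree
  ( cd "$WATCH" && find . -type f -exec sha256sum {} \; | sort )
}

scan > "$WATCH.baseline"   # record the known-good state

# ... later: re-scan and compare against the baseline
scan > "$WATCH.current"
diff "$WATCH.baseline" "$WATCH.current" && echo "no changes detected"
```

Keeping a copy of the baselined files alongside the checksum database is what gives you the "recovery option": on a mismatch, restore the known-good version. The real tools also protect the baseline itself (e.g. by storing it off-box or signing it), which a plain script does not.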
(1) I would like to know of any Unix/Linux command to check the EOF char in a file.
(2) Or, is there any way I can check that a file has arrived completely at machine B from machine A? Note that machine A ftp/scp's the file to machine B at an unknown time. (5 Replies)
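On (1), Unix files have no EOF character; the file's length lives in filesystem metadata, so there is nothing in the data to look for. On (2), the standard check is to run the same digest on both machines after the transfer and compare. A local simulation, where cp stands in for the ftp/scp step and md5sum for whichever digest both machines have:

```shell
A=$(mktemp)   # the file on machine A
echo "transfer payload" > "$A"

B=$(mktemp)   # the copy that lands on machine B
cp "$A" "$B"  # stands in for the ftp/scp transfer

# run the same digest on each machine and compare the results
SUM_A=$(md5sum "$A" | awk '{print $1}')
SUM_B=$(md5sum "$B" | awk '{print $1}')
[ "$SUM_A" = "$SUM_B" ] && echo "transfer complete"
```

Since the arrival time is unknown, a common convention is to have the sender upload to a temporary name and rename to the final name only when the transfer finishes; the rename is atomic, so machine B never sees a partial file under the expected name.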