Filesystems, Disks and Memory: Life span of HDD - maximum reads/writes etc. Post 302378386 by Azhrei, Monday 7 December 2009, 07:05:54 PM
From an engineering standpoint, yes. It would have to.

Most MTBF ratings are based on a standard duty cycle, such as 30% utilization. If you keep the drive busier than that, you can expect its lifetime to decrease accordingly.

Heavy usage over a long period is more likely to result in a thermal failure than anything else. The magnetic surface is good for a finite number of write operations, and the servo motors are good for a finite number of direction reversals and total distance traveled.
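As a rough illustration of that duty-cycle scaling (the numbers below are made up, not taken from any vendor datasheet, and linear scaling is only a crude model):

```shell
#!/bin/sh
# Back-of-envelope: scale a rated MTBF by the duty-cycle assumption.
# All numbers are illustrative, not vendor figures.
RATED_MTBF_HOURS=1000000    # rated assuming a 30% duty cycle
RATED_DUTY=30               # percent of time busy, per the rating
ACTUAL_DUTY=90              # percent of time busy on your server

# Crude linear model: tripling the duty cycle roughly cuts the
# expected service life to a third.
SCALED=$((RATED_MTBF_HOURS * RATED_DUTY / ACTUAL_DUTY))
echo "expected hours at ${ACTUAL_DUTY}% duty: $SCALED"
```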

I would suggest that you have multiple spare drives ready to go (most shops do this anyway, out of a need for the shortest possible downtime). You could of course use some form of RAID that spreads the usage out over multiple drives. Using a form of RAID that provides redundancy also means that a drive failure doesn't immediately impact your uptime, either.

Another option is to use rsync or a similar tool that copies only new files, or only those parts of files that have changed, reducing the overall number of writes to the drive. The catch with rsync is that it has to read each file first and calculate block checksums, which will have an impact on overall access to the drives if they are configured for concurrent use.
 

RDUP-BACKUPS(7) 						       rdup							   RDUP-BACKUPS(7)

NAME
rdup-backups - introduction to making backups with rdup

INTRODUCTION
rdup is a simple program that prints a list of files and directories that have changed on a filesystem. It is more sophisticated than, for instance, find, because rdup will also find files that are removed or directories that are renamed. A long time ago rdup included a bunch of shell and Perl scripts that implemented a backup policy; these could be used in a pipeline to perform a backup. Currently rdup consists of three basic utilities:

rdup
    With rdup you create the file list on which later programs in the pipeline can work. The default output format also includes the files' content, so rdup can be seen as a tar replacement in this respect, but rdup also allows for all kinds of transformations of the content (encryption, compression, reversal); see the -P switch in rdup(1) for more information.

rdup-tr
    With rdup-tr you can transform the files rdup delivers to you. You can create tar, cpio or pax files, and you can encrypt pathnames. rdup-tr is a filter that reads from standard input and writes to standard output. See rdup-tr(1) for more information. With rdup and rdup-tr you can create an encrypted archive which is put in a directory structure that is also encrypted.

rdup-up
    With rdup-up you can update an existing directory structure with the updates as described by rdup. rdup-up reads rdup input and will create the files, symbolic links, hard links and directories (and sockets, pipes and devices) in the file system. See rdup-up(1) for more information.

So the general backup pipeline for rdup will look something like this:

    create filelist | transform | update filesystem
    (     rdup      | rdup-tr   | rdup-up          )

Note 1: the same sequence is used for restoring. In both cases you want to move files from location A to B; the only difference is that the transformation is reversed when you restore.

Note 2: the use of rdup-tr is optional.

BACKUPS AND RESTORES
For rdup there is no difference between backups and restores. If you think about this for a minute you can see why: making a backup means copying a list of files somewhere else, and restoring means copying a list of files back to the place they came from. The operation is the same. So rdup can be used for both; if you applied any transformation during the backup, you just need to reverse those operations during the restore.

BACKUPS
It is always best to back up to another medium, be it a different local hard disk or an NFS/CIFS-mounted filesystem. You can also use ssh to store files on a remote server, much like rsync (although not as network-efficient). If you back up to a local disk you could just as well use rsync or plain old tar, but if you store your files on somebody else's disk you will need encryption. This is where you go beyond rsync and where rdup comes in: rsync cannot do per-file encryption. You can encrypt the network traffic with ssh, but at the remote side your files are still kept in plain view.

If you implement remote backups, the easy route is to allow root access on the backup medium. If the backup runs without root access, the created files will not have their original ownership. For NFS this can be achieved by using no_root_squash; for ssh you could enable PermitRootLogin. Note that either may be a security risk.

SNAPSHOT BACKUPS
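As a sketch, the two settings mentioned above look like this (the client hostname and export path are hypothetical, and both options weaken security, so scope them as narrowly as possible):

```
# /etc/exports on the NFS server: no_root_squash lets root on the
# backup client create files that keep their original ownership.
/vol/backup    backupclient(rw,sync,no_root_squash)

# /etc/ssh/sshd_config on the ssh backup target:
PermitRootLogin yes
```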
We need a little help here in the form of the rdup-simple script. Keep in mind that the following script can also be run remotely with the help of ssh. The following script implements the algorithm of rdup-simple:

    #!/bin/bash
    # some tmp files are saved in ~/.rdup. This directory must exist
    DIR=/home                            # what to backup
    BACKUP=/vol/backup
    TODAY=$(date +%Y%m/%d)
    LIST=~/.rdup/list-$HOSTNAME
    STAMP=~/.rdup/timestamp-$HOSTNAME

    # for remote backup, this has to run on the remote host! BUGBUG
    RET=$?
    case $RET in
        0)  # inc dump, do nothing here
            ;;
        1)  # full dump, remove file-list and time-stamp file
            rm $LIST $STAMP
            ;;
        *)  echo Error >&2
            exit 1
            ;;
    esac

    # this is the place where you want to modify the command line;
    # right now nothing is translated, we just use 'cat'
    rdup -N $STAMP -Pcat $LIST $DIR | rdup-up $BACKUP/$HOSTNAME/$TODAY

    # or do a remote backup
    #rdup -N $STAMP -Pcat $LIST $DIR | ssh root@remotehost \
    #    rdup-up $BACKUP/$HOSTNAME/$TODAY

LOCAL BACKUPS
With rdup-simple you can easily create backups. Backing up my home directory to a backup directory:

    rdup-simple ~ /vol/backup/$HOSTNAME

This will create a backup in /vol/backup/$HOSTNAME/200705/15, so each day gets its own directory. Multiple sources are allowed:

    rdup-simple ~ /etc/ /var/lib /vol/backup/$HOSTNAME

will back up your home directory, /etc and /var/lib to the backup location. If you need to compress your backup, simply add a '-z' switch:

    rdup-simple -z ~ /etc/ /var/lib /vol/backup/$HOSTNAME

REMOTE BACKUPS
For a remote backup to work, both the sending machine and the receiving machine must have rdup installed. The currently implemented protocol is ssh. Dumping my home directory to the remote server:

    rdup-simple ~ ssh://miekg@remote/vol/backup/$HOSTNAME

The syntax is almost identical; only the destination starts with the magic string 'ssh://'. Compression and encryption are just as easily enabled as with a local backup, just add '-z' and/or a '-k keyfile' argument:

    rdup-simple -z -k 'secret-file' ~ ssh://miekg@remote/vol/backup/$HOSTNAME

Remember though, that because of these advanced features (compression, encryption, etc.) the network transfer can never be as efficient as rsync's.

SEE ALSO
rdup(1), rdup-tr(1), rdup-up(1) and http://www.miek.nl/projects/rdup/

1.1.x                           15 Dec 2008                     RDUP-BACKUPS(7)