Normally when a disaster recovery (DR) is performed, there is nothing already installed on the system, i.e., bare metal. So starting with a mirror image already installed is unorthodox. Also, if the image is old, it may contain files which have since been deleted from the live system.
To avoid repeating myself, read this from one of my posts some time ago (I couldn't get a link to work for whatever reason, so it's cut and pasted):
Quote:
Right then, I get the question; there could be many answers and every professional could have a different opinion. You asked for ideas to be shared so here goes.
The scenario I've faced many times is that I have a very big system with lots of non-root filesystems and tons of storage. Minimal downtime is critical, but suddenly the system won't boot. I just want to get the system on its feet so that I can take a look around.
Backup:
1. Create a NFS share on a remote system and mount it
2. Pick a fairly quiescent time and 'fssnap' the root filesystem (to freeze it temporarily), sending the backing store (the 'backing_store' switch on fssnap) to one of your other local filesystems.
3. Run 'ufsdump' to backup the whole filesystem to the NFS storage.
4. Make a note of your IP interface name (eg, e1000g0 or whatever)
5. Make a note of all the VTOCs
Note that you need to gauge the frequency of the backup because, in the event of a recovery, any newer users, groups, security changes and patches will be missing. A command-level sketch of these backup steps follows below.
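By way of illustration only, here is roughly what those five steps might look like at the command line. The host 'nfshost', the share '/export/dr' and the disk 'c0t0d0' are placeholder names I've made up, not anything definitive; substitute your own.
Code:
# 1. mount the NFS share (assumes nfshost already exports it,
#    eg via: share -F nfs -o rw /export/dr)
mount -F nfs nfshost:/export/dr /mnt

# 2. snapshot root; the backing store must live on a different
#    local filesystem (fssnap prints the snapshot device name)
fssnap -F ufs -o bs=/export/home/snapstore /

# 3. ufsdump the (read-only) snapshot device to the NFS storage
ufsdump 0f /mnt/root.ufsdump /dev/rfssnap/0

# 4. note the IP interface name for later
ifconfig -a

# 5. record the VTOC of each disk
prtvtoc /dev/rdsk/c0t0d0s2 > /mnt/c0t0d0.vtoc

# tidy up: remove the snapshot once the dump is complete
fssnap -d /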
Recovery:
Suddenly the system won't boot so.....
1. Boot from CD into single user:
Code:
boot cdrom -s
2. Use 'format' to check disk visibility and the slicing of the disks.
3. After confirming that local recovery (eg, fsck, etc) will not fix the issue, 'newfs' the root disk slice. Root filesystem is now empty.
4. Mount the new empty root filesystem under /a
5. Use 'ifconfig' to manually 'plumb', 'address' and 'up' your network interface.
6. Check that you can ping the NFS node holding your ufsdump(s)
7. Mount the remote NFS storage under /mnt
8. Change directory to the top of your empty hard disk root (ie, /a)
9. 'ufsrestore' the backup from the NFS storage to the root hard disk
10. 'sync' and 'umount' the NFS storage and the root hard disk and do an orderly shutdown.
11. System should now boot.
Note that if your /usr filesystem is separate from the root filesystem, you should consider backing that up for emergency recovery too, since the recovered system will probably go into maintenance mode if it cannot mount /usr. A command-level sketch of the recovery steps follows below.
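As a worked illustration of steps 2 to 10, again with placeholder device, interface and host names (c0t0d0s0, e1000g0, nfshost) rather than anything definitive:
Code:
# 2./3. check disk visibility, then recreate an empty root filesystem
format
newfs /dev/rdsk/c0t0d0s0

# 4. mount the new empty root under /a
mount /dev/dsk/c0t0d0s0 /a

# 5./6. bring the network up by hand and check connectivity
ifconfig e1000g0 plumb
ifconfig e1000g0 192.168.1.50 netmask 255.255.255.0 up
ping nfshost

# 7. mount the remote NFS storage
mount -F nfs nfshost:/export/dr /mnt

# 8./9. restore into the empty root
cd /a
ufsrestore rf /mnt/root.ufsdump
rm restoresymtable

# not in the list above, but on SPARC you will usually also need
# to reinstall the bootblock after newfs'ing the boot slice:
installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

# 10. sync, unmount and shut down in an orderly fashion
cd /
sync
umount /mnt
umount /a
init 0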
As I say, all professionals have their own opinion and you may well get a torrent of alternatives posted to this thread. You may also have further questions about what I have written above. Feel free to ask.
You could, of course, use 'flarcreate' to create a flash archive of just your root filesystem, and that is certainly a good option. The above is just the method that I have used on Solaris 10 with ufs. You can, of course, test your recovery procedure by using a dummy root slice elsewhere on the system (not the real one).
This was referring to Solaris 10 but equally applies to Solaris 9 with ufs.
If the hardware platform you are recovering to is not identical (processor type, disk controllers, network interfaces, etc), then some adjustment may be needed after ufsrestore: modify /etc/vfstab, modify /etc/system, create new device nodes (/dev/dsk/xxxxx, for new filesystem locations), plumb in new network interfaces, and so on.
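For example (placeholder device names again), from the CD-booted environment with the restored root still mounted under /a, you might do something like this:
Code:
# point vfstab at the new controller/target numbers,
# eg change c0t0d0s0 entries to c1t1d0s0
vi /a/etc/vfstab

# force a reconfiguration boot so the device tree is rebuilt
touch /a/reconfigure
# (or, equivalently, use 'boot -r' from the OpenBoot prompt)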
NOTE: If your /usr filesystem is separate from root, then you will need to restore that too, otherwise the system will go into maintenance mode when it boots.
There's no substitute whatsoever for actually doing it yourself and asking the questions as you go. We're here to help.
Also, for DR planning, 'flarcreate' is the more supported means of doing DR; a minimal sketch follows below.
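As an illustrative sketch only (the archive name and paths are made up; this assumes the NFS storage is mounted at /mnt):
Code:
# create a compressed flash archive of the running system,
# excluding the NFS mount so the archive does not recurse into itself
flarcreate -n rootfs-dr -c -x /mnt /mnt/rootfs.flar

# verify the archive's identification section
flar info /mnt/rootfs.flar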