11-07-2002
We lost a fairly large computer center in the World Trade Center disaster. The services that it provided were back on the air within 12 hours. And we didn't lose a byte of data.
That's what disaster recovery is. It's very hard and very expensive.
9 More Discussions You Might Find Interesting
1. Cybersecurity
Hello,
I am trying to put together a disaster recovery plan for my Unix system.
Is there a site where I can find a template for the disaster recovery domain? It would help me to cover the principal chapters and make a good report.
Thanks a lot ........ (5 Replies)
Discussion started by: steiner
2. UNIX for Advanced & Expert Users
Can anyone tell me what to expect? I've been nominated to join a team of Unix admins to do DR testing. We already have the guys who are going to be doing the restores. Besides the restore, does anybody know what else to look forward to? (2 Replies)
Discussion started by: TRUEST
3. UNIX for Dummies Questions & Answers
I am looking into disaster recovery and I wanted to know what files and/or other information I need to keep copies of to successfully restore my system from the ground up. Any help is greatly appreciated. I am running Solaris 8 on an Ultra 60. (5 Replies)
Discussion started by: rod23
4. Solaris
Recovering Solaris to an alternate server
I was just wondering if anyone could give me some pointers on restoring a Solaris 9 backup to an alternate server. Basically, we use NetBackup 6, and I was wondering what the best procedures are for doing this? What things do we need to take into... (3 Replies)
Discussion started by: aaron2k
5. AIX
Are there any products out there that provide a disk imaging solution for AIX (and HPUX and Solaris for that matter)? In a development environment where users are looking to restore an OS quickly back to a certain point in time, what is there available for this besides opening up the system,... (7 Replies)
Discussion started by: tb0ne
6. Solaris
Hello everyone, I am Kevin and new to this forum.
I have encountered an issue I can't seem to resolve. I am currently using Solaris 8 02/04 on Sun V240 servers. I know how to create a flar image of the server and restore it using NFS (network server) or Local Tape (tape drive). What I need to do... (2 Replies)
Discussion started by: Kevin1166
7. AIX
Hi Guys,
is it possible to fail over an HACMP cluster in one datacentre via SRDF to a single node in another datacentre, or do I need a cluster there in any case? This is only meant as a worst-case scenario, and my company doesn't want to spend more money than absolutely necessary.
I know the... (3 Replies)
Discussion started by: zxmaus
8. UNIX for Dummies Questions & Answers
We have a SCO OpenServer Unix server that has been damaged. Fortunately, we have a good backup of the entire system (using BackupEdge). On a new server, if we install SCO from the original SCO CDs (we have all necessary activation codes) and then drop the tape (we can restore with tar), will the... (3 Replies)
Discussion started by: jmhohne
9. Red Hat
Hi,
I just want to throw something out there for opinions and viewpoints relating to a Disaster Recovery site.
Besides the live production environment, do you think a DR environment should include:
- pre-production environment
- QA Environment
......or would this be considered to be OTT... (3 Replies)
Discussion started by: Duffs22
LEARN ABOUT OSF1
db_archive
db_archive(8) System Manager's Manual db_archive(8)
NAME
db_archive - displays security database log files no longer involved in active transactions (Enhanced Security)
SYNOPSIS
/usr/tcb/bin/db_archive [-alsv] [-h home]
FLAGS
  -a       Write all pathnames as absolute pathnames, instead of relative to the database home directories.
  -h home  Specify a home directory for the database. The correct directory for enhanced security is /var/tcb/files.
  -l       Write out the pathnames of all of the database log files, whether or not they are involved in active transactions.
  -s       Write the pathnames of all of the database files that need to be archived in order to recover the database from catastrophic
           failure. If any of the database files have not been accessed during the lifetime of the current log files, db_archive does not
           include them in this output. It is possible that some of the files referenced in the log have since been deleted from the
           system; in this case, db_archive ignores them. When db_recover is run, any files referenced in the log that are not present
           during recovery are assumed to have been deleted and are not recovered.
  -v       Run in verbose mode, listing the checkpoints in the log files as they are reviewed.
DESCRIPTION
A customized version of the Berkeley Database (Berkeley DB) is embedded in the operating system to provide high-performance database
support for critical security files. The DB includes full transactional support and database recovery, using write-ahead logging and
checkpointing to record changes.
The db_archive utility is provided for maintenance of the log files associated with the security database. It writes the pathnames of log
files that are no longer in use (that is, no longer involved in active transactions), to the standard output, one pathname per line. These
log files should be written to backup media to provide for recovery in the case of catastrophic failure (which also requires a snapshot of
the database files), but they may then be deleted from the system to reclaim disk space. You should perform a db_checkpoint -1 before
using db_archive.
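The checkpoint, copy-to-backup, then delete cycle described above can be sketched as a small shell function. This is an illustrative sketch, not part of the manual: only the /usr/tcb/bin paths and the -1, -a, and -h flags come from this page, while the backup destination and function name are assumptions.

```shell
#!/bin/sh
# Sketch of the log-archiving cycle described above. The /usr/tcb/bin
# paths and flags are from this manual page; the backup directory is a
# hypothetical destination.
archive_security_logs() {
    db_home=${1:-/var/tcb/files}
    backup_dir=${2:-/backup/dblogs}    # hypothetical backup mount point

    # Checkpoint first, as advised above, so the current state is
    # flushed to the log before the old log files are retired.
    /usr/tcb/bin/db_checkpoint -1 -h "$db_home" || return 1

    # db_archive -a prints one absolute pathname per line for each log
    # file no longer involved in active transactions: copy each to
    # backup media, then delete it to reclaim disk space.
    /usr/tcb/bin/db_archive -a -h "$db_home" | while read -r logfile; do
        cp "$logfile" "$backup_dir"/ && rm -f "$logfile"
    done
}
```

Copying before deleting preserves the logs that, together with a snapshot of the database files, db_recover needs for recovery from catastrophic failure.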
The secconfig utility can create a cron job that periodically checks the security log files and deletes those no longer in use, as
determined by db_archive. Be sure to coordinate this with the site backup schedule.
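The cron entry secconfig creates might look roughly like the following; the schedule and the exact command are assumptions here, and only the db_archive invocation and the /var/tcb/files path come from this page.

```shell
# Hypothetical crontab entry: at 02:00 daily, delete security log files
# that db_archive reports as no longer in use. Schedule this after the
# site backup so the logs reach backup media before they are removed.
0 2 * * * /usr/tcb/bin/db_archive -h /var/tcb/files | xargs rm -f
```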
The db_archive utility attaches to one or more of the Berkeley DB shared memory regions. In order to avoid region corruption, it should
always be given the chance to detach and exit gracefully. To cause db_archive to clean up after itself and exit, send it an interrupt
signal (SIGINT).
RETURN VALUES
The db_archive utility exits 0 on success, and >0 if an error occurs.
ENVIRONMENT VARIABLES
If the -h option is not specified and the environment variable DB_HOME is set, it is used as the path of the database home. The home
directory for security is /var/tcb/files.
FILES
/var/tcb/files/auth.db
/var/tcb/files/dblogs/*
RELATED INFORMATION
Commands: db_checkpoint(8), db_dump(8), db_load(8), db_printlog(8), db_recover(8), db_stat(8), secconfig(8)