01-25-2002
Quote:
Originally posted by refram
actually has pieces falling off of it, and they haven't been able to get the backup to run for 2 years!!!!! They really need to get rid of it. I would be glad to let them continue to use UNIX with a
Whoa... No backups for two years?
I do not think that I would want clients who were not concerned about their data.
What form is the data in? Database, flatfile, some binary proprietary format?
Is it possible to extract the old data and convert it into the new format? I am thinking no because you have stated that the original creator of the program is long gone. This must be a modern replacement by another company that uses a different data format?
Questions, questions, and even more questions.
EXPIRE_BACKUPS(1) S3QL EXPIRE_BACKUPS(1)
NAME
expire_backups - Intelligently expire old backups
SYNOPSIS
expire_backups [options] <age> [<age> ...]
DESCRIPTION
The expire_backups command intelligently removes old backups that are no longer needed.
To define what backups you want to keep for how long, you define a number of age ranges. expire_backups ensures that you will have at least
one backup in each age range at all times. It will keep exactly as many backups as are required for that and delete any backups that become
redundant.
Age ranges are specified by giving a list of range boundaries in terms of backup cycles. Every time you create a new backup, the existing
backups age by one cycle.
Example: when expire_backups is called with the age range definition 1 3 7 14 31, it will guarantee that you always have the following
backups available:
1. A backup that is 0 to 1 cycles old (i.e., the most recent backup)
2. A backup that is 1 to 3 cycles old
3. A backup that is 3 to 7 cycles old
4. A backup that is 7 to 14 cycles old
5. A backup that is 14 to 31 cycles old
Note If you do backups in fixed intervals, then one cycle will be equivalent to the backup interval. The advantage of specifying the age
ranges in terms of backup cycles rather than days or weeks is that it allows you to gracefully handle irregular backup intervals.
Imagine that for some reason you do not turn on your computer for one month. Now all your backups are at least a month old, and if
you had specified the above backup strategy in terms of absolute ages, they would all be deleted! Specifying age ranges in terms of
backup cycles avoids this sort of problem.
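The retention rule above can be sketched in Python. This is a hypothetical re-implementation for illustration, not the actual S3QL code; in particular, keeping the oldest backup within each range is an assumption about how coverage is maintained as backups age from one range into the next:

```python
def backups_to_keep(ages, boundaries):
    # ages: cycle ages of the existing backups (1 = most recent)
    # boundaries: the age-range limits, e.g. [1, 3, 7, 14, 31],
    # defining the ranges (0,1], (1,3], (3,7], (7,14], (14,31]
    keep = set()
    lower = 0
    for upper in boundaries:
        in_range = [a for a in ages if lower < a <= upper]
        if in_range:
            # keep one backup per range; here, the oldest in the range
            keep.add(max(in_range))
        lower = upper
    return keep

# With backups of every age from 1 to 31 cycles, only five survive:
print(sorted(backups_to_keep(range(1, 32), [1, 3, 7, 14, 31])))
# [1, 3, 7, 14, 31]
```

Anything older than the last boundary, or redundant within a range, becomes a deletion candidate.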
expire_backups usage is simple. It requires backups to have names of the form year-month-day_hour:minute:seconds (YYYY-MM-DD_HH:mm:ss) and
works on all backups in the current directory. So for the above backup strategy, the correct invocation would be:
expire_backups.py 1 3 7 14 31
When storing your backups on an S3QL file system, you probably want to specify the --use-s3qlrm option as well. This tells expire_backups
to use the s3qlrm command to delete directories.
expire_backups uses a "state file" to keep track of which backups are how many cycles old (since this cannot be inferred from the dates
contained in the directory names). The standard name for this state file is .expire_backups.dat. If this file gets damaged or deleted,
expire_backups no longer knows the ages of the backups and refuses to work. In this case you can use the --reconstruct-state option to try
to reconstruct the state from the backup dates. However, the accuracy of this reconstruction depends strongly on how rigorous you have been
with making backups (it is only completely correct if the time between subsequent backups has always been exactly the same), so it is
generally a good idea not to tamper with the state file.
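A rough sketch of what --reconstruct-state has to do, assuming perfectly regular backup intervals. This is hypothetical code, not the S3QL implementation; it simply ranks the dated directory names newest-first and uses the rank as the cycle age:

```python
from datetime import datetime

def reconstruct_ages(names):
    # names follow YYYY-MM-DD_HH:mm:ss; rank backups newest-first and
    # use the rank as the cycle age (1 = most recent backup).
    # Only exact if the backup interval was always the same.
    by_date = sorted(names,
                     key=lambda n: datetime.strptime(n, "%Y-%m-%d_%H:%M:%S"),
                     reverse=True)
    return {name: rank for rank, name in enumerate(by_date, start=1)}

ages = reconstruct_ages(["2014-08-01_03:00:00",
                         "2014-08-15_03:00:00",
                         "2014-08-08_03:00:00"])
print(ages["2014-08-15_03:00:00"])  # 1 -- the newest backup
```

Irregular intervals are exactly where this rank-based guess drifts from the true cycle ages, which is why the man page warns against losing the state file in the first place.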
OPTIONS
The expire_backups command accepts the following options:
--quiet
be really quiet
--debug
activate debugging output
--version
just print program version and exit
--state <file>
File to save state information in (default: ".expire_backups.dat")
-n Dry run. Just show which backups would be deleted.
--reconstruct-state
Try to reconstruct a missing state file from backup dates.
--use-s3qlrm
Use s3qlrm command to delete backups.
EXIT STATUS
expire_backups returns exit code 0 if the operation succeeded and 1 if some error occurred.
SEE ALSO
expire_backups is shipped as part of S3QL, http://code.google.com/p/s3ql/.
COPYRIGHT
2008-2011, Nikolaus Rath
1.11.1 August 27, 2014 EXPIRE_BACKUPS(1)