I am trying to copy files from an SCO OpenServer 5.0.6 system to a NAS server using NFS. I have a cron job that takes 1.5 hours to run, and most of the data is static, so I would like to find a faster way, in case I ever need to run it manually or have to shut the servers down quickly ahead of thunderstorms. Below is a copy of the shell script.
Any help will be much appreciated.
I have been tasked with getting an AIX 4.3.3 box to back up to a NAS appliance that provides NFS service. It is an intermediary repository, so that other tools can transport the resulting backup file to another NAS appliance at a remote site over a secondary frame connection.
Anyone have... (10 Replies)
Dear All
I am using the cat command to concatenate multiple files. Sometimes I also use the append operator (>>) when there are only a few files.
Is there a faster way of concatenating multiple files (60 to 70 files), each of
156 MB or less/more? :)
Thanx (1 Reply)
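For a job like the one above, a single cat invocation with all the parts listed at once is usually about as fast as the filesystem allows; repeated '>>' appends re-open the output file every time. A minimal sketch (the tiny demo files under /tmp are stand-ins for the real 156 MB parts, not from the thread):

```shell
# create small stand-ins for the real parts
mkdir -p /tmp/concat-demo && cd /tmp/concat-demo
printf 'a\n' > part1
printf 'b\n' > part2
printf 'c\n' > part3

# one cat, one output-file open, sequential reads -- faster than a
# loop that appends each file with '>>'
cat part1 part2 part3 > combined
```

At these sizes the job is I/O bound, so the main wins are avoiding per-file shell overhead and keeping the output file open once.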
Hi guys,
I have a Red Hat laptop and a Sun Solaris 8 server networked together.
I created an NFS share on the Sun server and backed up an image of the Red Hat laptop to it.
The hard disk size of the laptop is 40 GB but I have only about 38 GB of free space on the Sun server, so I compressed the image... (9 Replies)
Hi All,
I have some 80,000 files in a directory which I need to rename. Below is the command which I am currently running, and it seems it is taking forever. Is there any way to speed up the command? I have GNU Parallel installed on my... (6 Replies)
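Since the thread mentions GNU Parallel, the same idea can be shown with the more widely available xargs -P: keep several mv processes in flight so per-file syscall latency overlaps. A sketch with throwaway files (the names and the old_/new_ prefixes are assumptions, not from the thread):

```shell
# throwaway demo files standing in for the 80,000 real ones
mkdir -p /tmp/rename-demo && cd /tmp/rename-demo
touch old_1.dat old_2.dat old_3.dat

# -P 4 runs up to four mv processes at once; for a huge directory the
# win is overlapping per-file metadata syscalls, not CPU
printf '%s\n' old_*.dat |
  xargs -P 4 -I{} sh -c 'mv "$1" "new_${1#old_}"' _ {}
```

With GNU Parallel the pipeline tail would be `parallel mv {} new_{= s/^old_// =}` or similar; the parallelism idea is the same.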
Hi Everyone,
we are running rsync in --backup mode. Are there any rsync options to remove the backup folders after a successful deployment?
Thanks in adv. (0 Replies)
Hello,
I was wondering if anyone knows a faster way to search and compare strings and dates from 2 files?
I'm currently using a "for" loop, but it seems sluggish as I have to cycle through 10 directories with 10 files each, containing thousands of lines.
Given:
-10 directories
-10 files... (4 Replies)
Good evening
I'm new to Unix shell scripting and I'm planning to write a script that removes the headers from about 120 files in a directory; each file contains about 200,000
lines on average.
I know I will loop over the files to process each one, and I've found different solutions in this great forum... (5 Replies)
We are taking a backup of our application data (COBOL file system, AIX/Unix) before and after the EOD job runs. The data size is approximately 260 GB in the biggest branch. To reduce the backup time, five parallel executions are scheduled through Control-M, which back up the files into 5 different *.gz. The job... (2 Replies)
I have a very big input file <inputFile1.txt> which has a list of mobile numbers
inputFile1.txt
3434343
3434323
0970978
85233
... around 1 million records
I have another big file, inputFile2.txt, which has some log details
inputFile2.txt
afjhjdhfkjdhfkd df h8983 3434343 | 3483 | myout1 |... (3 Replies)
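For matching a million numbers against a large log, grep -F -f loads all the patterns once and scans the log in a single pass, which is typically far faster than looping over the numbers one grep at a time. A sketch with tiny stand-in files (the contents are made up to mirror the formats shown above):

```shell
# small stand-ins for the two input files
mkdir -p /tmp/lookup-demo && cd /tmp/lookup-demo
printf '3434343\n0970978\n' > inputFile1.txt
printf 'x y h8983 3434343 | 3483 | myout1 |\nx y h1111 9999999 | 3484 | myout2 |\n' > inputFile2.txt

# -F: fixed strings (no regex compilation per pattern)
# -f: read every pattern from a file up front
# -> one pass over the log no matter how many numbers there are
grep -Ff inputFile1.txt inputFile2.txt > matches.txt
```

One caveat: -F matches substrings, so a short number like 85233 could hit inside a longer one; for exact-field matching, an awk hash lookup on the relevant field would be safer.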
Discussion started by: reldb
LEARN ABOUT DEBIAN
rdup-backups
RDUP-BACKUPS(7)                         rdup                        RDUP-BACKUPS(7)
NAME
rdup-backups - introduction into making backups with rdup
INTRODUCTION
rdup is a simple program that prints out a list of files and directories that have changed on a filesystem. It is more sophisticated
than, for instance, find, because rdup will find files that are removed or directories that are renamed.
A long time ago rdup included a bunch of shell and Perl scripts that implemented a backup policy. These could be used in a pipeline to
perform a backup.
Currently rdup consists of three basic utilities:
rdup With rdup you create the file list on which later programs in the pipeline can work. The default output format also includes the
files' content. rdup can be seen as a tar replacement in this respect, but rdup also allows for all kinds of transformations of the
content (encryption, compression, reversal), see the -P switch in rdup(1) for more information.
rdup-tr
With rdup-tr you can transform the files rdup delivers to you. You can create tar, cpio or pax files. You can encrypt pathnames.
rdup-tr is a filter that reads from standard input and writes to standard output. See rdup-tr(1) for more information. With rdup and
rdup-tr you can create an encrypted archive which is put in a directory structure that is also encrypted.
rdup-up
With rdup-up you can update an existing directory structure with the updates as described by rdup.
rdup-up reads rdup input and will create the files, symbolic links, hard links and directories (and sockets, pipes and devices) in
the file system. See rdup-up(1) for more information.
So the general backup pipeline for rdup will look something like this:
create filelist | transform | update filesystem
( rdup | rdup-tr | rdup-up )
Note 1:
The same sequence is used for restoring. In both cases you want to move files from location A to B. The only difference is that the
transformation is reversed when you restore.
Note 2:
The use of rdup-tr is optional.
BACKUPS AND RESTORES
For rdup there is no difference between backups and restores. If you think about this for a minute you understand why.
Making a backup means copying a list of files somewhere else. Restoring files is copying a list of files back to the place they came from.
Same difference. So rdup can be used for both, if you did any transformation with rdup during the backup you just need to reverse those
operations during the restore.
BACKUPS
It is always best to back up to another medium, be it a different local hard disk or an NFS/CIFS-mounted filesystem. You can also use ssh to
store files on a remote server, a la rsync (although not as network efficient).
If you back up to a local disk you can just as well use rsync or plain old tar, but if you store your files on somebody else's disk you will
need encryption. This is where you go beyond rsync and rdup comes in. Rsync cannot do per-file encryption; sure, you can encrypt the network
traffic with ssh, but at the remote side your files are kept in plain view. If you implement remote backups, the easy route is to
allow root access on the backup medium. If the backup runs without root access, the created files will not keep their original ownership.
For NFS this can be achieved with no_root_squash; for ssh you could enable PermitRootLogin. Note that either may be a security risk.
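As an illustration of the no_root_squash route, an /etc/exports entry on the backup server might look like this (the hostname and export path are assumptions, not from the man page):

```shell
# /etc/exports on the backup server: 'backuphost' may write as root;
# no_root_squash preserves original file ownership on the backup,
# at the security cost noted above
/vol/backup  backuphost(rw,no_root_squash,sync)
```

After editing the file, `exportfs -ra` re-reads it on a typical Linux NFS server.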
SNAPSHOT BACKUPS
We need a little help here in the form of the rdup-simple script. Keep in mind that the following scripts can also be run remotely with
the help of ssh.
The following script implements the algorithm of rdup-simple.
#!/bin/bash
# some tmp files are saved in ~/.rdup. This directory must exist
DIR=/home # what to backup
BACKUP=/vol/backup
TODAY=$(date +%Y%m/%d)
LIST=~/.rdup/list-$HOSTNAME
STAMP=~/.rdup/timestamp-$HOSTNAME
# for remote backup, this has to run on the remote host!
# BUGBUG (placeholder in the original): a check belongs here that exits
# 0 for an incremental dump, 1 for a full dump and >1 on error -- for
# instance a hypothetical helper comparing the age of $STAMP against
# the desired full-dump interval:
check_dump_type "$STAMP"
RET=$?
case $RET in
0)
# inc dump
# do nothing here
;;
1)
# full dump, remove file-list and time-stamp file
rm -f $LIST $STAMP
;;
*)
echo Error >&2
exit 1
;;
esac
# this is the place where you want to modify the command line
# right now nothing is translated; we just use 'cat'
rdup -N $STAMP -Pcat $LIST $DIR | rdup-up $BACKUP/$HOSTNAME/$TODAY
# or do a remote backup
#rdup -N $STAMP -Pcat $LIST $DIR | ssh root@remotehost
# rdup-up $BACKUP/$HOSTNAME/$TODAY
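A script like the one above is normally run unattended. A crontab entry for a nightly run could look like this (the installed path and log file are assumptions):

```shell
# m h dom mon dow command -- run the rdup backup nightly at 03:15,
# appending all output to a log for later inspection
15 3 * * * /usr/local/sbin/rdup-backup.sh >> /var/log/rdup-backup.log 2>&1
```

Because $TODAY embeds the date, each nightly run lands in its own directory under $BACKUP/$HOSTNAME.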
LOCAL BACKUPS
With rdup-simple you can easily create backups. Backing up my home directory to a backup directory:
rdup-simple ~ /vol/backup/$HOSTNAME
This will create a backup in /vol/backup/$HOSTNAME/200705/15. So each day will have its own directory. Multiple sources are allowed, so:
rdup-simple ~ /etc/ /var/lib /vol/backup/$HOSTNAME
will back up your home directory, /etc and /var/lib to the backup location. If you also need to compress your backup, simply add a '-z'
switch:
rdup-simple -z ~ /etc/ /var/lib /vol/backup/$HOSTNAME
REMOTE BACKUPS
For a remote backup to work, both the sending machine and the receiving machine must have rdup installed. The currently implemented
protocol is ssh.
Dumping my homedir to the remote server:
rdup-simple ~ ssh://miekg@remote/vol/backup/$HOSTNAME
The syntax is almost identical, only the destination starts with the magic string 'ssh://'. Compression and encryption are just as easily
enabled as with a local backup, just add '-z' and/or a '-k keyfile' argument:
rdup-simple -z -k 'secret-file' ~ ssh://miekg@remote/vol/backup/$HOSTNAME
Remember though, that because of these advanced features (compression, encryption, etc.) the network transfer can never be as efficient
as rsync.
SEE ALSO
rdup(1), rdup-tr(1), rdup-up(1) and http://www.miek.nl/projects/rdup/
1.1.x 15 Dec 2008 RDUP-BACKUPS(7)