Backup of files using NFS a faster way


 
# 1  
Old 02-27-2016

Hi All!

I am trying to copy files from a SCO OpenServer 5.0.6 system to a NAS server using NFS. I have a cron job that takes 1.5 hours to run, even though most of the data is static, and I would like to find a faster way, both for when I need to run it manually and for when I have to take the servers down quickly because of thunderstorms. Below is a copy of the shell script.

Any help will be much appreciated.


Code:
# ********************************************************************
# *
# * Script Name : sysbck
# *
# * Description : Performs system backup to a remote NFS share
# *
# *********************************************************************
# *   Copyright 2015-20xx by Trolley Computers
# *********************************************************************
#!/bin/sh
# *********************************************************************
# *	V A R I A B L E S
# *********************************************************************

# *********************************************************************
# *	F U N C T I O N S
# *********************************************************************

# *********************************************************************
# *	E R R O R   H A N D L I N G
# *********************************************************************

# *********************************************************************
# *	M A I N   S C R I P T
# *********************************************************************

    echo ==============================
    date
    echo ==============================
#
# Mount Remote NFS Share
#
    echo ""
    echo "Mount Remote NFS Share"
    echo ""

    /etc/mount /mnt/pmsroot
    sleep 3

    echo ""
    echo ""
    df -v

#
# Remove Specific Directories
#
    echo ""
    echo "Remove Specific Directories"
    echo ""

    cd /mnt/pmsroot

    rm -fr /mnt/pmsroot/bin
    rm -fr /mnt/pmsroot/ecs
    rm -fr /mnt/pmsroot/angrist
    rm -fr /mnt/pmsroot/fercho
    rm -fr /mnt/pmsroot/huppert
    rm -fr /mnt/pmsroot/nena
    rm -fr /mnt/pmsroot/sc60
    rm -fr /mnt/pmsroot/trolley

    echo ""
    lc
#
# Backup Files
#
    echo ""
    echo "Backup Files..."
    echo ""

    cp -Rp /u2/* /mnt/pmsroot/.

    echo ""
    lc
#
# Un-Mount Remote NFS Share
#
    cd
    echo ""
    echo "Un-Mount Remote NFS Share"
    echo ""

    /etc/umount /mnt/pmsroot

    echo ==============================
    date
    echo ==============================

# *********************************************************************
# *	E X I T   S C R I P T
# *********************************************************************


Last edited by Scrutinizer; 02-29-2016 at 02:57 PM.. Reason: NSF-> NFS
# 2  
Old 02-28-2016
Well, the only way to speed up the process is not to copy the entire hierarchy every time the script is run.

Have you considered using rsync? Since the data is mostly static, it will only copy the changes on each run.
Using rsync, the script will probably be a couple of lines long.

How big is the directory (size and number of files), and how much bandwidth do you have?

Perhaps you could also speed up the process with various NFS mount options, or by turning atime off on the source mountpoint, but one can only guess.

Another approach is to keep a checksum table of all files (source and destination), perhaps in an sqlite3 database or a plain text file, and run your script against it so that it only copies the changes.
This will require a bit more effort, though, compared to a couple of rsync lines.

Hope that helps
Regards
Peasant.
# 3  
Old 02-28-2016
rsync is available from the skunkware section on the sco site.
# 4  
Old 02-28-2016
Also, the script posted in post #1 appears to use a single NAS backup volume (defined in /etc/fstab). IMO an approach like this should only be used if there are multiple backup NAS volumes, either taken from the host itself or from the NAS volume.

Otherwise this is not a safe way of doing it, since it first wipes the old backup before making the new one; if anything goes wrong during that backup, you end up with nothing.
# 5  
Old 02-28-2016
I did not find rsync on the Skunkware website for OSR5, but I did download, compile, and install version 3.1.2 from the rsync website. I am going to need to learn how to use it.

I also noticed that my NAS device offers a service called rsync.

Yes, I need a better backup scheme, one where I am not overwriting the previous day's backup. I was trying to get something quick and dirty done; now I need to work on something more secure and beneficial to me.

Thanks for all the suggestions and comments.
# 6  
Old 02-28-2016
Just a minor observation. This:
Quote:
Originally Posted by trolley
Code:
# ********************************************************************
# *
# * Script Name : sysbck
# *
# * Description : Performs system backup to a remote NFS share
# *
# *********************************************************************
# *   Copyright 2015-20xx by Trolley Computers
# *********************************************************************
#!/bin/sh

won't work as intended, because "#! /some/shell" is only recognized on the very first line of a script, nowhere else; placed further down it is just a comment, and the script runs under whatever shell invoked it.
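In other words, the header block would need to be reordered so the interpreter line comes first, e.g.:

```shell
#!/bin/sh
# *********************************************************************
# * Script Name : sysbck
# * Description : Performs system backup to a remote NFS share
# *********************************************************************
```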

I hope this helps.

bakunin
# 7  
Old 02-29-2016
I started playing around with rsync and it shows promise. It is much faster than the way I am doing it now. I will post the final shell script when done.