Backup of files using NFS a faster way
Posted by trolley in Operating Systems > SCO on 02-27-2016, 08:49 PM

Hi All!

I am trying to copy files from a SCO OpenServer 5.0.6 system to a NAS server using NFS. The backup runs from a cron job and takes about 1.5 hours, even though most of the data is static. I would like to find a faster way, so that I can also run it manually on short notice, for example when the servers need to be taken down because of thunderstorms. Below is a copy of the shell script.

Any help will be much appreciated.


Code:
#!/bin/sh
# *********************************************************************
# *
# * Script Name : sysbck
# *
# * Description : Performs system backup to a remote NFS share
# *
# *********************************************************************
# *   Copyright 2015-20xx by Trolley Computers
# *********************************************************************
# *********************************************************************
# *	V A R I A B L E S
# *********************************************************************

# *********************************************************************
# *	F U N C T I O N S
# *********************************************************************

# *********************************************************************
# *	E R R O R   H A N D L I N G
# *********************************************************************

# *********************************************************************
# *	M A I N   S C R I P T
# *********************************************************************

    echo ==============================
    date
    echo ==============================
#
# Mount Remote NFS Share
#
    echo ""
    echo "Mount Remote NFS Share"
    echo ""

    /etc/mount /mnt/pmsroot
    sleep 3
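# NOTE (assumption, not part of the original script): some NFS clients
# allow larger read/write buffers, which can speed up bulk copies. I have
# not verified the exact option syntax on OpenServer 5.0.6 - check
# mount(ADM) - but the idea would be something like:
#
#   /etc/mount -f NFS -o rsize=8192,wsize=8192 nas:/export/pmsroot /mnt/pmsroot
#
# where "nas:/export/pmsroot" stands in for the real NAS export.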

    echo ""
    echo ""
    df -v

#
# Remove Specific Directories
#
    echo ""
    echo "Remove Specific Directories"
    echo ""

    cd /mnt/pmsroot

    rm -fr /mnt/pmsroot/bin
    rm -fr /mnt/pmsroot/ecs
    rm -fr /mnt/pmsroot/angrist
    rm -fr /mnt/pmsroot/fercho
    rm -fr /mnt/pmsroot/huppert
    rm -fr /mnt/pmsroot/nena
    rm -fr /mnt/pmsroot/sc60
    rm -fr /mnt/pmsroot/trolley
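
# NOTE (added comment): deleting these trees and recopying all of /u2 in
# the next step turns every run into a full copy over NFS, even though
# most of the data is static; an incremental pass (see the sketch after
# the main script) could avoid most of that work.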

    echo ""
    lc
#
# Backup Files
#
    echo ""
    echo "Backup Files..."
    echo ""

    cp -Rp /u2/* /mnt/pmsroot/.

    echo ""
    lc
#
# Un-Mount Remote NFS Share
#
    cd
    echo ""
    echo "Un-Mount Remote NFS Share"
    echo ""

    /etc/umount /mnt/pmsroot

    echo ==============================
    date
    echo ==============================

# *********************************************************************
# *	E X I T   S C R I P T
# *********************************************************************
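
One idea I have been toying with, since most of the data is static, is to keep a marker file from the previous run and only pass files that changed since then across the NFS mount, instead of removing the directories and recopying everything. Below is a rough, untested sketch; the marker file path /usr/local/lib/sysbck.stamp is just a placeholder, and it does not handle files deleted from /u2 (the full remove-and-copy would still be needed occasionally for that).

Code:
#
# Incremental copy sketch - only files changed since the last run
#
    STAMP=/usr/local/lib/sysbck.stamp      # placeholder marker file

    if [ -f "$STAMP" ]
    then
        # copy only files modified since the previous backup, recreating
        # directories and keeping modification times (cpio pass-through)
        cd /u2
        find . -newer "$STAMP" -print | cpio -pdum /mnt/pmsroot
    else
        # first run: full copy, same as the current script
        cp -Rp /u2/* /mnt/pmsroot/.
    fi

    # record the time of this run for the next incremental pass
    touch "$STAMP"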

