We are taking a backup of our application data (COBOL file system, AIX/UNIX) before and after the EOD job runs. The data size is approximately 260 GB in the biggest branch. To reduce the backup time, 5 parallel executions are scheduled through Control-M, which back up the files into 5 different *.gz archives. The job takes approximately 90 minutes to complete. The backup is done locally and later copied to a different location for retention.
Issue:
Each execution takes approximately 10% CPU, which puts a heavy load on the server. This is causing issues because the same server hosts multiple branches.
Is there any way we can improve the backup further? Are there any new features in GTAR to speed up the backup, or any newer backup command that could replace GTAR? Any suggestion would be really appreciated.
Yes, the splitting is helping to complete the job execution in less than 90 minutes (otherwise it would take 5+ hours). Each job run consumes 10% CPU, so 50% CPU overall for 5 runs, which puts a lot of load on the server since the same server is shared by multiple branches. My question: is there any change we can make to the GTAR command I provided earlier for a faster backup? Is there any faster alternative to GTAR? Any other suggestion pls?
Blocking factor - I used the blocking factor option in GTAR thinking it would give better performance, trying to zip and back up a 5 GB file on local disk. I didn't find any difference between the below executions in terms of end result (backup completion time or size).
Any suggestion pls?
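The actual commands were trimmed from the post above, so this is only a hypothetical reconstruction of that blocking-factor comparison, using a small stand-in file instead of the 5 GB one (all paths are illustrative):

```shell
# Create a small sample file to stand in for the 5 GB data file.
dd if=/dev/zero of=/tmp/sample.dat bs=1024 count=4096 2>/dev/null

# -b is the blocking factor: records of b x 512 bytes.
tar -czf /tmp/backup_b20.tar.gz  -b 20  -C /tmp sample.dat   # default: 10 KiB records
tar -czf /tmp/backup_b128.tar.gz -b 128 -C /tmp sample.dat   # 64 KiB records

# Writing to a regular file on disk, both archives come out essentially
# the same size, which matches the "no difference" result described above.
ls -l /tmp/backup_b20.tar.gz /tmp/backup_b128.tar.gz
```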
Last edited by Corona688; 05-22-2014 at 12:21 PM..
My suggestion would be to spell it 'please', not 'pls'.
File I/O to a disk doesn't really care about block size as much as raw tape I/O would -- especially since compressing with -z is going to mess up all your block sizes anyway. 1024 bytes in, ???? bytes out... You could try --block-compress to force it to write to the disk in fixed-size blocks instead of arbitrary ones.
Also try bigger block sizes -- just doubling it isn't going to make much difference. Maybe 4096 or 8192, conveniently the same size as CPU memory pages.
It might help a little, but the difference is unlikely to be that dramatic... Either your disk or the compression is liable to be what's slowing it down -- more likely the compression, if running several in parallel makes it faster. Try writing to a different partition than the source of the files. Try using a more CPU-efficient compressor, like lzop, i.e. tar -cf - bigfile | lzop > file.tar.lzop
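A concrete version of that lzop pipeline, with gzip -1 (gzip's fastest level) shown as a fallback since lzop is not installed everywhere; file and directory names here are made up for illustration:

```shell
# Stream the archive through a lighter compressor instead of tar's built-in -z.
# lzop trades some compression ratio for much lower CPU cost per byte.
mkdir -p /tmp/srcdata && printf 'EOD records\n' > /tmp/srcdata/bigfile

if command -v lzop >/dev/null 2>&1; then
    tar -cf - -C /tmp srcdata | lzop > /tmp/backup.tar.lzop
else
    # Fallback where lzop is unavailable: the fastest gzip level.
    tar -cf - -C /tmp srcdata | gzip -1 > /tmp/backup.tar.gz
fi
```

The pipe also decouples archiving from compression, so tar can keep reading while the compressor works.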
I suppose the real problem is that "gzip" is a single-threaded application (and probably has to be). So each "gzip" process has a natural maximum speed: how fast one CPU core can work a single thread. The more "gzip" processes you distribute the backup across, the faster it will finish, but the more CPU resources will be used during that time.
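One way around the single-thread limit is pigz, a drop-in parallel gzip (my suggestion, not something mentioned in the thread); note it shortens wall-clock time but does not reduce total CPU used, which was the original poster's constraint. A sketch with illustrative paths, falling back to plain gzip where pigz is not installed:

```shell
# pigz compresses independent chunks on multiple cores and still emits a
# standard .gz stream, so one backup job can use several CPUs at once.
mkdir -p /tmp/branchdata && printf 'eod data\n' > /tmp/branchdata/eod.dat

if command -v pigz >/dev/null 2>&1; then
    tar -cf - -C /tmp branchdata | pigz -p 4 > /tmp/branch.tar.gz   # -p: worker threads
else
    tar -cf - -C /tmp branchdata | gzip > /tmp/branch.tar.gz        # fallback: plain gzip
fi

# Either way the result is a normal gzip stream.
gzip -t /tmp/branch.tar.gz
```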
You could try moving often-accessed data (like the working directory of the tar/gzip processes) to an SSD. This might speed things up.