GTAR - new ways for faster backup - help required
# 1  
Old 05-21-2014
GTAR - new ways for faster backup - help required

We take a backup of our application data (COBOL file system, AIX/UNIX) before and after the EOD job runs. The data size is approximately 260 GB in the biggest branch. To reduce the backup time, five parallel executions are scheduled through Control-M, which back the files up into five different *.gz archives. The job takes approximately 90 minutes to complete. The backup is done locally and later transferred to a different location for retention.

Issue:
Each execution takes approximately 10% CPU, which puts a heavy load on the server. This is a problem because the same server hosts multiple branches.


Code:
 
gtar -cvzf ${DBTBKUP}/${DATADIR}/${DATADIR}${BKUPSEQ}.gz ${FILESET}*

${DBTBKUP}/${DATADIR}/${DATADIR}${BKUPSEQ}.gz - the backup archive
${FILESET}* - the fileset being backed up
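
Roughly, the five parallel runs amount to something like the sketch below. Control-M actually submits five separate jobs, each with its own ${FILESET} and ${BKUPSEQ}; the loop and the FS1..FS5 prefixes here are only hypothetical stand-ins for that:

Code:
 
# sketch only: stand-in for the five Control-M jobs
BKUPSEQ=1
for FILESET in FS1 FS2 FS3 FS4 FS5      # hypothetical fileset prefixes
do
    gtar -cvzf ${DBTBKUP}/${DATADIR}/${DATADIR}${BKUPSEQ}.gz ${FILESET}* &
    BKUPSEQ=$((BKUPSEQ + 1))
done
wait                                    # all five gtar/gzip pipelines run at once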

Is there any way we can improve the backup further? Are there any new features in GTAR to speed up the backup, or any newer backup command that could replace GTAR? Any suggestions would be really appreciated.
# 2  
Old 05-21-2014
If computers had a "go faster" setting, everyone would already be using it.

Are your disks organized in such a way that splitting the job in 5 makes it better, or worse?
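
One rough way to check on AIX is to watch the disks while the five jobs run; if they all hammer the same physical volume, the split may be hurting more than helping. A sketch (interval and count are arbitrary):

Code:
 
# run this while the backups are active; a %tm_act column near 100
# on a single hdisk suggests the five jobs are fighting over one spindle
iostat -d 5 12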
# 3  
Old 05-22-2014
Thanks for your response.

Yes, the splitting helps complete the job in less than 90 minutes (otherwise it would take 5+ hours). But each job run consumes about 10% CPU, so roughly 50% overall for the five runs, and that puts a lot of load on the server since it is shared by multiple branches. My question: is there any change we can make to the GTAR command I provided earlier for a faster backup? Is there any faster alternative to GTAR? Any other suggestions, please?
# 4  
Old 05-22-2014
GTAR - blocking factor

Moderator's Comments: Threads merged. Please keep this to one thread.


Blocking factor - I used the blocking-factor option in GTAR, thinking it would give better performance, while compressing and backing up a 5 GB file on a local disk. I didn't find any difference between the executions below in terms of end result (backup completion time or archive size):

Code:
 
gtar -cvzb 2 -f ABC3.gz BIGFILE
gtar -cvzb 1024 -f ABC3.gz BIGFILE
gtar -cvzf ABC3.gz BIGFILE

Any suggestions, please?

# 5  
Old 05-22-2014
My suggestion would be to spell it 'please', not 'pls'.

File I/O to a disk doesn't really care about block size as much as raw tape I/O would -- especially since compressing with -z is going to mess up all your block sizes anyway. 1024 bytes in, ???? bytes out... You could try --block-compress to force it to write to the disk in fixed-size blocks instead of arbitrary ones.

Also try bigger block sizes -- just doubling it isn't going to make much difference. Maybe 4096 or 8192, conveniently the same size as CPU memory pages.
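
For example, with the same test file as above (a sketch; note that -b counts 512-byte records, so these are 2 MB and 4 MB blocks):

Code:
 
gtar -cvzb 4096 -f ABC3.gz BIGFILE    # 4096 x 512-byte records = 2 MB blocks
gtar -cvzb 8192 -f ABC3.gz BIGFILE    # 8192 x 512-byte records = 4 MB blocks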

It might help a little, but the difference is unlikely to be that dramatic... Either your disk or the compression is liable to be what's slowing it down -- more likely the compression, if running several jobs in parallel makes things faster. Try writing to a different partition than the one the source files live on. And try a more CPU-efficient compressor, like lzop; see the sketch below.
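
A sketch of both suggestions (this assumes lzop is installed; it is not part of base AIX). The first two commands give a rough idea of how much of the runtime is gzip rather than disk; the last two show the lzop pipeline and how to restore from it:

Code:
 
time gtar -cf  - BIGFILE > /dev/null         # raw archiving speed (mostly disk)
time gtar -czf - BIGFILE > /dev/null         # archiving plus gzip (mostly CPU)

gtar -cf - BIGFILE | lzop > BIGFILE.tar.lzop # compress with lzop instead of gzip
lzop -dc BIGFILE.tar.lzop | gtar -xvf -      # restore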

# 6  
Old 05-22-2014
How much does 10% mean on your machine? Is that one full CPU on a ten-CPU machine for example?

You could try using a lower-load compressor. lzop will end up slightly bigger than gzip, but its performance is much better CPU-wise.

Code:
 
tar -cf - path/to/files | lzop > file.tar.lzop
# 7  
Old 05-22-2014
I suppose the real problem is that gzip is a single-threaded application (and probably has to be). So each gzip process has a natural maximum speed: how fast one CPU can work a single thread. The more gzip processes you can distribute the backup across, the faster it will be done, but the more CPU resources will be used during that time.
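
One way to cut the per-process CPU cost without changing tools would be to lower gzip's compression level - a sketch, reusing the variables from the original command (trades a somewhat larger archive for less CPU time):

Code:
 
# gzip -1 is the fastest level (-9 the smallest/slowest; the default is -6)
gtar -cf - ${FILESET}* | gzip -1 > ${DBTBKUP}/${DATADIR}/${DATADIR}${BKUPSEQ}.gz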

You could also try moving often-accessed data (like the work directory of the tar/gzip processes) to an SSD. This might speed things up.

I hope this helps.

bakunin