Rsync size discrepancies


 
# 1  
Old 02-19-2013
Rsync size discrepancies

I'm using rsync to transfer data from one system (nfs01) to another (nfs02), but I'm seeing 28 GB more data on the target than on the source. The source and target filesystems are both 138 GB: the source shows 100 GB used, and after running rsync the target shows 128 GB used. Shouldn't they be the same? As you can see below, three subfolders on the target system are each about 10 GB larger than on the source. I'm clueless; any help would be appreciated.

I'm running this rsync command on the target system (nfs02):
Code:
rsync -avr --delete root@nfs01r5v.lamar.edu:/bannertreedev/ /bannertreedev/

The source:
Quote:

[root@nfs01r5v ~]# df -h
Filesystem Size Used Avail Use% Mounted on
242G 14G 216G 7% /lu99
/dev/mapper/vg--bantreeDEV-lv--bantreeDEV
138G 100G 32G 77% /bannertreedev
/dev/mapper/vg--bantreePRD-lv--bantreePRD
138G 22G 109G 17% /bannertreeprd

[root@nfs01r5v ~]# du -sch /bannertreedev/*
269M /bannertreedev/bannerOH
1.3G /bannertreedev/bookshelf
36G /bannertreedev/BT
32K /bannertreedev/config
28G /bannertreedev/LI
16K /bannertreedev/lost+found
36G /bannertreedev/RG
The target:
Quote:
[root@nfs02r5v ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup_BannerTreeDev-LogVol_BannerTreeDev
138G 128G 3.5G 98% /bannertreedev

[root@nfs02r5v ~]# du -sch /bannertreedev/*
269M /bannertreedev/bannerOH
1.3G /bannertreedev/bookshelf
45G /bannertreedev/BT
32K /bannertreedev/config
38G /bannertreedev/LI
8.0K /bannertreedev/lost+found
45G /bannertreedev/RG
128G total

# 2  
Old 02-19-2013
du does not measure the logical length of files; it measures the disk space actually allocated to them, at the block level.

Perhaps some of your files are sparse files -- files with a big hole of empty space in the middle. Many UNIX filesystems support these. You can make one by opening a file, seeking well past the end, then writing; the operating system just marks the skipped space as a hole instead of actually storing all the zeros. du's results are skewed for such a file, since it measures what's stored on disk, not the length of the file itself.

It's difficult to copy sparse files with the holes intact. The holes usually get written out as real blocks of binary zeros instead of being preserved as holes, which makes the copies occupy more disk space than the originals when copied by most traditional means.
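
To see the effect, you can create a sparse file by hand and compare what ls and du report for it. A minimal sketch (the file name here is made up for illustration, and --apparent-size assumes GNU coreutils du):
Code:
# create a file whose length is 1 GB but which occupies (almost) no disk blocks
dd if=/dev/zero of=/tmp/sparse_demo bs=1 count=0 seek=1G

# ls reports the file length: 1.0G
ls -lh /tmp/sparse_demo

# du reports allocated blocks, so it shows (near) zero
du -h /tmp/sparse_demo

# GNU du can report the length instead of the allocation
du -h --apparent-size /tmp/sparse_demo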
# 3  
Old 02-19-2013
I see. Well perhaps rsync isn't the tool I need. Maybe I should look at other methods to accomplish my goal. This is what I'm trying to do:

I have a production NFS server (nfs01) which shares one particular filesystem with our development environment. We have a requirement to segregate our prod and dev environments, so I've stood up a second NFS server just for the dev environment (nfs02). Now I need to migrate the entire /bannertreedev on nfs01 to /bannertreedev on nfs02. As a note, nfs01:/bannertreedev is a logical volume, as is the one on nfs02.

As you can see, I figured I could just create the filesystem on nfs02 and rsync it over. I was surprised by the size difference. I'm reasonably sure all my data transferred, but I had not planned on such a difference; I may need to increase the size of the LV on nfs02.
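
One way to sanity-check that the contents actually match despite the on-disk difference is to compare apparent sizes (file lengths rather than allocated blocks) on both hosts. A sketch, assuming GNU du's --apparent-size option is available on RHEL 5:
Code:
# run on both nfs01 and nfs02; the totals should agree if everything transferred intact
du -sch --apparent-size /bannertreedev/*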

Perhaps someone could suggest an alternate method? I'm also trying to prevent any downtime to nfs01.
# 4  
Old 02-19-2013
Well, what's your system?
# 5  
Old 02-19-2013
Sorry about that. Both systems are VMs running RHEL 5. We simply served up an entire disk for the filesystem and created the logical volumes using the entire disk.

I considered using "dd" but was having trouble figuring that out.
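
For a block-level copy of the logical volume, one option is to pipe dd through ssh. This is only a sketch: the device paths and host names are taken from the outputs above (the target name may need its full domain), the target LV gets overwritten and must be at least as large as the source, and both filesystems should be unmounted (or the source mounted read-only) during the copy, so it does mean downtime on nfs01:
Code:
# run on nfs01 with /bannertreedev unmounted on both sides; copies the raw filesystem block for block
dd if=/dev/mapper/vg--bantreeDEV-lv--bantreeDEV bs=1M | \
    ssh root@nfs02r5v 'dd of=/dev/mapper/VolGroup_BannerTreeDev-LogVol_BannerTreeDev bs=1M'
A raw copy like this keeps sparse files sparse, since the holes live in the filesystem metadata, but it also pushes every block of the 138 GB device across the network, used or not.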
# 6  
Old 02-19-2013
Most file-level copy tools will have the same difficulty, especially remote ones. Your disk images must be sparse files, and the holes get filled in during the copy.
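
If sparse files are the culprit, rsync's -S / --sparse option may help: it tries to recreate long runs of zeros as holes on the destination instead of writing them out. A sketch based on the original command (-a already implies -r); note that --sparse only affects files rsync actually rewrites, so files already copied in full would need to be re-transferred, e.g. after deleting them on the target:
Code:
rsync -avS --delete root@nfs01r5v.lamar.edu:/bannertreedev/ /bannertreedev/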
 