Let's low-tech this for a minute and test it by doing the steps manually. I am *assuming* the dump file name is the same every day, or follows a simple enough naming convention that you can script renaming the file.
1) You have already rsync'd, so there's a folder named 04-05-08 on the backup server. It's now time to get the 05-05-08 version, and (magically) you know the DB server is done creating today's dumpfile....
2) On the backup server, cp -rp ./04-05-08 ./05-05-08
-- you now have a new folder to rsync into, and it holds yesterday's version of the dump file, with yesterday's name, and the timestamps and permissions on the files are preserved.
3) If necessary, rename the ./05-05-08/dumpfile to match today's dump name.
4) Start your rsync between the 05-05-08 folders.
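The steps above can be sketched as a script. The backup root, the dump naming pattern (dump-DD-MM-YY.sql), and the server name "dbserver" are all assumptions here, so adjust them for your setup:

```shell
#!/bin/sh
# Sketch of the manual steps above. Paths and names are assumptions.
YESTERDAY="04-05-08"
TODAY="05-05-08"
BACKUP_ROOT="/tmp/backup-demo"   # stand-in for your real backup root

# Demo setup only: fake yesterday's already-rsync'd folder so the
# script below has something to copy from.
rm -rf "$BACKUP_ROOT"
mkdir -p "$BACKUP_ROOT/$YESTERDAY"
echo "yesterday's data" > "$BACKUP_ROOT/$YESTERDAY/dump-$YESTERDAY.sql"

cd "$BACKUP_ROOT" || exit 1

# Step 2: seed today's folder with yesterday's copy, preserving
# timestamps and permissions (-p) so rsync can diff against it.
cp -rp "./$YESTERDAY" "./$TODAY"

# Step 3: rename the dump to today's expected name, if the name
# changes daily.
mv "./$TODAY/dump-$YESTERDAY.sql" "./$TODAY/dump-$TODAY.sql"

# Step 4: rsync from the DB server into today's folder. Because the
# file already exists, only the differences are transferred, e.g.:
#   rsync -av dbserver:/var/backups/pgsql/ "./$TODAY/"
echo "ready to rsync into: $BACKUP_ROOT/$TODAY"
```

The real rsync call is left as a comment since the host name is hypothetical; the cp/mv dance is the part worth testing manually first.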
To quote the rsync man page:
"If any of the files already exist on the remote system then the rsync remote-update protocol is used to update the file by sending only the differences." Your file already exists, so rsync will run an update instead of a full copy.
With any luck, the compare will take less time than an actual copy normally does. No matter what, 800MB is a lot to move.
Important Question: Does your pgsql dump actually need to create a NEW 800MB file every day on the DB server, or can it just update the existing dumpfile?
If you can update the same "dump" instance, then I'd just start an rsync of the master dumpfile, and on the backup server schedule a cp of the currently rsync'd instance over to the daily folder. Schedule the rsync for every 3 hours, and you have a decently current backup, plus yesterday's copy on the shelf.
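That schedule might look something like this in the backup server's crontab. The host name "dbserver", the paths, and the times are all assumptions; note that % has to be escaped as \% inside a crontab entry:

```shell
# Backup server crontab sketch -- host and paths are assumptions.
# Every 3 hours: refresh the rolling copy of the master dumpfile;
# since the file already exists, only the deltas are sent.
0 */3 * * * rsync -a dbserver:/var/backups/pgsql/dumpfile /backups/current/dumpfile

# Once a day: shelve the current copy into a dated folder
# (\% because cron treats a bare % as a newline).
30 23 * * * mkdir -p /backups/$(date +\%d-\%m-\%y) && cp -p /backups/current/dumpfile /backups/$(date +\%d-\%m-\%y)/dumpfile
```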
Please remember: if you are out of disk space, or your dumpfile failed to be created, it's really hard to finish a backup, so script with that in mind, or confirm something is monitoring your disk space....
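A minimal sketch of "script with that in mind": a pre-flight check that refuses to start the backup if the dump is missing/empty or the backup filesystem is nearly full. The 90% default threshold is an assumption, not a recommendation:

```shell
#!/bin/sh
# preflight DUMPFILE FILESYSTEM [MAX_USE_PERCENT]
# Returns 0 if the dump exists and is non-empty, and the filesystem
# holding the backups is below the usage threshold.
preflight() {
    dump="$1"; fs="$2"; max_use="${3:-90}"   # 90% is an assumed default
    if [ ! -s "$dump" ]; then
        echo "ERROR: dumpfile missing or empty: $dump" >&2
        return 1
    fi
    # df -P gives portable one-line-per-fs output; field 5 is "Use%".
    used=$(df -P "$fs" | awk 'NR==2 { sub("%","",$5); print $5 }')
    if [ "$used" -ge "$max_use" ]; then
        echo "ERROR: $fs is ${used}% full" >&2
        return 1
    fi
    return 0
}

# Demo: a missing dumpfile should fail the check before any rsync runs.
if preflight /nonexistent/dumpfile /tmp; then
    echo "unexpected: check passed"
else
    echo "preflight correctly refused to run"
fi
```

Run it at the top of the backup script and bail out (loudly, to email or syslog) on failure, rather than discovering a truncated dump weeks later.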
A backup is a terrible thing to lose.