The delta is not written. The data already exists in the original filesystem.
For instance, say you have 4 files of 20 GB each on a ZFS filesystem inside a 100 GB zpool.
The zpool's current space utilization is then 80%.
For the sake of argument, we have only one filesystem in that zpool.
A snapshot has been taken of that ZFS filesystem.
You delete 1 of the 4 files (20 GB).
The zpool will stay at 80%: since the snapshot still references the deleted blocks, the data is not actually freed from the zpool.
You issue zfs destroy on the snapshot. Only that operation actually frees the data in the zpool.
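The scenario above can be sketched with a small file-backed test pool (pool name `tank` and the backing-file path are hypothetical; this needs root and a system with ZFS, so treat it as an illustrative transcript, not a drop-in script):

```shell
# Create a small file-backed test pool (hypothetical path; needs root)
mkfile 100m /var/tmp/zpool.img
zpool create tank /var/tmp/zpool.img

# Write a file, snapshot the dataset, then delete the file
dd if=/dev/urandom of=/tank/file1 bs=1024k count=20
zfs snapshot tank@before-delete
rm /tank/file1

# Pool usage barely drops: the snapshot still references the blocks
zfs list -t all -o name,used,refer

# Destroying the snapshot is what actually frees the space
zfs destroy tank@before-delete
zfs list -o name,used,refer
```

The second `zfs list` should show the dataset's USED shrinking only after the `zfs destroy`.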
This is how I understand it, feel free to correct me.
As for the ARC:
The problem is that it doesn't play well with programs that request very large chunks of memory (such as an Oracle database): the program requests memory, and if the ARC doesn't release it within a certain time, the system starts swapping.
This is why I generally avoid ZFS filesystems for Oracle databases and use ASM with a capped ZFS ARC.
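On Solaris, capping the ARC is done in `/etc/system` (the 4 GB value below is an example, not a recommendation; size it to leave room for the SGA):

```
* /etc/system fragment: cap the ZFS ARC at 4 GB (example value)
* Takes effect after a reboot.
set zfs:zfs_arc_max = 4294967296
```

On Linux with ZoL, the equivalent is the `zfs_arc_max` module parameter (e.g. `options zfs zfs_arc_max=4294967296` in `/etc/modprobe.d/zfs.conf`).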
Take a look at the documentation regarding ZFS and databases. It requires a lot of love and attention.
I'd rather give that love to something else and run ASM.
Chip in a good SSD or a local flash-cache card as a cache device for Oracle, pin a couple of monster indexes in it, and go get some beer.
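Adding such a device to a pool as an L2ARC cache is a one-liner (pool and device names are hypothetical; needs root). The index "pinning" itself is done on the Oracle side, e.g. via the KEEP buffer pool, not by ZFS:

```shell
# Add an SSD as an L2ARC cache device to the pool
# (pool name "tank" and device "c1t5d0" are placeholders)
zpool add tank cache c1t5d0

# Verify the cache vdev shows up
zpool status tank
```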
On the other hand, on a several-TB Solaris Cluster serving NFS from ZFS, I haven't touched that tunable.
The machines work fine with 95% of memory consumed, mostly by the filesystems using it as ARC (which is desired there).