Full Discussion: ZFS Filesystem
Operating Systems / Solaris
Post by Peasant on Tuesday, 25 August 2015, 01:20 AM
The delta is not written. The data already exists in the original filesystem.

For instance, say you have four 20 GB files on a ZFS filesystem inside a 100 GB zpool, so the zpool's current space utilization is 80%.
For the sake of argument, that is the only filesystem in the zpool.

A snapshot is then taken of that ZFS filesystem, and you delete one of the four 20 GB files.

The zpool stays at 80%: the snapshot still references the deleted data, so nothing is actually freed from the zpool.

Only when you issue zfs destroy on the snapshot is that data actually freed from the zpool.
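
Roughly, the scenario plays out like this on the command line (pool and filesystem names are made up for illustration):

    # pool "tank" holds a single filesystem with the four 20 GB files
    zfs snapshot tank/data@before_cleanup
    rm /tank/data/file1
    zfs list -t all -o name,used,refer -r tank
    # the pool is still ~80% full: file1's blocks are held by the snapshot
    zfs destroy tank/data@before_cleanup
    # now the 20 GB is actually freed in the pool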

This is how I understand it, feel free to correct me :)

As for the ARC:

The problem is that the ARC doesn't play well with programs that grab very large chunks of memory (such as an Oracle database): the program requests the memory, and if the ARC doesn't release it quickly enough, the system starts swapping.

This is why I generally avoid ZFS filesystems for Oracle databases and use ASM with a capped ZFS ARC.
Take a look at the documentation regarding ZFS and databases; it requires a lot of love and attention.
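
As a minimal sketch, capping the ARC on Solaris is done in /etc/system and takes effect after a reboot (the 4 GB value below is only an example, size it for your own box):

    * cap the ZFS ARC at 4 GB (0x100000000 bytes) - example value only
    set zfs:zfs_arc_max = 0x100000000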

I'd rather give that love to something else and run ASM :)
Chip in a good SSD or a local flash-cache card as a CACHE device for Oracle, pin a couple of monster indexes in it, and go get some beer :)
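
Something along these lines, with hypothetical pool and device names (the SSD showing up as c1t5d0 here):

    # add the SSD as an L2ARC cache device for the Oracle data pool
    zpool add oradata cache c1t5d0
    zpool status oradata

Reads that fall out of the ARC are then served from the SSD instead of the spinning disks.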

On the other hand, on a multi-TB Solaris Cluster that serves NFS from ZFS, I haven't touched that tunable.
The machines work fine with 95% of memory consumed, mostly by the filesystems using it as ARC (which is desired).
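
If you want to see how big the ARC actually is on such a box, kstat shows it directly:

    # current ARC size, target and ceiling, in bytes
    kstat -p zfs:0:arcstats:size zfs:0:arcstats:c zfs:0:arcstats:c_max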
 

GPTZFSBOOT(8)						    BSD System Manager's Manual 					     GPTZFSBOOT(8)

NAME
     gptzfsboot -- GPT bootcode for ZFS on BIOS-based computers

DESCRIPTION
     gptzfsboot is used on BIOS-based computers to boot from a filesystem in
     a ZFS pool.  gptzfsboot is installed in a freebsd-boot partition of a
     GPT-partitioned disk with gpart(8).

IMPLEMENTATION NOTES
     The GPT standard allows a variable number of partitions, but gptzfsboot
     only boots from tables with 128 partitions or less.

BOOTING
     gptzfsboot tries to find all ZFS pools that are composed of BIOS-visible
     hard disks or partitions on them.  gptzfsboot looks for ZFS device
     labels on all visible disks and in discovered supported partitions for
     all supported partition scheme types.  The search starts with the disk
     from which gptzfsboot itself was loaded.  Other disks are probed in BIOS
     defined order.  After a disk is probed and gptzfsboot determines that
     the whole disk is not a ZFS pool member, the individual partitions are
     probed in their partition table order.  Currently GPT and MBR partition
     schemes are supported.  With the GPT scheme, only partitions of type
     freebsd-zfs are probed.

     The first pool seen during probing is used as a default boot pool.  The
     filesystem specified by the bootfs property of the pool is used as a
     default boot filesystem.  If the bootfs property is not set, then the
     root filesystem of the pool is used as the default.

     zfsloader(8) is loaded from the boot filesystem.  If /boot.config or
     /boot/config is present in the boot filesystem, boot options are read
     from it in the same way as boot(8).

     The ZFS GUIDs of the first successfully probed device and the first
     detected pool are made available to zfsloader(8) in the
     vfs.zfs.boot.primary_vdev and vfs.zfs.boot.primary_pool variables.

USAGE
     Normally gptzfsboot will boot in fully automatic mode.  However, like
     boot(8), it is possible to interrupt the automatic boot process and
     interact with gptzfsboot through a prompt.  gptzfsboot accepts all the
     options that boot(8) supports.

     The filesystem specification and the path to zfsloader(8) are different
     from boot(8).  The format is

           [zfs:pool/filesystem:][/path/to/loader]

     Both the filesystem and the path can be specified.  If only a path is
     specified, then the default filesystem is used.  If only a pool and
     filesystem are specified, then /boot/zfsloader is used as a path.

     Additionally, the status command can be used to query information about
     discovered pools.  The output format is similar to that of zpool status
     (see zpool(8)).

     The configured or automatically determined ZFS boot filesystem is
     stored in the zfsloader(8) loaddev variable, and also set as the
     initial value of the currdev variable.
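
     For instance, with a pool named tank whose boot filesystem is
     tank/ROOT/default (names chosen here purely for illustration), the
     loader could be selected explicitly at the prompt with:

           zfs:tank/ROOT/default:/boot/zfsloader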

FILES
     /boot/gptzfsboot    boot code binary
     /boot.config        parameters for the boot block (optional)
     /boot/config        alternative parameters for the boot block (optional)

EXAMPLES
     gptzfsboot is typically installed in combination with a ``protective
     MBR'' (see gpart(8)).  To install gptzfsboot on the ada0 drive:

           gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

     gptzfsboot can also be installed without the PMBR:

           gpart bootcode -p /boot/gptzfsboot -i 1 ada0

SEE ALSO
     boot.config(5), boot(8), gpart(8), loader(8), zfsloader(8), zpool(8)

HISTORY
     gptzfsboot appeared in FreeBSD 7.3.

AUTHORS
     This manual page was written by Andriy Gapon <avg@FreeBSD.org>.

BUGS
     gptzfsboot looks for ZFS meta-data only in MBR partitions (known on
     FreeBSD as slices).  It does not look into BSD disklabel(8) partitions
     that are traditionally called partitions.  If a disklabel partition
     happens to be placed so that ZFS meta-data can be found at the fixed
     offsets relative to a slice, then gptzfsboot will recognize the
     partition as a part of a ZFS pool, but this is not guaranteed to
     happen.

BSD                            September 15, 2014                        BSD