Operating Systems > Solaris — ZFS - overfilled pool | Post 302626411 by RychnD, Thursday 19 April 2012, 09:37 AM
ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a RAID-Z2 pool over five HDDs and created a few ZFS filesystems on it.

Once I (unintentionally) managed to fill the pool completely with data and, to my surprise, the filesystems stopped working: I could not read or delete any data, and after I unmounted the pool I could not even mount it again.

I've heard that this is standard behavior for ZFS filesystems and that the correct way to avoid such problems in the future is not to use the full capacity of the pool.

Now I'm thinking about creating quotas on my filesystems (as described in an article titled "ZFS: Set or create a filesystem quota"), but I am wondering whether that is enough.

I have a tree hierarchy of filesystems on the pool, e.g. something like this (pool is the name of the zpool and also the name of the root filesystem on the pool):

/pool
/pool/svn
/pool/home
...
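
For concreteness, this is the kind of setup I have in mind (the `900G` figure is just a placeholder, not my actual capacity):

```shell
# Set a quota on the root filesystem; a quota limits the space consumed
# by that dataset and all of its descendants in aggregate.
zfs set quota=900G pool

# List the quota property across the whole tree to see what applies where:
zfs get -r quota pool
```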

Is it OK to just set a quota on "pool" (since a quota should apply to all sub-filesystems as well)? I mean, is this enough to prevent such an event from happening again? For instance, would it stop me from taking a new filesystem snapshot once the quota is reached?

How much space should I reserve, i.e. make unavailable? (I read somewhere that it is good practice to use only about 80% of the pool's capacity.)
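
To put numbers on that 80% rule, a quick back-of-the-envelope calculation (the pool size here is a made-up example):

```shell
# Hypothetical pool capacity in GiB
POOL_SIZE_GIB=1000

# Usable target at the often-quoted 80% mark
USABLE_GIB=$(( POOL_SIZE_GIB * 80 / 100 ))

echo "quota=${USABLE_GIB}G"   # prints "quota=800G"
```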

Finally, is there a better or more suitable solution to my original problem than setting quotas on the filesystems?
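
One alternative I've seen mentioned (I'm not sure it's better) is to hold back space with a reservation on an empty, unmounted filesystem, so the rest of the pool can never be filled to 100% in the first place. Dataset name and size below are just placeholders:

```shell
# Create an empty dataset whose only job is to hold back free space
zfs create pool/spare
zfs set mountpoint=none pool/spare

# Reserve some capacity; other datasets can no longer consume this space
zfs set reservation=100G pool/spare
```

If the pool ever does fill up, the reservation could be shrunk temporarily to free space for deletions.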

Thank you very much for your advice.
Dusan
 
