What I would say is that the 80% rule still holds true: I have experienced several problems with zpools over this threshold. Although the symptom mostly shows up as high CPU utilisation, it is still a definite problem.
A possible resolution would be to limit how much memory ZFS can claim, for example by capping the ARC (see the sketch below), especially if the system is lightly used.
We are currently running version 29 and the patch level is;
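For reference, on Solaris the usual knob for this is the zfs_arc_max tunable in /etc/system; the 4 GB value below is purely an illustrative assumption, and a reboot is needed for it to take effect:

# /etc/system entry capping the ZFS ARC at 4 GB (example value only)
set zfs:zfs_arc_max = 0x100000000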
Hi all
I plan to install Solaris 10U6 on some SPARC servers using ZFS as the root pool, and I would like to keep the current setup done under VxVM:
- 2 internal disks: c0t0d0 and c0t1d0
- bootable root-volume (mirrored, both disks)
- 1 non-mirrored swap slice
- 1 non-mirrored slice for Live... (1 Reply)
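A minimal sketch of the equivalent ZFS layout, assuming slice 0 of both internal disks is given to the root pool; note that with a ZFS root, swap is normally a zvol inside the pool rather than a dedicated slice:

# mirrored root pool across both internal disks (slice names assumed)
zpool create rpool mirror c0t0d0s0 c0t1d0s0
# swap as a zvol in the pool (4g is an example size)
zfs create -V 4g rpool/swap
swap -a /dev/zvol/dsk/rpool/swap

In practice the Solaris 10U6 installer creates rpool itself when you pick a ZFS root; the commands above just show the shape of the result.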
I created a pool the other day. I created a 10 GB file just for a test, then deleted it.
I proceeded to create a few file systems. But for some reason the pool shows 10% full, while the file systems are both at 1%? Both file systems share the same pool.
When I ls -al the pool I just... (6 Replies)
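A quick way to see where the space sits is to compare the pool-level and dataset-level views; freed blocks are released asynchronously and snapshots hold deleted data, either of which can explain the gap. Assuming the pool is called tank:

# pool-wide allocation as zpool sees it
zpool list tank
# per-dataset usage, including space pinned by snapshots
zfs list -r -o name,used,usedbysnapshots,avail tank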
Hi guys,
We had created a pool as follows:
zpool create filing_pool raidz c1t2d0 c1t3d0 ........
Due to some requirement, we need to destroy the pool and re-create another one. We now wish to know which disks have been included in filing_pool; how do we list the disks used to create... (2 Replies)
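For anyone landing here: zpool status prints every device in the pool's config section, so the membership can be read straight off it:

# list the vdevs (disks) that make up the pool
zpool status filing_pool
# zpool iostat -v shows the same layout with per-device statistics
zpool iostat -v filing_pool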
I need to migrate an existing raidz pool to a new raidz pool with larger disks, and the mount points and attributes need to migrate as well. What is the best procedure to accomplish this? The current pool is 6x 36 GB disks (202 GB capacity) and I am migrating to 5x 72 GB disks (340 GB capacity). (2 Replies)
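A common approach is a recursive replicated send, which carries datasets, snapshots, and properties (mount points included) in one stream. A hedged sketch, with oldpool/newpool and the snapshot name as placeholders:

# snapshot everything, then replicate with properties preserved (-R)
zfs snapshot -r oldpool@migrate
# -u keeps the received filesystems unmounted to avoid mountpoint clashes
zfs send -R oldpool@migrate | zfs receive -Fdu newpool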
Other than export/import, is there a cleaner way to rename a pool without unmounting the FS?
Something like, say "zpool rename a b"?
Thanks. (2 Replies)
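There is no zpool rename subcommand; export/import is the supported route, and the import step takes the new name directly. A minimal sketch, assuming a pool currently named a:

# export does unmount the datasets, then import under the new name
zpool export a
zpool import a b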
I have a branded zone txdjintra that utilizes a pool named Pool_djintra that is no longer required. There is a 150 GB LUN assigned to the pool that I need to reassign to another branded zone, txpsrsrv07, with a pool named Pool_txpsrsrv07 on the same Sun Blade. What is the process to do this?
... (0 Replies)
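Assuming nothing on Pool_djintra needs to be preserved, the rough sequence is to destroy the old pool to free the LUN and then grow the other pool with it; the device name below is a placeholder for the 150 GB LUN:

# destroy the unneeded pool (this discards its data) to free the LUN
zpool destroy Pool_djintra
# add the freed device to the other zone's pool (c9t5d0 is a placeholder)
zpool add Pool_txpsrsrv07 c9t5d0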
Hi Guys,
I have a single ZFS pool with 2 disks which is mirrored. If I create a new BE with lucreate, should I specify on which disk the new BE should be created? (7 Replies)
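With a ZFS root, Live Upgrade works at the pool level rather than the disk level: the new BE is created as a clone inside the pool (and is therefore mirrored along with it), so at most you name the pool with -p. A sketch with assumed names:

# create a new boot environment inside the existing mirrored root pool
lucreate -n newBE -p rpool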
I accidentally added a disk to a different zpool instead of the pool where I wanted it.
root@prtdrd21:/# zpool status cvfdb2_app_pool
  pool: cvfdb2_app_pool
 state: ONLINE
  scan: none requested
config:

        NAME              STATE   READ WRITE CKSUM
        cvfdb2_app_pool   ONLINE     0     0     0... (1 Reply)
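What can be done depends on how the disk went in. If it was attached as a mirror of an existing device it can simply be detached; if it was added as a new top-level vdev, ZFS of this vintage cannot remove it, and the pool has to be backed up, destroyed, and recreated. The device name below is a placeholder:

# only works if the disk is a mirror member, not a top-level vdev
zpool detach cvfdb2_app_pool c1t9d0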
I have a newly created zpool, and I have set compression on for the whole pool:
# zfs set compression=on newPool
Now I have zfs send | zfs receive'd a lot of snapshots to my newPool, but the compression is gone. I was hoping that I would be able to send snapshots to the new pool (which is... (0 Replies)
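Compression is applied as blocks are written on the receiving side, so what matters is the receiving dataset's property at receive time, and a plain zfs send does not carry properties in the stream. Two hedged first steps, with dataset names assumed:

# check each dataset's compression value and where it comes from
zfs get -r -o name,value,source compression newPool
# resend with properties included in the stream (-p)
zfs send -p oldPool/data@snap | zfs receive newPool/data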
LEARN ABOUT REDHAT
rquotad
RQUOTAD(8)                   System Manager's Manual                  RQUOTAD(8)

NAME
       rquotad, rpc.rquotad - remote quota server
SYNOPSIS
       rpc.rquotad
DESCRIPTION
       rquotad is an rpc(3N) server which returns quotas for a user of a local
       filesystem that is mounted by a remote machine over NFS. It also allows
       setting of quotas on NFS-mounted filesystems. The results are used by
       quota(1) to display user quotas for remote filesystems and by edquota(8)
       to set quotas on remote filesystems. The rquotad daemon is normally
       started at boot time from the system startup scripts.
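For illustration, the client side needs no special steps: the standard quota tools contact the server's rquotad for NFS mounts. The username below is a placeholder, and setting quotas remotely works only if the server permits it (e.g. rpc.rquotad started with -S):

# on an NFS client, quota(1) queries the server's rquotad
quota -u alice
# edquota(8) can set remote quotas when the server allows it
edquota -u alice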
FILES
       aquota.user or aquota.group
              quota file at the filesystem root (version 2 quota, non-XFS filesystems)
       quota.user or quota.group
              quota file at the filesystem root (version 1 quota, non-XFS filesystems)
       /etc/mtab
              default filesystems
SEE ALSO
       quota(1), rpc(3N), nfs(5), services(5), inetd(8)

                                                                      RQUOTAD(8)