Full Discussion: ZFS - overfilled pool
Operating Systems > Solaris, Post 302626953 by RychnD on Friday, 20 April 2012, 03:31 AM
Hi,

Thank you both for your replies!

Here are the outputs of the commands when run on my machine:

Code:
uname -a

SunOS nas1 5.11 snv_151a i86pc i386 i86pc Solaris

Code:
cat /etc/release

Oracle Solaris 11 Express snv_151a X86
Copyright (c) 2010, Oracle and/or its affiliates.  All rights reserved.
Assembled 04 November 2010

Code:
zpool upgrade -v

This system is currently running ZFS pool version 31.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements
 29  RAID-Z/mirror hybrid allocator
 30  Encryption
 31  Improved 'zfs list' performance
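
If more detail is needed, I can also post the pool's current version and space usage. A minimal sketch of the commands I would use ("tank" is a placeholder for my actual pool name):

Code:
zpool get version tank      # version the pool is actually at (vs. the supported versions listed above)
zpool list tank             # SIZE / ALLOC / FREE / CAP for the whole pool
zfs list -o space -r tank   # per-dataset space accounting, including snapshot usage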

Thank you for helping me,
Dusan
 

ZFS-FUSE(8)

NAME
       zfs-fuse - ZFS filesystem daemon

SYNOPSIS
       zfs-fuse [--pidfile filename] [--no-daemon] [--no-kstat-mount] [--disable-block-cache] [--disable-page-cache]
                [--fuse-attr-timeout SECONDS] [--fuse-entry-timeout SECONDS] [--log-uberblocks] [--max-arc-size MB]
                [--fuse-mount-options OPT,OPT,OPT...] [--min-uberblock-txg MIN] [--stack-size=size] [--enable-xattr]
                [--help]

DESCRIPTION
       This manual page briefly documents the zfs-fuse command. zfs-fuse is a daemon which provides support for the
       ZFS filesystem via FUSE. Ordinarily this daemon will be invoked from system boot scripts.

OPTIONS
       This program follows the usual GNU command line syntax, with long options starting with two dashes (`--').
       A summary of options is included below. For a complete description, see the Info files.

       -h, --help
              Show summary of options.

       -p filename, --pidfile filename
              Write the daemon's PID to filename after daemonizing. Ignored if --no-daemon is passed. filename
              should be a fully-qualified path.

       -n, --no-daemon
              Stay in the foreground; don't daemonize.

       --no-kstat-mount
              Do not mount kstats in /zfs-kstat.

       --disable-block-cache
              Enable direct I/O for disk operations. Completely disables caching reads and writes in the kernel
              block cache. Also breaks mmap() in ZFS datasets.

       --disable-page-cache
              Disable the page cache for files residing within ZFS filesystems. Not recommended, as it slows down
              I/O operations considerably.

       -a SECONDS, --fuse-attr-timeout SECONDS
              Sets the timeout for caching FUSE attributes in the kernel. Defaults to 0.0. Higher values give a
              40% performance boost.

       -e SECONDS, --fuse-entry-timeout SECONDS
              Sets the timeout for caching FUSE entries in the kernel. Defaults to 0.0. Higher values give a 10000%
              performance boost but cause file permission checking security issues.

       --log-uberblocks
              Logs uberblocks of any mounted filesystem to syslog.

       -m MB, --max-arc-size MB
              Forces the maximum ARC size (in megabytes). Range: 16 to 16384.

       -o OPT,OPT,OPT..., --fuse-mount-options OPT,OPT,OPT...
              Sets FUSE mount options for all filesystems. Format: comma-separated string of characters.

       -u MIN, --min-uberblock-txg MIN
              Skips uberblocks with a TXG < MIN when mounting any filesystem.

       -v MB, --vdev-cache-size MB
              Adjust the size of the vdev cache. Default: 10.

       --zfs-prefetch-disable
              Disable the high-level prefetch cache in ZFS. This cache can use up to 150 MB of RAM, possibly more.

       --stack-size=size
              Limit the stack size of threads (in KB). Default: no limit (8 MB on Linux).

       -x, --enable-xattr
              Enable support for extended attributes. Not generally recommended, because it currently has a
              significant performance penalty for many small IOPS.
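
EXAMPLES
       For illustration only: a possible invocation combining several of the options described above. The values
       shown are arbitrary examples rather than recommendations, and noatime is merely one possible FUSE mount
       option:

              zfs-fuse --pidfile /var/run/zfs-fuse.pid \
                       --max-arc-size 512 \
                       --fuse-attr-timeout 1.0 \
                       --fuse-mount-options noatime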

REMARKS ON PRECEDENCE
       Note that parameters passed on the command line take precedence over those supplied through /etc/zfs/zfsrc.

BUGS/CAVEATS
       The path to the configuration file (/etc/zfs/zfsrc) cannot currently be configured. Most existing packages
       suggest that settings can be set at the top of their init script. These are frequently overridden by a
       (distribution-specific) /etc/default/zfs-fuse file, if it exists. Be sure to look in these places if you
       want your changes to the options to take effect. /etc/zfs/zfsrc is going to be the recommended approach in
       the future, so packagers, please refrain from passing command-line parameters within the init script
       (except for --pid-file).

SEE ALSO
       zfs(8), zpool(8), zdb(8), zstreamdump(8), /etc/zfs/zfsrc

AUTHOR
       This manual page was written by Bryan Donlan <bdonlan@gmail.com> for the Debian(TM) system (but may be used
       by others). Permission is granted to copy, distribute and/or modify this document under the terms of the GNU
       General Public License, Version 2 or any later version published by the Free Software Foundation, or the
       Common Development and Distribution License. Revised by Seth Heeren <zfs-fuse@sehe.nl>.

       On Debian systems, the complete text of the GNU General Public License can be found in
       /usr/share/common-licenses/GPL. The text of the Common Development and Distribution License may be found at
       /usr/share/doc/zfs-fuse/copyright.

COPYRIGHT
       Copyright (C) 2010 Bryan Donlan