ZFS pool question

# 1  
Old 11-03-2009
ZFS pool question

I created a pool the other day. I created a 10 GB file just as a test, then deleted it.

I proceeded to create a few file systems. But for some reason the pool shows 10% full, while the file systems are both at 1%. Both file systems share the same pool.

When I ls -al the pool I just see the file systems.

How can I remove the file I created the other day?

# 2  
Old 11-03-2009
What does this show:
zfs list -t all

# 3  
Old 11-04-2009
Originally Posted by bartus11
What does this show:
zfs list -t all

zfs list -t all
invalid type 'all'
        list [-rH] [-o property[,...]] [-t type[,...]] [-s property] ...
            [-S property] ... [filesystem|volume|snapshot] ...
The following properties are supported:

        PROPERTY         EDIT  INHERIT   VALUES
        available        NO       NO   <size>
        compressratio    NO       NO   <1.00x or higher if compressed>
        creation         NO       NO   <date>
        mounted          NO       NO   yes | no
        origin           NO       NO   <snapshot>
        referenced       NO       NO   <size>
        type             NO       NO   filesystem | volume | snapshot
        used             NO       NO   <size>
        aclinherit      YES      YES   discard | noallow | restricted | passthrough | passthrough-x
        aclmode         YES      YES   discard | groupmask | passthrough
        atime           YES      YES   on | off
        canmount        YES       NO   on | off | noauto
        casesensitivity NO      YES   sensitive | insensitive | mixed
        checksum        YES      YES   on | off | fletcher2 | fletcher4 | sha256
        compression     YES      YES   on | off | lzjb | gzip | gzip-[1-9]
        copies          YES      YES   1 | 2 | 3
        devices         YES      YES   on | off
        exec            YES      YES   on | off
        mountpoint      YES      YES   <path> | legacy | none
        nbmand          YES      YES   on | off
        normalization    NO      YES   none | formC | formD | formKC | formKD
        quota           YES       NO   <size> | none
        readonly        YES      YES   on | off
        recordsize      YES      YES   512 to 128k, power of 2
        refquota        YES       NO   <size> | none
        refreservation  YES       NO   <size> | none
        reservation     YES       NO   <size> | none
        setuid          YES      YES   on | off
        shareiscsi      YES      YES   on | off | type=<type>
        sharenfs        YES      YES   on | off | share(1M) options
        sharesmb        YES      YES   on | off | sharemgr(1M) options
        snapdir         YES      YES   hidden | visible
        utf8only         NO      YES   on | off
        version         YES       NO   1 | 2 | 3 | current
        volblocksize     NO      YES   512 to 128k, power of 2
        volsize         YES       NO   <size>
        vscan           YES      YES   on | off
        xattr           YES      YES   on | off
        zoned           YES      YES   on | off
Sizes are specified in bytes with standard units such as K, M, G, etc.
User-defined properties can be specified by using a name containing a colon (:).

If I just use zfs list:
NAME               USED  AVAIL  REFER  MOUNTPOINT
glowpool          29.7G   238G  26.6G  /glowpool
glowpool/glows     234M   238G   234M  /glowpool/glows
glowpool/gorking  2.90G   238G  2.90G  /glowpool/gorking

# 4  
Old 11-04-2009
If my previous command didn't work, try this:
zfs list -t filesystem,snapshot,volume

The thing is, you probably have a snapshot of that deleted file somewhere, and that is what is consuming the space.
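If a snapshot does turn up, destroying it releases the blocks it pins. A sketch using the pool name from this thread (the snapshot name after the @ is only a placeholder, substitute whatever the list actually shows):

```shell
# List only snapshots, with the space each one holds
zfs list -t snapshot -o name,used,referenced

# Destroy a specific snapshot to free the blocks it pins
# (snapshot name is a placeholder)
zfs destroy glowpool@test
```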
# 5  
Old 11-04-2009
What does this show:
df -k /glowpool /glowpool/glows /glowpool/gorking
ls -al /glowpool /glowpool/glows /glowpool/gorking

# 6  
Old 11-09-2009
No snapshots at all.

Originally Posted by jlliagre
What does this show:
df -k /glowpool /glowpool/glows /glowpool/gorking
ls -al /glowpool /glowpool/glows /glowpool/gorking

df -k /glowpool /glowpool/glows /glowpool/gorking
Filesystem            kbytes    used   avail capacity  Mounted on
glowpool           280756224 27845021 241270760  11%   /glowpool
glowpool/glows     280756224  6304776 241270760   3%   /glowpool/glows
glowpool/gorking   280756224  5335415 241270760   3%   /glowpool/gorking
ls -al /glowpool
total 69
drwxrwxrwx 4 root sys 4 Nov 3 13:32 .
drwxr-xr-x 40 root root 1024 Nov 5 08:45 ..
drwxrwxrwx 18 webservd webservd 18 Nov 4 09:22 glows
drwxrwxrwx 4 webservd staff 245 Nov 6 09:13 gorking

The listings for /glowpool/glows and /glowpool/gorking are too long to post.
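One quick check that narrows this down: compare the space df reports against the space du can actually reach through the directory tree. A sketch using the path from this thread:

```shell
# Space the filesystem reports as used on the pool root
df -k /glowpool

# Space reachable by walking the directory tree
du -sk /glowpool

# If du reports far less than df, the missing space is either hidden
# under an overlay mount or pinned by an open-but-deleted file
```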

# 7  
Old 11-09-2009
It might be that the files you deleted are still held open by some process. Alternatively, the glows or gorking mount points may be directories that also contain files hidden underneath the mounts (an overlay mount).
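Both possibilities above can be checked directly. A sketch (dataset names are from this thread; on Solaris, fuser and pfiles take the place of lsof):

```shell
# 1) Overlay mount: unmount the child filesystems and inspect the
#    underlying directories -- files hidden by the mounts become visible
zfs umount glowpool/glows
zfs umount glowpool/gorking
ls -al /glowpool
zfs mount -a

# 2) Open-but-deleted file: find processes still holding files on the pool
fuser -c /glowpool
# then inspect a suspect process's open files (PID is a placeholder)
# pfiles <pid>
```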