unable to import zfs pool


 
# 8, 06-25-2009
Thanks to all for the effort, but I was able to import the zpool after disabling the first HBA card. I do not know the reason for this, but the pool is now imported and no disks were lost :-)
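
For reference, a rough sketch of the sequence described above (controller and pool names are placeholders, not taken from the thread):

  # cfgadm -al                       (list attachment points to identify the HBA's paths)
  # cfgadm -c unconfigure c2         (take the first HBA's attachment point offline; c2 is hypothetical)
  # zpool import                     (rescan devices for importable pools)
  # zpool import -f mypool           (import by name; -f only if the pool was not cleanly exported)

Duplicate paths presented through two HBAs without MPxIO can confuse the device scan, which may be why dropping one path let the import succeed; that explanation is a guess, not something confirmed in the thread.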

10 More Discussions You Might Find Interesting

1. Solaris

How to clear a removed single-disk pool from being listed by zpool import?

On an OmniOS server, I removed a single-disk pool I was using for testing. Now, when I run zpool import it shows as FAULTED, since that single disk is not available anymore. # zpool import pool: fido id: 7452075738474086658 state: FAULTED status: The pool was last... (11 Replies)
Discussion started by: priyadarshan
11 Replies
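
A hedged sketch of the usual cleanup (not from this thread): plain zpool import only lists a pool if some attached device still carries its label, so wiping that stale label makes the FAULTED entry disappear. The device name below is hypothetical:

  # zpool import                            (note which device the FAULTED pool is found on)
  # zpool labelclear -f /dev/dsk/c1t2d0s0   (destroy the leftover ZFS label on that device)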

2. Solaris

Zfs send to compressed pool?

I have a newly created zpool, and I have set compression on for the whole pool: # zfs set compression=on newPool Now I have used zfs send | zfs receive to copy a lot of snapshots to my newPool, but the compression is gone. I was hoping that I would be able to send snapshots to the new pool (which is... (0 Replies)
Discussion started by: kebabbert
0 Replies
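
One plausible cause (an assumption, not confirmed here): a stream generated with zfs send -R or -p carries the source's compression=off and overrides the pool-wide setting on receive. Two common remedies, with hypothetical dataset names:

  # zfs inherit compression newPool/fs                                  (drop the received value so the pool's compression=on applies)
  # zfs send -R tank/fs@snap | zfs receive -x compression newPool/fs    (newer OpenZFS only: exclude the property on receive)

Note that changing compression only affects data written afterwards; already-received blocks stay as they are.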

3. Solaris

Need to remove a disk from zfs pool

I accidentally added a disk to a different zpool instead of the pool where I wanted it. root@prtdrd21:/# zpool status cvfdb2_app_pool pool: cvfdb2_app_pool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM cvfdb2_app_pool ONLINE 0 0 0... (1 Reply)
Discussion started by: solaris_1977
1 Reply
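
For context, a hedged sketch (disk name hypothetical): recent ZFS releases (Solaris 11.4, OpenZFS 0.8+) can evacuate and remove a top-level disk or mirror vdev (though not from pools containing raidz vdevs), while older releases could only remove spares, cache, and log devices, leaving destroy-and-restore as the fallback:

  # zpool remove cvfdb2_app_pool c0t5d0    (starts an evacuation of the accidentally added vdev)
  # zpool status cvfdb2_app_pool           (watch the removal progress)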

4. BSD

Unable to create zfs zpool in FreeBSD 8.2: no such pool or dataset

I am trying to test simple zfs functionality on a FreeBSD 8.2 VM. When I try to run a 'zpool create' I receive the following error: # zpool create zfspool /dev/da0s1a cannot create 'zfspool': no such pool or dataset # zpool create zfspool /dev/da0 cannot create 'zfspool': no such pool or... (3 Replies)
Discussion started by: bstring
3 Replies
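
One frequent culprit for this class of error (an assumption, not a diagnosis from the thread) is stale label or partition metadata on the disk. A sketch of clearing it before retrying:

  # zpool labelclear -f /dev/da0                   (wipe any leftover ZFS label)
  # dd if=/dev/zero of=/dev/da0 bs=1m count=1      (or zero the start of the disk; this destroys its partition table)
  # zpool create zfspool /dev/da0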

5. Solaris

ZFS - overfilled pool

I installed Solaris 11 Express on my server machine a while ago. I created a Z2 RAID over five HDDs and created a few ZFS filesystems on it. Once I (unintentionally) managed to fill the pool completely with data and (to my surprise) the filesystems stopped working - I could not read/delete any... (3 Replies)
Discussion started by: RychnD
3 Replies
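
The common workarounds rely on freeing space without allocating new blocks first, since copy-on-write deletes need a little headroom. A hedged sketch with hypothetical names:

  # zfs list -t snapshot              (snapshots often pin the space a delete is supposed to release)
  # zfs destroy pool/fs@oldsnap
  # cp /dev/null /pool/fs/bigfile     (truncating in place can succeed where rm fails on a full pool)

A preventive trick is to keep an empty dataset with a reservation (for example zfs create -o reservation=1G pool/headroom) that can be shrunk in an emergency.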

6. Solaris

Best way to rename a ZFS Pool?

Other than export/import, is there a cleaner way to rename a pool without unmounting the FS? Something like, say, "zpool rename a b"? Thanks. (2 Replies)
Discussion started by: verdepollo
2 Replies
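
There is no zpool rename subcommand; the export/import pair is the supported way, and it effectively is the rename (a brief sketch):

  # zpool export a
  # zpool import a b    (re-import pool a under the new name b; data and dataset properties are untouched)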

7. Solaris

zfs pool migration

I need to migrate an existing raidz pool to a new raidz pool with larger disks. I need the mount points and attributes to migrate as well. What is the best procedure to accomplish this? The current pool is 6x 36GB disks (202GB capacity) and I am migrating to 5x 72GB disks (340GB capacity). (2 Replies)
Discussion started by: jac
2 Replies
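
A hedged sketch of the usual procedure (pool names hypothetical): a recursive snapshot plus a replication stream carries datasets, snapshots, mountpoints, and other properties across in one pass:

  # zfs snapshot -r oldpool@migrate
  # zfs send -R oldpool@migrate | zfs receive -F -d newpool    (-R preserves properties and descendants; -d keeps the dataset layout)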

8. Solaris

ZFS pool question

I created a pool the other day. I created a 10 GB file just for a test, then deleted it. I proceeded to create a few file systems. But for some reason the pool shows 10% full, yet the file systems are both at 1%? Both file systems share the same pool. When I ls -al the pool I just... (6 Replies)
Discussion started by: mrlayance
6 Replies
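
A few commands that usually explain this kind of discrepancy (a diagnostic sketch, not the thread's resolved answer):

  # zfs list -o space      (splits usage into usedbydataset, usedbysnapshots, usedbychildren)
  # zfs list -t snapshot   (a snapshot taken before the delete would still reference the 10 GB)
  # zpool list             (pool-level accounting also counts raidz parity and internal reservations)

Freed blocks are also released asynchronously, so pool usage can lag for a short while after a large delete.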

9. Infrastructure Monitoring

zfs - migrate from pool to pool

Here are the details. cnjr-opennms>root$ zfs list NAME USED AVAIL REFER MOUNTPOINT openpool 20.6G 46.3G 35.5K /openpool openpool/ROOT 15.4G 46.3G 18K legacy openpool/ROOT/rds 15.4G 46.3G 15.3G / openpool/ROOT/rds/var 102M ... (3 Replies)
Discussion started by: pupp
3 Replies
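
Where downtime matters, an incremental replication keeps the final cutover short (a sketch with hypothetical snapshot names; the target pool name is assumed):

  # zfs snapshot -r openpool@s1
  # zfs send -R openpool@s1 | zfs receive -F -d newpool           (bulk copy while the system stays up)
  # zfs snapshot -r openpool@s2
  # zfs send -R -i @s1 openpool@s2 | zfs receive -F -d newpool    (brief catch-up pass just before switching over)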

10. Solaris

ZFS Pool Mix-up

Hi all, I plan to install Solaris 10U6 on some SPARC servers using ZFS as the root pool, while keeping the current setup done by VxVM: - 2 internal disks: c0t0d0 and c0t1d0 - bootable root-volume (mirrored, both disks) - 1 non-mirrored swap slice - 1 non-mirrored slice for Live... (1 Reply)
Discussion started by: blicki
1 Reply
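
For orientation, a minimal sketch of what a Solaris 10 ZFS root pool ends up as (device names hypothetical): root pools must live on SMI-labeled slices rather than whole disks, and the installer or Live Upgrade normally creates this, so the commands are illustrative only:

  # zpool create -f rpool mirror c0t0d0s0 c0t1d0s0    (mirrored root pool on slice 0 of each disk)
  # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0    (SPARC boot block on the second half of the mirror)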