Full Discussion: unable to import zfs pool
Operating Systems > Solaris | Post 302329003 by cy1972 | Thursday 25th of June 2009, 05:06:19 PM
zpool import -D should list any destroyed pools that are still visible to the system (a plain zpool import lists exported ones) and may help show what state the pool's devices are in. If that turns up nothing, fmdump -eV on the system may show what type of vdevs the pool was built from, plus other details recorded about the pool, such as its GUIDs.
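Roughly the commands I'd start with (a sketch from memory; the trailing comments are just annotations):

zpool import        # lists exported pools the host can currently see
zpool import -D     # same listing, but including destroyed pools
fmdump -eV          # verbose dump of the FMA error log, incl. ZFS ereports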

e.g. the detail section of one ZFS ereport:

pool = vault
pool_guid = 0x2bb202be54c462e
pool_context = 2
pool_failmode = wait
vdev_guid = 0xaa3f2fd35788620b
vdev_type = mirror
parent_guid = 0x2bb202be54c462e
parent_type = root
prev_state = 0x7
__ttl = 0x1
__tod = 0x4a27c183 0x9d8492d

Looks to me like the pool was just a stripe of two RAID5 devices on the EMC, and you've lost at least one of those devices, hence the loss of the pool.
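Before writing it off, I'd check whether both EMC LUNs are still visible to the host and then try a forced import. A rough sketch (the pool name is a placeholder for yours):

echo | format                  # non-interactive listing of the disks the host sees
zpool import                   # does the pool show up at all?
zpool import -f <poolname>     # force the import if the devices look intact
zpool import -Df <poolname>    # same, for a pool that was marked destroyed

If one of the two RAID5 LUNs really is gone, though, a plain stripe has no redundancy to rebuild from, and the import will keep failing until that device is visible again.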
 
