How to clear a removed single-disk pool from being listed by zpool import?


 
# 8  
Old 08-11-2018
Code:
# zpool clear fido

???
# 9  
Old 08-11-2018
Try:
Code:
zpool export fido
devfsadm -Cv

Now check whether the import still complains.
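i.e., rescan with a bare import and see whether the ghost pool is still listed:
Code:
# with no arguments, zpool import only scans for and lists importable pools
zpool import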

Regards
Peasant.
# 10  
Old 08-12-2018
Thank you very much @hicksd8 and @Peasant for the suggestions.

Unfortunately, neither zpool clear fido nor zpool export fido; devfsadm -Cv helped: I'm still getting the same ghost entry from zpool import as in my first post.

I think that's because the single-device pool named "fido" is no longer attached, so zpool commands cannot affect it.

I also tried zpool labelclear -f fido, but it doesn't work either, I believe for the same reason.
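
(Side note for anyone searching later: zpool labelclear operates on a device path, not a pool name, which is probably why my command was rejected. Against a hypothetical spare, detached disk it would look like the sketch below. It is not an option here anyway, since the stray label sits on the boot disk, and clearing labels there would take the live rpool with it.)
Code:
# labelclear expects a vdev (device node), not a pool name.
# c9t0d0 is a hypothetical spare disk -- NEVER point this at a
# disk that backs an imported pool, such as this boot disk.
zpool labelclear -f /dev/rdsk/c9t0d0s0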

Last night I wondered: if the disk isn't even attached, where does zpool import get that ghost information?

I dug further with the zdb command, which revealed this "label" on c1t0025385971B16535d0, the server's boot disk:

Code:
# zdb -l /dev/rdsk/c1t0025385971B16535d0
------------------------------------
LABEL 0
------------------------------------
failed to unpack label 0
------------------------------------
LABEL 1
------------------------------------
failed to unpack label 1
------------------------------------
LABEL 2
------------------------------------
    version: 5000
    name: 'fido'
    state: 0
    txg: 30770
    pool_guid: 7452075738474086658
    hostid: 647188743
    hostname: ''
    top_guid: 7525102254531229074
    guid: 7525102254531229074
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 7525102254531229074
        path: '/dev/nvd0p3'
        whole_disk: 1
        metaslab_array: 37
        metaslab_shift: 32
        ashift: 12
        asize: 509746872320
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
------------------------------------
LABEL 3
------------------------------------
failed to unpack label 3
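
As an aside, on illumos the same disk is reachable through whole-disk, slice (s0, s1, ...) and fdisk partition (p0, p1, ...) nodes, and zdb may only find labels on some of them. A purely illustrative loop to check a few nodes:
Code:
# which device node(s) actually carry ZFS labels?
# the node list is illustrative; adjust to what exists in /dev/rdsk
for dev in /dev/rdsk/c1t0025385971B16535d0s0 \
           /dev/rdsk/c1t0025385971B16535d0p0; do
    echo "== $dev =="
    zdb -l "$dev" 2>&1 | egrep 'name:|failed'
done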

So, zpool import apparently reads that stray label on the boot disk, left over from a long-gone pool, and still believes the pool is available on the system.
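
(One way to test that theory, I suppose: zpool import scans /dev/dsk by default, but -d points it at a directory instead. Fill a scratch directory, say a hypothetical /tmp/devs, with symlinks to every disk except the boot disk, and the ghost entry should disappear from the listing.)
Code:
# restrict the import scan to a hand-picked set of device links
mkdir /tmp/devs
ln -s /dev/dsk/c2t0d0s0 /tmp/devs/   # hypothetical data disk only
zpool import -d /tmp/devs            # "fido" should no longer appear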

One solution would be to perform a complete reinstall on that machine, wiping the boot disk completely with dd before the install.
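
(Roughly the sketch below, booted from install/rescue media so the disk is not in use. Obviously it destroys everything on the disk, including all four ZFS labels.)
Code:
# booted from rescue/install media -- the disk must NOT be in use;
# this destroys ALL data, partition tables and ZFS labels on it
dd if=/dev/zero of=/dev/rdsk/c1t0025385971B16535d0p0 bs=1048576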

Before doing that, would you know if it is at all possible to safely clear such a stray label from a boot disk?

# 11  
Old 08-16-2018
I'm not sure I follow...

So the system is now installed on c1t0025385971B16535d0 (using the whole disk), and zpool import complains that this very same disk is part of the fido zpool?
This is quite strange.

Can we see the output of:
Code:
zpool status rpool

Output from the format command, printing the partitions of that disk, would also be helpful.
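
(Or non-interactively, something like prtvtoc against the raw device; on an EFI-labeled disk use the whole-disk node c1t0025385971B16535d0 instead of s2.)
Code:
# dump the partition table without entering interactive format(1M)
prtvtoc /dev/rdsk/c1t0025385971B16535d0s2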

Regards
Peasant.
# 12  
Old 08-18-2018
I completely reinstalled OmniOS, and I was able to replicate the issue with these steps:


- Install a second NVMe drive
- Install FreeBSD on that and boot from it
- Boot again, this time from OmniOS
- Import the FreeBSD pool (on the second NVMe)
- Power off without exporting it
- Remove the FreeBSD NVMe drive from the server
- Boot from OmniOS

At that point, zpool import shows the same message as in the original post.

In that state, there is nothing one can do to remove the stray label from the boot disk.
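
The obvious lesson: always export a foreign pool before powering off and pulling its disk, so no stale state is left behind for the import scanner to trip over. In my reproduction that would have been:
Code:
# exporting cleanly removes the pool from the importable-pool listing
zpool export fido
poweroff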

Although it is not a real bug, I reported it to the illumos devs as a feature request, i.e., the ability to remove stray leftover labels caused by the above steps.

I am adding the "solved" tag, and I will report back with updates as soon as I have them.


Thanks to all for the help and advice!