Posted by fishface on 08-01-2017 at 12:49 PM
Zpool with 3 2-way mirrors in a pool

I have a single zpool made up of three 2-way mirrors (3 x 2-way mirror vdevs), and it currently has a degraded disk in mirror-2. I know the pool can survive a single drive failure, but looking at the layout below, how many drive failures can it suffer before it is no good? On the face of it, I thought I could lose a further two drives (one from each of the other mirrors), or maybe even five in total, but then I read that if any one vdev fails completely, the entire zpool is toast.

Code:
pool1                       DEGRADED
  mirror-0                  ONLINE
    c0t5000C5004835382300   ONLINE
    c0t5000CCA02533A897d0   ONLINE
  mirror-1                  ONLINE
    c0t5000CCA03CA35A25d0   ONLINE
    c0t5000C5004867DF01d0   ONLINE
  mirror-2                  DEGRADED
    c0t5000CCA06E11B840d0   ONLINE
    c0t5000C5004821F08Fd0   UNAVAIL
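
For reference, once I get a replacement drive in hand, I assume the fix is along these lines (the new device name below is only a placeholder, not a real disk on this box):

Code:
# Replace the UNAVAIL disk in mirror-2 with the new drive
zpool replace pool1 c0t5000C5004821F08Fd0 c0tNEWDISKd0

# Then watch the resilver until mirror-2 goes back ONLINE
zpool status -v pool1

Until that resilver completes, mirror-2 is presumably running on the single remaining disk, so that vdev has no redundancy at all.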


Moderator's Comments:
Please use CODE tags as required by forum rules!

Last edited by RudiC; 08-01-2017 at 02:12 PM. Reason: Added CODE tags.
 
