Full Discussion: Zpool mirroring
Operating Systems > Solaris > Zpool mirroring — Post 303015188 by Peasant, Friday 30 March 2018, 12:12:18 AM
Yes. If a hot spare has been added to the pool and the autoreplace property is set on the zpool, the following happens:

1. The FMA agent detects the fault and replaces the failed device with the spare.
2. Once the failed disk is replaced and resilvered, the hot spare is detached and returned to the pool as an available hot spare.

Otherwise, if the property is not set, the administrator has to perform step 2 by hand.
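As an illustration, the workflow above can be sketched with zpool commands. This is only a sketch: the pool name tank and the device names are hypothetical, so adapt them to the actual configuration before running anything.

```shell
# Enable automatic replacement and add a hot spare to the pool
# (pool and device names are examples, not from the original post):
zpool set autoreplace=on tank
zpool add tank spare c2t3d0

# Check whether the spare has taken over for a faulted disk:
zpool status tank

# Manual version of step 2: after the failed disk has been replaced
# and resilvering has completed, detach the spare so it returns to
# the pool's list of available hot spares:
zpool detach tank c2t3d0
```

Note that detaching the spare (rather than the original device) is what returns it to the spare list; detaching the original device instead would make the spare a permanent member of the pool.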

This is documented quite well.

Regards
Peasant.
 

metaclear(1M)            System Administration Commands            metaclear(1M)

NAME
       metaclear - delete active metadevices and hot spare pools

SYNOPSIS
       /usr/sbin/metaclear -h
       /usr/sbin/metaclear [-s setname] -a [-f]
       /usr/sbin/metaclear component
       /usr/sbin/metaclear [-s setname] [-f] metadevice... hot_spare_pool...
       /usr/sbin/metaclear [-s setname] -r [-f] metadevice... hot_spare_pool...
       /usr/sbin/metaclear [-s setname] -p component
       /usr/sbin/metaclear [-s setname] -p metadevice

DESCRIPTION
       The metaclear command deletes the specified metadevice or hot_spare_pool, or purges all
       soft partitions from the designated component. Once a metadevice or hot spare pool is
       deleted, it must be re-created using metainit before it can be used again. Any metadevice
       currently in use (open) cannot be deleted.

OPTIONS
       Root privileges are required for all of the following options except -h.

       -a            Deletes all metadevices and configured hot spare pools in the set named by
                     -s, or the local set by default.

       -f            Deletes (forcibly) a metadevice that contains a subcomponent in an error
                     state.

       -h            Displays usage message.

       -p            Deletes (purges) all soft partitions from the specified metadevice or
                     component.

       -r            Recursively deletes specified metadevices and hot spare pools, but does not
                     delete metadevices on which others depend.

       -s setname    Specifies the name of the diskset on which metaclear will work. Using the
                     -s option causes the command to perform its administrative function within
                     the specified diskset. Without this option, the command performs its
                     function on local metadevices and/or hot spare pools.

OPERANDS
       metadevice ...        Specifies the name(s) of the metadevice(s) to be deleted.

       component             Specifies the c*t*d*s* name(s) of the components containing soft
                             partitions to be deleted.

       hot_spare_pool ...    Specifies the name(s) of the hot spare pools to be deleted in the
                             form hspnnn, where nnn is a number in the range 000-999.

EXAMPLES
       Example 1: Deleting Various Devices

       The following example deletes a metadevice named d10.

         # metaclear /dev/md/dsk/d10

       The following example deletes all local metadevices and hot spare pools on the system.

         # metaclear -a

       The following example deletes a mirror, d20, with a submirror in an error state.

         # metaclear -f d20

       The following example deletes a hot spare pool, hsp001.

         # metaclear hsp001

       The following example deletes a soft partition, d23.

         # metaclear d23

       The following example purges all soft partitions on the slice c2t3d5s2 if those
       partitions are not being used by other metadevices or are not open.

         # metaclear -p c2t3d5s2

       The following example purges soft partitions from a metadevice.

         # metaclear -p d2
         d3: Soft Partition is cleared
         d4: Soft Partition is cleared
         d5: Soft Partition is cleared

EXIT STATUS
       The following exit values are returned:

       0     Successful completion.

       >0    An error occurred.

ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +-----------------------------+-----------------------------+
       |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
       +-----------------------------+-----------------------------+
       |Availability                 |SUNWmdu                      |
       +-----------------------------+-----------------------------+

SEE ALSO
       mdmonitord(1M), metadb(1M), metadetach(1M), metahs(1M), metainit(1M), metaoffline(1M),
       metaonline(1M), metaparam(1M), metarecover(1M), metarename(1M), metareplace(1M),
       metaroot(1M), metaset(1M), metassist(1M), metastat(1M), metasync(1M), metattach(1M),
       md.tab(4), md.cf(4), mddb.cf(4), attributes(5), md(7D)

       Solaris Volume Manager Administration Guide

SunOS 5.10                           8 Aug 2003                           metaclear(1M)