Full Discussion: Zpool mirroring
Post 303015188 by Peasant on Friday, 30 March 2018, 12:12 AM
Yes. If a hot spare has been added to the pool and the autoreplace property is set to on for the zpool, the following happens:

1. The FMA agent detects the fault and replaces the failed device with the hot spare.
2. Once the failed disk has been physically replaced and resilvered, the hot spare is detached and returned to the pool as an available spare.

If autoreplace is not set, the administrator has to perform step 2 by hand, as sketched below.
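
A minimal sketch of both paths, assuming a pool named tank and hypothetical device names:

    # add a hot spare and let ZFS engage it automatically on failure
    zpool add tank spare c2t5d0
    zpool set autoreplace=on tank

    # manual path: after swapping the failed disk c1t2d0, resilver it,
    # then detach the spare so it returns to the available state
    zpool replace tank c1t2d0
    zpool detach tank c2t5d0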

This is documented quite well.

Regards
Peasant.
 

metahs(1M)						  System Administration Commands						metahs(1M)

NAME
       metahs - manage hot spares and hot spare pools

SYNOPSIS
       /usr/sbin/metahs [-s setname] -a all component
       /usr/sbin/metahs [-s setname] -a hot_spare_pool [component]
       /usr/sbin/metahs [-s setname] -d hot_spare_pool [component]
       /usr/sbin/metahs [-s setname] -d all component
       /usr/sbin/metahs [-s setname] -e component
       /usr/sbin/metahs [-s setname] -r hot_spare_pool component-old component-new
       /usr/sbin/metahs [-s setname] -r all component-old component-new
       /usr/sbin/metahs [-s setname] -i [hot_spare_pool...]

DESCRIPTION
       The metahs command manages existing hot spares and hot spare pools. It is used to add, delete, enable, and replace components (slices) in hot spare pools. Like the metainit command, the metahs command can also create an initial hot spare pool. The metahs command does not replace a component of a metadevice; that function is performed by the metareplace command.

       Hot spares are always in one of three states: available, in-use, or broken. Available hot spares are running and ready to accept data, but are not currently being written to or read from. In-use hot spares are currently being written to and read from. Broken hot spares are out of service and should be repaired. The status of hot spares is displayed when metahs is invoked with the -i option.

       Solaris Volume Manager supports storage devices and logical volumes, including hot spares, greater than 1 terabyte (TB) when Solaris 10 is running a 64-bit kernel. If a system with large volumes or hot spares is rebooted under a 32-bit Solaris 10 kernel, the large volumes are visible through metastat output, but they cannot be accessed, modified, or deleted, and no new large volumes can be created. Any volumes or file systems on a large volume in this situation are also unavailable. If a system with large volumes is rebooted under a version of Solaris prior to Solaris 10, Solaris Volume Manager will not start. All large volumes must be removed before Solaris Volume Manager runs under another version of the Solaris Operating Environment.
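
       For instance, the current state (available, in-use, or broken) of every spare in a pool can be checked with the -i option described below. A minimal sketch, assuming a hot spare pool named hsp003 exists:

           # metahs -i hsp003

       The report lists each slice in the pool together with its current state.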

OPTIONS
       Root privileges are required for any of the following options except -i. The following options are supported:

       -a all component
              Add component to all hot spare pools. all is not case sensitive.

       -a hot_spare_pool [component]
              Add the component to the specified hot_spare_pool. hot_spare_pool is created if it does not already exist.

       -d all component
              Delete component from all the hot spare pools. The component cannot be deleted if it is in the in-use state.

       -d hot_spare_pool [component]
              Delete hot_spare_pool, if the hot_spare_pool is both empty and not referenced by a metadevice. If component is specified, it is deleted from the hot_spare_pool. Hot spares in the in-use state cannot be deleted.

       -e component
              Enable component to be available for use as a hot spare. The component can be enabled if it is in the broken state and has been repaired.

       -i [hot_spare_pool...]
              Display the status of the specified hot_spare_pool, or of all hot spare pools if none is specified.

       -r all component-old component-new
              Replace component-old with component-new in all hot spare pools with which the component is associated. Components cannot be replaced from any hot spare pool if the old hot spare is in the in-use state.

       -r hot_spare_pool component-old component-new
              Replace component-old with component-new in the specified hot_spare_pool. Components cannot be replaced from a hot spare pool if the old hot spare is in the in-use state.

       -s setname
              Specify the name of the diskset on which metahs works. Using the -s option causes the command to perform its administrative function within the specified diskset. Without this option, the command performs its function on local hot spare pools.
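
       As a brief sketch of diskset scoping with -s, assuming a shared diskset named relo exists and contains the drive c2t0d0 (both names hypothetical):

           # metahs -s relo -a hsp002 c2t0d0s2

       Without -s, the identical command would operate on the local hot spare pools instead.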

OPERANDS
       The following operands are supported:

       component
              The logical name for the physical slice (partition) on a disk drive, such as /dev/dsk/c0t0d0s2.

       hot_spare_pool
              Hot spare pools must be of the form hspnnn, where nnn is a number in the range 000-999.

EXAMPLES
       Example 1: Adding a Hot Spare to a Hot Spare Pool

       The following example adds a hot spare /dev/dsk/c0t0d0s7 to a hot spare pool hsp003:

           # metahs -a hsp003 c0t0d0s7

       When the hot spare is added to the pool, the existing order of the hot spares already in the pool is preserved. The new hot spare is added at the end of the list of hot spares in the specified hot spare pool.

       Example 2: Adding a Hot Spare to All Currently Defined Pools

       This example adds a hot spare to all hot spare pools that are currently defined:

           # metahs -a all c0t0d0s7

       The keyword all in this example specifies adding the hot spare, /dev/dsk/c0t0d0s7, to all the hot spare pools.

       Example 3: Deleting a Hot Spare

       This example deletes a hot spare, /dev/dsk/c0t0d0s7, from a hot spare pool, hsp003:

           # metahs -d hsp003 c0t0d0s7

       When you delete a hot spare, the position of the remaining hot spares in the pool changes to reflect the new order. For instance, if in this example /dev/dsk/c0t0d0s7 were the second of three hot spares, after deletion the third hot spare would move to the second position.

       Example 4: Replacing a Hot Spare

       This example replaces a hot spare that was previously defined:

           # metahs -r hsp001 c0t1d0s0 c0t3d0s0

       In this example, the hot spare /dev/dsk/c0t1d0s0 is replaced by /dev/dsk/c0t3d0s0. The order of the hot spares does not change.
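
       One case the examples above do not cover is returning a repaired spare to service. A minimal sketch, assuming the slice /dev/dsk/c0t0d0s7 is in the broken state and its underlying disk has been repaired or replaced:

           # metahs -e c0t0d0s7

       Afterwards the slice is reported as available again in the hot spare pools that contain it.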

EXIT STATUS
       The following exit values are returned:

       0      Successful completion.

       >0     An error occurred.

ATTRIBUTES
       See attributes(5) for descriptions of the following attributes:

       +-----------------------------+-----------------------------+
       |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
       +-----------------------------+-----------------------------+
       | Availability                | SUNWmdu                     |
       +-----------------------------+-----------------------------+

SEE ALSO
       mdmonitord(1M), metaclear(1M), metadb(1M), metadetach(1M), metainit(1M), metaoffline(1M), metaonline(1M), metaparam(1M), metarecover(1M), metarename(1M), metareplace(1M), metaroot(1M), metaset(1M), metassist(1M), metastat(1M), metasync(1M), metattach(1M), md.tab(4), md.cf(4), mddb.cf(4), attributes(5), md(7D)

       Solaris Volume Manager Administration Guide

WARNINGS
       Do not create large (>1 TB) volumes if you expect to run the Solaris Operating Environment with a 32-bit kernel or if you expect to use a version of the Solaris Operating Environment prior to Solaris 10.

SunOS 5.10                        8 Aug 2003                          metahs(1M)