Operating Systems > Solaris: Cannot remove disk added to zpool
Post 302930889 by LittleLebowski on Thursday 8th of January 2015, 12:16:38 PM
Quote:
Originally Posted by Peasant
A hot spare will not protect your data in case one of the disks in a raid 0 zpool fails.

Spares are used in a raid-protected setup (raid1, raidz, etc.): when one disk fails, the array is rebuilt using the hot spare, either automatically or manually depending on the zpool autoreplace policy.

You will need to go with jlliagre's suggestion, or add two more disks to the zpool as mirrors of the two devices currently present, or risk losing data when one of the disks in the zpool fails.
Agreed, and thanks to you both. Next week I'll be adding one more disk and adding redundancy. Glad this is a test box :)
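For reference, converting a two-disk stripe into mirrored pairs is just one zpool attach per existing disk. A minimal sketch, assuming the pool is named data and is striped across c0t1d0 and c0t2d0, with c0t3d0 and c0t4d0 as the new disks (all device names here are examples, not taken from the thread):

# each attach turns the named top-level disk into a two-way mirror
zpool attach data c0t1d0 c0t3d0
zpool attach data c0t2d0 c0t4d0

# wait for the resilver to finish before trusting the redundancy
zpool status data

Once both resilvers complete, the pool is effectively RAID-10, and only then is a hot spare worth adding.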
 

10 More Discussions You Might Find Interesting

1. Solaris

Remove the exported zpool

I had a pool which was exported, and due to some issues on my SAN I was never able to import it again. Can anyone tell me how I can destroy the exported pool to free up the LUN? I tried to create a new pool on the same LUN, but it gives me the following error: # zpool create emcpool4 emcpower0c... (0 Replies)
Discussion started by: fugitive
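For anyone who lands here with the same problem: zpool destroy only works on an imported pool, so a pool that can never be imported has to be overwritten or have its labels wiped. A hedged sketch, reusing the names from the post above:

# force creation over the LUN that still carries the old pool's label
zpool create -f emcpool4 emcpower0c

# or, on releases that ship it, clear the stale ZFS label directly
# (the device path is a guess; use the path zpool import reports)
zpool labelclear -f /dev/dsk/emcpower0c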

2. Solaris

Can't remove a LUN from a Zpool!

I am not seeing any way to remove a LUN from a zpool... Am I missing something? Or do I have to destroy the zpool and recreate it? (2 Replies)
Discussion started by: BG_JrAdmin
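Short answer, sketched with made-up names: classic Solaris ZFS cannot remove a top-level data vdev, so the options are limited to what zpool remove and zpool detach support.

# zpool remove only works for hot spares, cache and log devices here
zpool remove tank c2t5d0

# a disk that is half of a mirror can be detached instead
zpool detach tank c2t5d0

A plain striped data LUN cannot be pulled out; the pool has to be destroyed and recreated (top-level device removal only arrived in much later ZFS releases).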

3. Red Hat

Partitioning newly added disk to Redhat

Hi everyone, I have added a new virtual disk to the OS. The main point is that I need to bring this whole disk under LVM control. Is it necessary to partition the disk using the fdisk command and assign partition type '8e', or can I directly add that disk into LVM by running the pvcreate command without... (2 Replies)
Discussion started by: bobby320
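For the record, the partition step is optional; LVM accepts a whole disk. A minimal sketch, assuming the new disk appears as /dev/sdb and the volume group is called vgdata (both names are placeholders):

# label the whole disk as a physical volume, no fdisk needed
pvcreate /dev/sdb

# add it to the existing volume group and verify
vgextend vgdata /dev/sdb
pvs

The 8e partition type only matters if you do create a partition first; it marks the partition for LVM but is not required when using the bare disk.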

4. AIX

Remove the disk online

Hi, I have one disk missing in my NIMVG. My question is: can I remove this hdisk2 online? A few of the file systems seem to be spread over 7 PVs; that's why I'm worried. Can someone suggest whether I can replace this disk online? Also, how do I check if there is some data present on hdisk2 alone... (2 Replies)
Discussion started by: newtoaixos
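The usual online sequence on AIX looks roughly like this; hdisk2 and nimvg come from the post, while hdisk3 is a hypothetical target disk with enough free space:

# list the logical volumes that still have partitions on hdisk2
lspv -l hdisk2

# if anything is listed, move it to another disk in the VG first
migratepv hdisk2 hdisk3

# then drop the disk from the volume group and delete the device
reducevg nimvg hdisk2
rmdev -dl hdisk2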

5. Solaris

Bad exchange descriptor : not able to remove files under zpool

Hi, one of my zones went down, and when I booted it up I could see the pool in a degraded state with some checksum errors. We brought the pool online after scrubbing, but a few files are showing the error "Bad exchange descriptor". Please let me know how to remove these files. (2 Replies)
Discussion started by: chidori
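"Bad exchange descriptor" (EXDEV) on ZFS usually means the file's blocks are permanently damaged. The standard cleanup, sketched with a placeholder pool name:

# list the files with permanent errors by name
zpool status -v mypool

# delete (or restore from backup) each listed file, then re-check
zpool scrub mypool
zpool clear mypool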

6. Solaris

Add disk to zpool

Hi, quick question. I have a data zpool that consists of 1 disk.

  pool: data
 state: ONLINE
 scrub: none requested
config:
        NAME                       STATE   READ WRITE CKSUM
        data                       ONLINE     0     0     0
          c0t50002AC0014B06BEd0    ONLINE... (2 Replies)
Discussion started by: general_lee
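Worth spelling out, since it is exactly the trap from the main thread: zpool add stripes in a new top-level disk (no redundancy), while zpool attach mirrors an existing one. A sketch using the disk from the post plus a hypothetical second disk:

# stripe: grows capacity, still no redundancy, and cannot be undone
zpool add data c0t50002AC0014B06BFd0

# mirror: attach the new disk to the existing one instead
zpool attach data c0t50002AC0014B06BEd0 c0t50002AC0014B06BFd0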

7. AIX

LPAR cannot added disk

Dear all, I created a new partition through the "Integrated Virtualization Manager", but there was an error when I added a new disk to the partition. The disk itself was created without any issue. The error below appeared when adding the disk to the partition: An error occurred while modifying the assignments... (5 Replies)
Discussion started by: lckdanny

8. Solaris

Exporting zpool sitting on different disk partition

Hello, I need some help recovering a ZFS pool. Here is the scenario. There are two disks: c0t0d0 - this is the good disk; I cloned it from another server and booted the server from this disk. c0t1d0 - this is the original disk of this server, which has errors. I am able to mount it on /mnt so that I can copy... (1 Reply)
Discussion started by: solaris_1977
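A pool sitting on the second disk can usually be pulled in with zpool import; a hedged sketch (the pool name is a guess, and -R keeps its mountpoints from colliding with the running system):

# scan all devices for importable pools
zpool import

# import under an alternate root, forcing if the pool was
# last in use on the original installation
zpool import -f -R /a mypool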

9. Solaris

Replace zpool with another disk

Issue: I had a zpool which was full:

pool_temp1   199G   197G   1.56G   99%   ONLINE   -
pool_temp2   199G   196G   3.09G   98%   ONLINE   -

As you can see, full, so I replaced it with a larger disk:

zpool replace pool_temp1 c3t600144F0FF8BA036000058CC1DB80008d0s0... (2 Replies)
Discussion started by: rrodgers
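One detail that bites people after zpool replace: the extra space only shows up once expansion is allowed. A sketch using the pool from the post (the device argument is a placeholder for the new, larger disk):

# let the pool grow automatically when a bigger disk resilvers in
zpool set autoexpand=on pool_temp1

# or expand the replaced vdev by hand after the resilver finishes
zpool online -e pool_temp1 <new_disk>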

10. Solaris

How to clear a removed single-disk pool from being listed by zpool import?

On an OmniOS server, I removed a single-disk pool I was using for testing. Now, when I run zpool import, it shows the pool as FAULTED, since that single disk is not available anymore.

# zpool import
   pool: fido
     id: 7452075738474086658
  state: FAULTED
 status: The pool was last... (11 Replies)
Discussion started by: priyadarshan
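The usual fix on OmniOS/illumos is to wipe the stale label from whichever device still carries it, since zpool import simply scans device labels. A hedged sketch with a placeholder device path:

# clear the leftover ZFS label so the pool stops being detected
zpool labelclear -f /dev/dsk/c1t2d0s0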
raidtab(5)							File Formats Manual							raidtab(5)

NAME
       raidtab - configuration file for md (RAID) devices

DESCRIPTION
       /etc/raidtab is the default configuration file for the raid tools
       (raidstart and company). It defines how RAID devices are configured
       on a system.

FORMAT
       /etc/raidtab has multiple sections, one for each md device which is
       being configured. Each section begins with the raiddev keyword. The
       order of items in the file is important. Later raiddev entries can
       use earlier ones (which allows RAID-10, for example), and the
       parsing code isn't overly bright, so be sure to follow the ordering
       in this man page for best results.

       Here's a sample md configuration file:

       #
       # sample raiddev configuration file
       # 'old' RAID0 array created with mdtools.
       #
       raiddev /dev/md0
               raid-level              0
               nr-raid-disks           2
               persistent-superblock   0
               chunk-size              8
               device                  /dev/hda1
               raid-disk               0
               device                  /dev/hdb1
               raid-disk               1

       raiddev /dev/md1
               raid-level              5
               nr-raid-disks           3
               nr-spare-disks          1
               persistent-superblock   1
               parity-algorithm        left-symmetric
               device                  /dev/sda1
               raid-disk               0
               device                  /dev/sdb1
               raid-disk               1
               device                  /dev/sdc1
               raid-disk               2
               device                  /dev/sdd1
               spare-disk              0

       Here is more information on the directives in raid configuration
       files; the options are listed here in the same order they should
       appear in the actual configuration file.

       raiddev device
              This introduces the configuration section for the stated
              device.

       nr-raid-disks count
              Number of raid devices in the array; there should be count
              raid-disk entries later in the file. (The current maximum
              limit for RAID devices, including spares, is 12 disks. This
              limit is already extended to 256 disks in experimental
              patches.)

       nr-spare-disks count
              Number of spare devices in the array; there should be count
              spare-disk entries later in the file. Spare disks may only
              be used with RAID4 and RAID5, and allow the kernel to
              automatically build new RAID disks as needed. It is also
              possible to add/remove spares at run-time via
              raidhotadd/raidhotremove; care has to be taken that the
              /etc/raidtab configuration exactly follows the actual
              configuration of the array. (raidhotadd/raidhotremove does
              not change the configuration file.)

       persistent-superblock 0/1
              Newly created RAID arrays should use a persistent
              superblock. A persistent superblock is a small disk area
              allocated at the end of each RAID device; this helps the
              kernel to safely detect RAID devices even if disks have been
              moved between SCSI controllers. It can be used for
              RAID0/LINEAR arrays too, to protect against accidental disk
              mixups. (The kernel will either correctly reorder disks, or
              will refuse to start up an array if something has happened
              to any member disk. Of course, for the 'fail-safe' RAID
              variants (RAID1/RAID5), spares are activated if any disk
              fails.)

              Every member disk/partition/device has a superblock, which
              carries all information necessary to start up the whole
              array. (For autodetection to work, all the 'member' RAID
              partitions should be marked type 0xfd via fdisk.) The
              superblock is not visible in the final RAID array and cannot
              be destroyed accidentally through usage of the md device
              files; all RAID data content is available for filesystem
              use.

       parity-algorithm which
              The parity algorithm to use with RAID5. It must be one of
              left-asymmetric, right-asymmetric, left-symmetric, or
              right-symmetric. left-symmetric is the one that offers
              maximum performance on typical disks with rotating platters.

       chunk-size size
              Sets the stripe size to size kilobytes. Has to be a power of
              2 and has a compilation-time maximum of 4M (MAX_CHUNK_SIZE
              in the kernel driver). Typical values are anything from 4k
              to 128k; the best value should be determined by
              experimenting on a given array, and a lot depends on the
              SCSI and disk configuration.

       device devpath
              Adds the device devpath to the list of devices which
              comprise the raid system. Note that this command must be
              followed by one of raid-disk, spare-disk, or parity-disk.
              Also note that it's possible to recursively define RAID
              arrays, i.e. to set up a RAID5 array of RAID5 arrays (thus
              achieving two-disk failure protection, at the price of more
              disk space spent on RAID5 checksum blocks).

       raid-disk index
              The most recently defined device is inserted at position
              index in the raid array.

       spare-disk index
              The most recently defined device is inserted at position
              index in the spare disk array.

       parity-disk index
              The most recently defined device is moved to the end of the
              raid array, which forces it to be used for parity.

       failed-disk index
              The most recently defined device is inserted at position
              index in the raid array as a failed device. This allows you
              to create raid 1/4/5 devices in degraded mode - useful for
              installation. Don't use the smallest device in an array for
              this; put this directive after the raid-disk definitions!

NOTES
       The raidtools are derived from the md-tools and raidtools packages,
       which were originally written by Marc Zyngier, Miguel de Icaza,
       Gadi Oxman, Bradley Ward Allen, and Ingo Molnar.

SEE ALSO
       raidstart(8), raid0run(8), mkraid(8), raidstop(8)
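To connect the man page to the tools it mentions: once /etc/raidtab describes an array, the old raidtools workflow initializes and starts it roughly like this (a sketch, not part of the man page itself):

# build the array described in /etc/raidtab (destroys existing data!)
mkraid /dev/md1

# start it; arrays without persistent superblocks need raidstart
raidstart /dev/md1

# check state and rebuild progress
cat /proc/mdstat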