Full Discussion: Zpool mirroring
Post 303015120 by Peasant, 03-28-2018, 11:26 AM
A single disk failure of c1d1 or c1d2 in that zpool will result in complete data loss.
The zpool consists of those two single-disk (unprotected) vdevs plus a 6-way mirror vdev called mirror-6 across the rest of the disks.
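
To verify such a layout, one can inspect the pool (pool name hypothetical here):

        zpool status mypool

Any disk listed at the top level of the config, outside a mirror-N or raidz-N group, is an unprotected single-disk stripe.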

What also worries me is that those are LUNs, presumably already RAID-protected at the storage level (correct me if I'm wrong).
Adding an additional RAID layer on top of that is neither advised nor desired, and will surely impact the performance of both the storage system and the host.

If those LUNs are not RAID-protected outside the system and performance is required, one should configure the 8 x 100 GB disks as 4 top-level vdevs, each a mirrored pair of two disks.
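
A minimal sketch of that layout, assuming a hypothetical pool name tank and the device names c1d1 through c1d8:

        zpool create tank \
            mirror c1d1 c1d2 \
            mirror c1d3 c1d4 \
            mirror c1d5 c1d6 \
            mirror c1d7 c1d8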

This gives a 400 GB zpool consisting of 8 disks which can tolerate up to 4 disks failing, as long as no two failed disks belong to the same top-level vdev.

If both disks of any one mirror (vdev) fail, you will lose data from the zpool.

One should introduce a hot spare disk into the above configuration (a 9th disk) to reduce that probability to a minimum.
Of course, the instant death of 2 disks in the same mirror (at exactly the same time) will still cause zpool data loss.
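
A sketch of adding such a spare, continuing the hypothetical tank pool from above:

        zpool add tank spare c1d9

On Solaris, the fault management framework then activates the spare automatically when a member disk faults, and resilvering starts onto it.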

This is why you take backups: for those statistically improbable but still possible conditions.
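
A minimal snapshot-based backup sketch (pool, snapshot, remote host, and destination dataset names all hypothetical):

        zfs snapshot -r tank@backup1
        zfs send -R tank@backup1 | ssh backuphost zfs receive -F backup/tank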


Hope that helps.
Regards
Peasant.
raidtab(5)							File Formats Manual							raidtab(5)

NAME
       raidtab - configuration file for md (RAID) devices

DESCRIPTION
       /etc/raidtab is the default configuration file for the raid tools (raidstart and company). It defines how RAID
       devices are configured on a system.

FORMAT
       /etc/raidtab has multiple sections, one for each md device which is being configured. Each section begins with the
       raiddev keyword. The order of items in the file is important: later raiddev entries can use earlier ones (which
       allows RAID-10, for example), and the parsing code isn't overly bright, so be sure to follow the ordering in this
       man page for best results. Here's a sample md configuration file:

              #
              # sample raiddev configuration file
              # 'old' RAID0 array created with mdtools.
              #
              raiddev /dev/md0
                      raid-level                0
                      nr-raid-disks             2
                      persistent-superblock     0
                      chunk-size                8
                      device                    /dev/hda1
                      raid-disk                 0
                      device                    /dev/hdb1
                      raid-disk                 1

              raiddev /dev/md1
                      raid-level                5
                      nr-raid-disks             3
                      nr-spare-disks            1
                      persistent-superblock     1
                      parity-algorithm          left-symmetric
                      device                    /dev/sda1
                      raid-disk                 0
                      device                    /dev/sdb1
                      raid-disk                 1
                      device                    /dev/sdc1
                      raid-disk                 2
                      device                    /dev/sdd1
                      spare-disk                0

       Here is more information on the directives which are in raid configuration files; the options are listed in the same
       order they should appear in the actual configuration file.

       raiddev device
              This introduces the configuration section for the stated device.

       nr-raid-disks count
              Number of raid devices in the array; there should be count raid-disk entries later in the file. (The current
              maximum limit for RAID devices, including spares, is 12 disks. This limit is already extended to 256 disks in
              experimental patches.)

       nr-spare-disks count
              Number of spare devices in the array; there should be count spare-disk entries later in the file. Spare disks
              may only be used with RAID4 and RAID5, and allow the kernel to automatically build new RAID disks as needed.
              It is also possible to add/remove spares at runtime via raidhotadd/raidhotremove; care has to be taken that
              the /etc/raidtab configuration exactly follows the actual configuration of the array. (raidhotadd/
              raidhotremove does not change the configuration file.)

       persistent-superblock 0/1
              Newly created RAID arrays should use a persistent superblock. A persistent superblock is a small disk area
              allocated at the end of each RAID device; this helps the kernel to safely detect RAID devices even if disks
              have been moved between SCSI controllers. It can be used for RAID0/LINEAR arrays too, to protect against
              accidental disk mixups. (The kernel will either correctly reorder disks, or will refuse to start up an array
              if something has happened to any member disk. Of course, for the 'fail-safe' RAID variants (RAID1/RAID5),
              spares are activated if any disk fails.)

              Every member disk/partition/device has a superblock, which carries all information necessary to start up the
              whole array. (For autodetection to work, all the 'member' RAID partitions should be marked type 0xfd via
              fdisk.) The superblock is not visible in the final RAID array and cannot be destroyed accidentally through
              usage of the md device files; all RAID data content is available for filesystem use.

       parity-algorithm which
              The parity algorithm to use with RAID5. It must be one of left-asymmetric, right-asymmetric, left-symmetric,
              or right-symmetric. left-symmetric is the one that offers maximum performance on typical disks with rotating
              platters.

       chunk-size size
              Sets the stripe size to size kilobytes. Has to be a power of 2 and has a compilation-time maximum of 4M
              (MAX_CHUNK_SIZE in the kernel driver). Typical values are anything from 4k to 128k; the best value should be
              determined by experimenting on a given array, and a lot depends on the SCSI and disk configuration.

       device devpath
              Adds the device devpath to the list of devices which comprise the raid system. Note that this command must be
              followed by one of raid-disk, spare-disk, or parity-disk. Also note that it's possible to recursively define
              RAID arrays, i.e. to set up a RAID5 array of RAID5 arrays (thus achieving two-disk failure protection, at the
              price of more disk space spent on RAID5 checksum blocks).

       raid-disk index
              The most recently defined device is inserted at position index in the raid array.

       spare-disk index
              The most recently defined device is inserted at position index in the spare disk array.

       parity-disk index
              The most recently defined device is moved to the end of the raid array, which forces it to be used for
              parity.

       failed-disk index
              The most recently defined device is inserted at position index in the raid array as a failed device. This
              allows you to create raid 1/4/5 devices in degraded mode - useful for installation. Don't use the smallest
              device in an array for this; put it after the raid-disk definitions!

NOTES
       The raidtools are derived from the md-tools and raidtools packages, which were originally written by Marc Zyngier,
       Miguel de Icaza, Gadi Oxman, Bradley Ward Allen, and Ingo Molnar.

SEE ALSO
       raidstart(8), raid0run(8), mkraid(8), raidstop(8)

                                                                                                              raidtab(5)
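
Since the discussion above concerns mirroring, here is a minimal raidtab sketch for a two-disk RAID1 (mirror) array, assembled from the directives documented above; the md device and partition names are hypothetical:

              raiddev /dev/md2
                      raid-level                1
                      nr-raid-disks             2
                      persistent-superblock     1
                      chunk-size                4
                      device                    /dev/sde1
                      raid-disk                 0
                      device                    /dev/sdf1
                      raid-disk                 1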