[Q] zpool mirror; read from preferred vdev(s)
Post 302796385 by pressy, 04-19-2013 12:07 PM, in Operating Systems / Solaris

Hello,

With a ZFS mirror, is there a way to set something like a primary storage / preferred vdev / read-first device, the way DiskSuite allowed with "metaparam -r [roundrobin | geometric | first]", or VxVM with "vxvol rdpol [prefer | round | select | siteread]" to define a preferred plex? Or will ZFS always read round-robin?

In our case we want to mirror a brand-new fast storage array with hot-data tiering (SSD caching) against an older, slower one. With round-robin reads, should we assume that every second read will be slower?

Also, will the write acknowledgement only be committed after both writes complete, so that the faster storage gives us no write advantage?

Thanks in advance

- pressy
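Regarding the write question: a mirrored write is acknowledged only once every side of the mirror has it, so write latency follows the slowest device, while each read is served by a single side. ZFS offers no user-settable read policy comparable to metaparam -r or vxvol rdpol; its read scheduling is internal. A toy model of the strict round-robin case described above (all latency numbers are invented for illustration):

```python
# Toy latency model for a two-way mirror: writes must land on both
# sides before the ack; each read is served by one side only.
# FAST_MS / SLOW_MS are made-up figures, not measurements.
FAST_MS = 1.0   # hypothetical fast-array latency (ms)
SLOW_MS = 8.0   # hypothetical slow-array latency (ms)

def mirror_write_latency(fast=FAST_MS, slow=SLOW_MS):
    # the ack is committed only after BOTH copies are on stable storage,
    # so the slower side dominates
    return max(fast, slow)

def mirror_read_latency_roundrobin(n_reads, fast=FAST_MS, slow=SLOW_MS):
    # if reads alternate strictly between the sides, the average
    # latency is the mean of the two devices
    total = sum(fast if i % 2 == 0 else slow for i in range(n_reads))
    return total / n_reads

print(mirror_write_latency())               # 8.0 -> slow side dominates writes
print(mirror_read_latency_roundrobin(100))  # 4.5 -> every second read is slow
```

So under strict alternation the average read sits halfway between the two devices, and writes see no benefit at all from the faster array.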
GRAID3(8)						    BSD System Manager's Manual 						 GRAID3(8)

NAME
     graid3 -- control utility for RAID3 devices

SYNOPSIS
     graid3 label [-Fhnrvw] [-s blocksize] name prov prov prov ...
     graid3 clear [-v] prov ...
     graid3 configure [-adfFhnrRvwW] name
     graid3 rebuild [-v] name prov
     graid3 insert [-hv] [-n number] name prov
     graid3 remove [-v] -n number name
     graid3 stop [-fv] name ...
     graid3 list
     graid3 status
     graid3 load
     graid3 unload

DESCRIPTION
     The graid3 utility is used for RAID3 array configuration.  After a
     device is created, all components are detected and configured
     automatically.  All operations such as failure detection, stale
     component detection, rebuild of stale components, etc. are also done
     automatically.  The graid3 utility uses on-disk metadata (the
     provider's last sector) to store all needed information.

     The first argument to graid3 indicates an action to be performed:

     label      Create a RAID3 device.  The last given component will
                contain parity data, whilst the others will all contain
                regular data.  The number of components must be equal to 3,
                5, 9, 17, etc. (2^n + 1).  Additional options include:

                -F      Do not synchronize after a power failure or system
                        crash.  Assumes the device is in a consistent state.

                -h      Hardcode providers' names in metadata.

                -n      Turn off autosynchronization of stale components.

                -r      Use the parity component for reading in round-robin
                        fashion.  Without this option the parity component
                        is not used at all for reading operations when the
                        device is in a complete state.  With this option
                        specified, random I/O read operations are even 40%
                        faster, but sequential reads are slower.  One cannot
                        use this option if the -w option is also specified.

                -s      Manually specify the array block size.  The block
                        size will be set equal to the least common multiple
                        of all components' sector sizes and the specified
                        value.  Note that the array sector size is
                        calculated as a multiple of the block size and the
                        number of regular data components.  Big values may
                        decrease performance and compatibility, as all I/O
                        requests have to be a multiple of the sector size.

                -w      Use the verify reading feature.  When reading from a
                        device in a complete state, also read data from the
                        parity component and verify the data by comparing
                        XORed regular data with parity data.  If
                        verification fails, an EIO error is returned and the
                        value of the kern.geom.raid3.stat.parity_mismatch
                        sysctl is increased.  One cannot use this option if
                        the -r option is also specified.
     clear      Clear metadata on the given providers.

     configure  Configure the given device.  Additional options include:

                -a      Turn on autosynchronization of stale components.
                -d      Do not hardcode providers' names in metadata.
                -f      Synchronize device after a power failure or system
                        crash.
                -F      Do not synchronize after a power failure or system
                        crash.  Assumes the device is in a consistent state.
                -h      Hardcode providers' names in metadata.
                -n      Turn off autosynchronization of stale components.
                -r      Turn on round-robin reading.
                -R      Turn off round-robin reading.
                -w      Turn on verify reading.
                -W      Turn off verify reading.

     rebuild    Rebuild the given component forcibly.  If
                autosynchronization was not turned off for the given device,
                this command should be unnecessary.

     insert     Add the given component to the existing array, if one of the
                components was removed previously with the remove command or
                if one component is missing and will not be connected again.
                If no number is given, the new component will be added in
                place of the first missed component.  Additional options
                include:

                -h      Hardcode providers' names in metadata.

     remove     Remove the given component from the given array and clear
                metadata on it.

     stop       Stop the given arrays.  Additional options include:

                -f      Stop the given array even if it is opened.

     list       See geom(8).

     status     See geom(8).

     load       See geom(8).

     unload     See geom(8).

     Additional options include:

     -v      Be more verbose.

EXIT STATUS
     Exit status is 0 on success, and 1 if the command fails.

EXAMPLES
     Use 3 disks to set up a RAID3 array (with the round-robin reading
     feature).  Create a file system, mount it, then unmount it and stop the
     device:

           graid3 label -v -r data da0 da1 da2
           newfs /dev/raid3/data
           mount /dev/raid3/data /mnt
           ...
           umount /mnt
           graid3 stop data
           graid3 unload

     Create a RAID3 array, but do not use the automatic synchronization
     feature.  Rebuild the parity component:

           graid3 label -n data da0 da1 da2
           graid3 rebuild data da2

     Replace one data disk with a brand new one:

           graid3 remove -n 0 data
           graid3 insert -n 0 data da5

SEE ALSO
     geom(4), geom(8), gvinum(8), mount(8), newfs(8), umount(8)

HISTORY
     The graid3 utility appeared in FreeBSD 5.3.

AUTHORS
     Pawel Jakub Dawidek <pjd@FreeBSD.org>

BUGS
     There should be a section with an implementation description.

     Documentation for sysctls kern.geom.raid3.* is missing.

BSD                              January 15, 2012                          BSD
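Note that, unlike ZFS in the thread above, graid3's round-robin read policy can be toggled on a live array with the configure verb. A short sketch, reusing the array name data from the EXAMPLES section (these commands require a FreeBSD host with an existing graid3 array):

```shell
# Toggle round-robin reading on an existing graid3 array named "data"
# (the name comes from the man page's EXAMPLES section).
graid3 configure -r data    # parity component joins reads, round-robin
graid3 status               # inspect the current state of all arrays
graid3 configure -R data    # revert: parity used only for rebuild/verify
```

The -r/-R pair maps onto exactly the kind of switchable read policy the original question was asking ZFS for.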