04-19-2013
[Q] zpool mirror; read from preferred vdev(s)
Hello,
With a zpool mirror, is there a way to set something like a primary storage / preferred vdev / read-first device, the way Solstice DiskSuite did with "metaparam -r [roundrobin | geometric | first]" or VxVM does with "vxvol rdpol [prefer | round | select | siteread]" to define a preferred plex? Or will ZFS always read round-robin?
In our case we want to mirror a brand-new fast storage array with hot-data tiering (SSD caching) against an older, slower one. With round-robin, can we assume that every second read will be slower?
And since the write acknowledgement is only committed after both sides have written, will we get no write advantage from the faster storage?
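A rough way to put numbers on that intuition is a back-of-the-envelope latency model (a sketch only, not ZFS internals; all latency figures below are made-up assumptions, not measurements):

```python
# Back-of-the-envelope model of a two-way mirror with sides of
# different speed. All latency numbers are hypothetical assumptions.

FAST_READ_MS = 0.2   # assumed read latency of the new SSD-cached array
SLOW_READ_MS = 5.0   # assumed read latency of the old array

def round_robin_read_ms():
    """Average read latency if every second read goes to the slow side."""
    return (FAST_READ_MS + SLOW_READ_MS) / 2

def preferred_read_ms():
    """Average read latency if all reads are served by the fast side."""
    return FAST_READ_MS

def mirror_write_ms(fast_ms=0.3, slow_ms=6.0):
    """A mirror write is acknowledged only after BOTH sides commit,
    so write latency is dominated by the slower device."""
    return max(fast_ms, slow_ms)

if __name__ == "__main__":
    print(f"round-robin read : {round_robin_read_ms():.2f} ms")
    print(f"preferred read   : {preferred_read_ms():.2f} ms")
    print(f"mirrored write   : {mirror_write_ms():.2f} ms")
```

Under these assumed figures, round-robin averages 2.6 ms per read while a read-first policy would stay at 0.2 ms, and writes sit at the slow side's 6 ms either way, which is exactly the concern raised in the question.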
Thanks in advance
- pressy
vx_emerg_start(1M) vx_emerg_start(1M)
NAME
vx_emerg_start - start Veritas Volume Manager from recovery media
SYNOPSIS
vx_emerg_start [-m] [-r root_daname] hostname
DESCRIPTION
The vx_emerg_start utility can be used to start Veritas Volume Manager (VxVM) when a system is booted from alternate media, or when a system has been booted into Maintenance Mode Boot (MMB) mode. This allows a rootable VxVM configuration to be repaired in the event of a catastrophic failure.
vx_emerg_start verifies that the /etc/vx/volboot file exists, and checks the command-line arguments against the contents of this file.
OPTIONS
-m Mounts the root file system contained on the rootvol volume after VxVM has been started. Prior to being mounted, the rootvol volume
is started and fsck is run on the root file system.
-r root_daname
Specifies the disk access name of one of the members of the root disk group that is to be imported. This option can be used to specify the appropriate root disk group when multiple generations of the same root disk group exist on the system under repair. If this option is not specified, the desired root disk group may not be imported if multiple disk groups with the same name exist on the system, and if one of these disk groups has a more recent timestamp.
ARGUMENTS
hostname
Specifies the system name (nodename) of the host system being repaired. This name is used to allow the desired root disk group to be
imported. It must match the name of the system being repaired, as it is unlikely to be recorded on the recovery media from which you
booted the system.
NOTES
HP-UX Maintenance Mode Boot (MMB) is intended for recovery from catastrophic failures that have prevented the target machine from booting.
If a VxVM root volume is mirrored, only one mirror is active when the system is in MMB mode. Any writes that are made to the root file system in this mode can corrupt this file system when both mirrors are subsequently configured.
The vx_emerg_start script allows VxVM to be started while a system is in MMB mode, and marks the non-boot mirror plexes as stale. This prevents corruption of the root volume or file system by forcing a subsequent recovery from the boot mirror to the non-boot mirrors to take place.
USAGE
After VxVM has been started, various recovery options can be performed depending on the nature of the problem. It is recommended that you
use the vxprint command to determine the state of the configuration.
One common problem is when all the plexes of the root disk are stale as shown in the following sample output from vxprint:
v rootvol root DISABLED 393216 - ACTIVE -
pl rootvol-01 rootvol DISABLED 393216 - STALE -
sd rootdisk01-02 rootvol-01 ENABLED 393216 0 - -
pl rootvol-02 rootvol DISABLED 393216 - STALE -
sd rootdisk02-02 rootvol-02 ENABLED 393216 0 - -
In this case, the volume can usually be repaired by using the vxvol command as shown here:
vxvol -g 4.1ROOT -f start rootvol
If the volume is mirrored, it is put in read-write-back recovery mode. As the command is run in the foreground, it does not exit until the recovery is complete. It is then recommended that you run fsck on the root file system, and mount it, before attempting to reboot the system:
fsck -F vxfs -o full /dev/vx/rdsk/4.1ROOT/rootvol
mkdir /tmp_mnt
mount -F vxfs /dev/vx/dsk/4.1ROOT/rootvol /tmp_mnt
SEE ALSO
fsck(1M), mkdir(1M), mount(1M), vxintro(1M), vxprint(1M), vxvol(1M)
VxVM 5.0.31.1 24 Mar 2008 vx_emerg_start(1M)