Operating Systems / AIX
Rebooting redundant VIOs and mirroring of PVs they serve to client LPARs
Post 302968033 by maraixadm, Thursday, March 3rd, 2016, 11:08 AM

Need to confirm:
We have a system with two VIOs, each serving a partition on a local disk to a client LPAR. The client LPAR has both disks in a VG that is mirrored (exact mapping), so each disk holds a copy of the client LV that the client VG supports. This is the setup bequeathed to us by the vendors who originally set up the system, and which we've replicated to further systems.
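For reference, a setup like that typically looks as follows from the client LPAR (the hdisk, VG, and LV names and the sizes below are made up for illustration; output abridged):

```shell
# On the client LPAR: one vscsi hdisk from each VIOS, both in the mirrored VG.
lspv
# hdisk0  00c8d2f1a1b2c3d4  datavg  active    <- vscsi path via VIOS 1
# hdisk1  00c8d2f1e5f6a7b8  datavg  active    <- vscsi path via VIOS 2

lsvg -l datavg
# LV NAME   TYPE   LPs   PPs   PVs  LV STATE      MOUNT POINT
# datalv    jfs2   100   200   2    open/syncd    /data
# PPs = 2x LPs and PVs = 2 confirms the LV is mirrored across both disks.
```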

I'm doing an upgrade cycle on the VIOs. If I reboot one of them after installing the upgrade, I expect:
  1. The other VIO to continue serving its instance of the mirrored LV; I don't see a reason to anticipate trouble with this.
  2. When the rebooted VIO comes back, the client LVM to reconcile any incongruities between the two copies of the mirrored LV appropriately.
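Before rebooting the first VIOS, a pre-check along these lines can confirm both mirror copies are healthy (standard AIX LVM commands; the VG and LV names are assumptions):

```shell
# On the client LPAR, before rebooting either VIOS:
lsvg datavg | grep -i stale    # STALE PPs should be 0 before you start
lsvg -p datavg                 # both hdisks should show PV STATE 'active'
lslv datalv                    # check MIRROR WRITE CONSISTENCY and LV STATE open/syncd
```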

#2 is my concern. What I read about mirror write consistency (MWC) is that active MWC records in-flight writes in the MWC log and repairs from that log when the VG is varied back on, while passive MWC merely records that an LV has been opened and forces a full syncvg after a failure.
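On AIX the MWC policy is set per LV and can be inspected or changed with lslv/chlv; a sketch (the LV name is hypothetical):

```shell
# Show the current MWC policy for the LV
lslv datalv                    # look for the MIRROR WRITE CONSISTENCY field

# Change the policy if needed (y = active, p = passive, n = off)
chlv -w y datalv               # active MWC: log outstanding writes, targeted recovery
chlv -w p datalv               # passive MWC: mark LV open, full resync on recovery
```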

I'm trying to dig up how that applies when I shut down one VIO, thus making one of the client PVs supporting the mirrored client LV disappear for a while, then bring it back and have it reappear in the mirrored VG. MWC is set to on/active for the mirrored LV.
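For what it's worth, the usual pattern after the missing PV returns is that writes made while one copy was absent leave stale PPs on the returned disk, which varyonvg or syncvg resynchronizes. A sketch with standard AIX LVM commands (VG name assumed):

```shell
# After the rebooted VIOS is back and the vscsi disk has reappeared:
lsvg -p datavg                 # the returned hdisk may show 'missing' at first
varyonvg datavg                # re-activates the missing PV and kicks off a resync
lsvg datavg | grep -i stale    # watch STALE PPs count back down to 0

# Or resynchronize explicitly:
syncvg -v datavg               # resync all stale partitions in the VG
```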

Should I not worry, because this is the correct configuration and exactly what it was designed to support?

Conversely, should I save myself the concern and drive the risk to zero by shutting down the client LPAR? We have a maintenance window to work with; on the other hand, this seems to be exactly the kind of scenario that VIO redundancy was meant to address, so I shouldn't have to shut down client LPARs.

TIA...

---------- Post updated 03-03-16 at 11:08 ---------- Previous update was 03-02-16 at 18:09 ----------

Well, it worked fine - no bobbles in the filesystem.

Still interested in any views you may have on the above, tx.
 
