Operating Systems > AIX: Rebooting redundant VIOs and mirroring of PVs they serve to client LPARs
Post 302968200 by MichaelFelt, Saturday 5 March 2016, 04:52 AM
What was not clear to me is where the storage was physically located.

It sounds like it was using local storage (i.e., physical disks in the server, not from a SAN) and the LPAR (aka virtual machine) was using LVM mirroring.

This was the preferred (read: almost the only) solution in 2005, before SAN storage was common. HOWEVER, if you have SAN-backed storage, the common approach when you also have local storage (i.e., disks in the CEC, aka compute node) is to use the local disks for rootvg, DVD repositories, secure logs, etc., and to use SAN storage with MPIO for hosting VM (or client LPAR) storage.
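
For illustration, a minimal sketch of that layout for one SAN LUN; the device names (hdisk4, vhost0, vtscsi0) are hypothetical, the commands run in the VIOS restricted (padmin) shell, and the same mapping would be repeated on the second VIOS to give the client its redundant path:

    $ lsdev -type disk                                   # identify the SAN-backed hdisk on the VIOS
    $ chdev -dev hdisk4 -attr reserve_policy=no_reserve  # allow both VIOS to open the same LUN
    $ mkvdev -vdev hdisk4 -vadapter vhost0 -dev vtscsi0  # map the LUN to the client's vSCSI adapter
    $ lsmap -vadapter vhost0                             # verify the mapping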

Re: active/passive LVM mirroring: the consistency check was there to help "guess" which mirror copy was written last when recovering from a crash. In your case, while the VIOS was down, one path was down, so it is clear which disk is stale and needs to be synced.

Over time the disks will sync automatically, but you can speed up the process by running syncvg (lsvg vgName will tell you whether there are stale disks and, if so, how many stale PPs).
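
For example, a quick sketch of checking and forcing the resync (the volume group name datavg is a placeholder):

    $ lsvg datavg       # the "STALE PVs:" and "STALE PPs:" fields show how far out of sync it is
    $ syncvg -v datavg  # resynchronize all stale partitions in the volume group
    $ lsvg datavg       # "STALE PPs:" should drop back to 0 once the sync completes

Note that varyonvg also resynchronizes stale partitions by default when a volume group is brought back online (that behavior can be disabled with -n).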
 

10 More Discussions You Might Find Interesting

1. IP Networking

to serve or be served??

I have two machines on my network - one OS X Mac and one Linux box. The Mac is my main workhorse, and the Linux box does occasional chores and webserving. Currently the Mac shares (via NFS) files with the Linux box. Would it be less demanding on the Mac if I made it a client, and moved my files... (2 Replies)
Discussion started by: mistafeesh

2. AIX

rebooting vio client

Hi, I would like to reboot a VIO client, but I am not able to access the client (I cannot get a PuTTY session to it). I can get a PuTTY session on the VIO server; is there any command I can use from the VIO server to reboot the VIO client? (3 Replies)
Discussion started by: manoj.solaris

3. AIX

DUAL VIOS & Client LPAR hangs at 25b3

I have a dual VIO (IBM Virtual I/O) setup on a p570: two VIO servers (VIOS) and many LPAR clients, running VIOS (latest version + service pack + applied fix) and AIX 6.1 ML2. When both VIOS are running and I turn on a client LPAR, the LPAR hangs at LED 25b3 for more than 1 hour, then it... (2 Replies)
Discussion started by: filosophizer

4. HP-UX

How to mirror LVs and unmirror them after changing PVs...

Greetings. I'm running HP-UX B.11.11 and I'm not sure how to carry out this request: "mirror the current 5 LVs on vgSPAN to the new LUNs assigned to the VG, then unmirror the LVs and finally return the 12 LUNs to SAN storage". The existing LVs were extended to accommodate a user request to extend 2 FS on... (3 Replies)
Discussion started by: hedkandi

5. AIX

VIOS IP address - separate vlan for vios servers ?

Hello. Let's say for simplicity that I do not use any VLAN config inside my server: one LPAR group uses HEA physical port 1, another group uses HEA physical port 2. Physical port 1 is configured as VLAN 1 on the external switch, physical port 2 as VLAN 2. What is the common practice - should I isolate my vios... (0 Replies)
Discussion started by: vilius

6. AIX

Shared Disk in VIOS between two LPARs ?

Is there any way to create a shared virtual disk between two LPARs, the way you can do it with fibre-attached SAN storage on two physical servers? Trying to simulate HACMP between two LPARs. (1 Reply)
Discussion started by: filosophizer

7. HP-UX

PVS command in HP-UX

Dear Engineer, is there any command in HP-UX that works similar to the pvs command in Linux? With best regards, Md. Abdullah-Al Kauser (4 Replies)
Discussion started by: makauser

8. UNIX for Advanced & Expert Users

Unable to install client AIX LPAR to vscsi hdisk provided from VIOS

Hi everybody, I have a Power5 server with 4 internal hdisks of 70 GB each. The VIOS was installed via the Virtual I/O Server Image Repository on the HMC. HMC release: 7.7.0. The VIOS rootvg was installed on 2 disks (these disks were merged into one storage pool during the VIOS install process), and the 2 other hdisks... (2 Replies)
Discussion started by: Ravil Khalilov

9. AIX

Chef client on VIOs? How do you manage your VIO configs?

I know the VIOS is generally to be treated as an appliance and one should never drop down to oem_setup_env. In reality, however, oem_setup_env is a very useful tool for getting the job done. That leads me to the question of using the Chef client on a VIO. Currently there is a big push to manage all our *nix... (4 Replies)
Discussion started by: RecoveryOne

10. AIX

Need to replace a broken PV in a VIO VG used for client LPARs (and it won't release the old one)

I have a broken PV in a VIO VG that's used to support client LPARs using LVs. On the client LPAR, I reduced all PVs from the relevant client VG and thus deleted it. I.e. there is no client LPAR using the VIO VG. Yet when I try to reducevg the VIO VG, it complains that the LV hosted on the PV is... (2 Replies)
Discussion started by: maraixadm