08-01-2014
Quote:
Originally Posted by
ibmtech
I am wondering why in the world you are using VxVM with HACMP? Veritas has its own clustering software.
Since it's not using the native logical volume manager, I doubt it will work smoothly.
+1 from me. First, I cannot understand why one would want to replace the arguably best LVM in the industry on its native system with something else. Second, if you really have to, why don't you use the complete Veritas suite, Veritas Volume Manager and Veritas Cluster Server, instead of coupling the volume manager of one company with the HA software of another?
This is in itself a recipe for disaster, even if it miraculously works. Whenever something goes wrong, you will have IBM support pointing to Veritas' volume manager and Veritas support pointing to IBM's HACMP.
Quote:
You may need to install some additional software packages for HACMP to use VxVM as its local LVM.
After some asking around: nobody has ever seen it, but "in theory" this should work and, yes, there is some special software needed. My suggestion is to remove VxVM and use the AIX native LVM, though.
I hope this helps.
bakunin
vxiod(1M) vxiod(1M)
NAME
vxiod - start, stop, and report on Veritas Volume Manager I/O threads
SYNOPSIS
vxiod
vxiod [-f] [set count]
DESCRIPTION
The vxiod utility starts, stops, or reports on Veritas Volume Manager (VxVM) I/O kernel threads. An I/O thread provides a process context
for performing I/O in VxVM.
When the vxio module is loaded, 16 I/O threads are created, plus 2 threads per additional CPU for a system with more than 8 CPUs, up to a
maximum of 64 threads. At least one I/O thread must be running while the vxio module is loaded, and the number of I/O threads cannot be
forced to zero.
When invoked with no arguments, vxiod prints the current number of I/O threads to the standard output.
The number of threads required for handling I/O requests depends on system load and usage. If volume recovery seems to proceed
more slowly at times, it may be possible to improve its performance by increasing the number of I/O threads, up to the maximum of 64.
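The default thread-count rule above (16 threads for systems with up to 8 CPUs, plus 2 per additional CPU, capped at 64) can be sketched as a small shell function. This is an illustration of the arithmetic only, not part of the vxiod tool; the function name is hypothetical.

```shell
#!/bin/sh
# Sketch of the default VxVM I/O thread count described above:
# 16 threads for <= 8 CPUs, plus 2 per additional CPU, capped at 64.
default_iod_threads() {
    cpus=$1
    if [ "$cpus" -le 8 ]; then
        threads=16
    else
        threads=$((16 + 2 * (cpus - 8)))
        if [ "$threads" -gt 64 ]; then
            threads=64
        fi
    fi
    echo "$threads"
}

default_iod_threads 4    # 16
default_iod_threads 12   # 16 + 2*4 = 24
default_iod_threads 40   # 16 + 2*32 = 80, capped to 64
```

Note that the cap is reached at 32 CPUs (16 + 2*24 = 64); beyond that, extra CPUs add no further threads by default.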
KEYWORDS
set When invoked with the set keyword, vxiod creates the number of I/O threads specified by count. If more volume I/O threads exist
than are specified by count, the excess threads terminate. If more than the maximum number (64) is specified, the requested
number is silently truncated to that maximum.
OPTIONS
-f This option has no effect from release 5.0 onward. The number of I/O threads cannot be reduced to zero.
EXIT CODES
The vxiod utility prints a diagnostic on the standard error and exits if an error is encountered. If an error occurs within an I/O
thread, the state of that I/O request is not reflected in the exit status for vxiod. Otherwise, vxiod returns a non-zero exit status on
error.
Usage errors result in an exit status of 1 and a usage message. If the requested number of threads cannot be created, the exit status is
2, and the number of threads that were successfully started is reported. If any other error occurs, the exit status is 3.
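Since vxiod distinguishes its failure modes by exit status, a calling script can branch on `$?`. A minimal sketch, assuming the exit codes documented above; vxiod itself is not invoked here, and the helper function name is hypothetical:

```shell
#!/bin/sh
# Hypothetical mapping of the documented vxiod exit codes to messages.
# In real use: vxiod set 32; describe_vxiod_status $?
describe_vxiod_status() {
    case "$1" in
        0) echo "ok: I/O thread count adjusted" ;;
        1) echo "usage error" ;;
        2) echo "could not create all requested threads" ;;
        3) echo "unspecified error" ;;
        *) echo "unexpected exit status: $1" ;;
    esac
}

describe_vxiod_status 2   # could not create all requested threads
```

On exit status 2, the man page notes that vxiod also reports how many threads were successfully started, so a wrapper could fall back to the partial count rather than abort.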
FILES
/dev/vx/iod The device used to report on and start volume I/O threads.
NOTES
Veritas Volume Manager I/O threads cannot be killed directly through the use of signals.
Depending on the operating system, VxVM I/O threads may not appear in the list of processes that is output by the ps command. The number
of I/O threads that is currently running can be determined by running vxiod.
SEE ALSO
fork(2), ps(1), vxconfigd(1M), vxdctl(1M), vxintro(1M), vxio(7), vxiod(7)
VxVM 5.0.31.1 24 Mar 2008 vxiod(1M)