Full Discussion: VCS - 3 node - IP change
Operating Systems > Solaris
Post 302921049 by rbatte1 on Tuesday 14th of October 2014, 12:44 PM
The address that you publish for applications/people to use should be part of your cluster definition (and a DNS entry, of course), so that part should be straightforward. The problem comes with how the nodes know each other. Those are the IP addresses you put on the cards as part of the OS, and what needs changing depends on how the cluster was created. They may include both the routable addresses that people could use if they knew about them and any non-routable ones more usually used for backups or iSCSI access (if you use that), because your cluster nodes may be set up to see each other across all of the various paths.
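Just to illustrate what I mean by the published address being part of the cluster definition, it normally shows up as an IP resource inside a service group in main.cf. This is a made-up fragment only (group, resource, NIC and address are all placeholders), so check it against your own configuration:

    group app_sg (
        SystemList = { node1 = 0, node2 = 1, node3 = 2 }
        AutoStartList = { node1 }
        )

        IP app_ip (
            Device = e1000g0
            Address = "192.0.2.50"
            NetMask = "255.255.255.0"
            )

        NIC app_nic (
            Device = e1000g0
            )

        app_ip requires app_nic

Changing that address is the easy part; it is the node-to-node addresses underneath that need more care.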

In the extreme, you may need to do something like this for all nodes in the cluster in parallel:-
  • Stop cluster services (and prevent start on boot)
  • Edit cluster configurations for all heartbeats and node definitions
  • Set new static IP addresses on cards
  • Boot
  • Start cluster services (and enable start on boot if required)
  • Test
It may be better to edit the cluster configurations after the boot, and that is where your support's advice will be best; a very rough sketch of the per-node commands follows below.
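Purely as an illustration of those steps (not something to run as-is), assuming VCS on Solaris 10 with the usual LLT/GAB files, the work on each node might look roughly like this. Interface names, resource names and addresses are placeholders:

    # 1. Stop VCS on this node (and temporarily disable its start-at-boot
    #    script per your platform's convention)
    hastop -local

    # 2. Review the heartbeat and node definitions before changing anything
    cat /etc/llttab      # LLT heartbeat links (usually private NICs)
    cat /etc/llthosts    # node ID to node name mapping
    cat /etc/gabtab      # GAB seeding

    # 3. Change the OS-level static addresses (Solaris 10 style), then reboot
    vi /etc/inet/hosts /etc/hostname.e1000g0 /etc/netmasks
    init 6

    # 4. Once all nodes are back, start VCS and update any IP resources
    hastart
    haconf -makerw
    hares -modify app_ip Address 192.0.2.50    # hypothetical resource/address
    haconf -dump -makero

Whether the VCS configuration is edited before or after the reboot is exactly the sort of thing to confirm with your support first.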

Can you list out the cluster configurations? They will be quite large, so don't post them here, but your support will almost certainly want them. If you were on AIX with IBM HA I'd have more advice to offer. Sorry about that.
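If it helps when gathering them for support, these are the usual places to look on a VCS cluster (a sketch only; the paths are the standard ones but check your own install):

    hastatus -sum                              # quick view of systems and groups
    haclus -display                            # cluster-level attributes
    hasys -list                                # nodes known to VCS
    cat /etc/VRTSvcs/conf/config/main.cf       # service groups and resources
    cat /etc/llttab /etc/llthosts /etc/gabtab  # heartbeat / membership settings
    lltstat -nvv                               # LLT link status
    gabconfig -a                               # GAB port membership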

I hope that this is useful though,
Robin
 

vxclustadm(1M)

NAME
       vxclustadm - start, stop, and reconfigure a cluster

SYNOPSIS
       vxclustadm abortnode
       vxclustadm nidmap
       vxclustadm [-v] nodestate
       vxclustadm -m {hpsg|vcs} reinit
       vxclustadm -m hpsg -C cluster_name -t hpsg [-j join_timeout] startnode
       vxclustadm -m vcs -t gab startnode
       vxclustadm -m vcs -C cluster_name -t gab [-j join_timeout] startnode
       vxclustadm stopnode
DESCRIPTION
       The vxclustadm utility activates and deactivates cluster functionality on a node in a cluster.

       Caution: Use of the clustering functionality of VxVM without a cluster monitor is not supported. Cluster reconfiguration problems may occur if there is no cluster monitor or if GAB is used as the cluster monitor. Ensure that you completely understand the functionality of this command before using it.

KEYWORDS
       abortnode
              Stops clustering activity on a node and abandons all uncompleted I/O on shared volumes. This command is for emergency shutdown. Note: This operation is not allowed in the HP Serviceguard environment.

       nidmap
              Prints a table showing the mapping between node IDs in VxVM's cluster-support subsystem and node IDs in the cluster monitor.

       nodestate
              Displays the state of a node in the cluster and the reason for the last abort of the node on the standard output. Valid states are:

              cluster aborting
                     The node is being aborted from the cluster.

              cluster member
                     The node is a member of the cluster. All shared volumes in the cluster are accessible.

              joining
                     The node is in the process of joining a cluster. It has been initialized but is not yet completely in the cluster. The node goes into this state after vxclustadm is executed with the startnode keyword.

              out of cluster
                     The node is not joined to the cluster. Refer to the Veritas Volume Manager Administrator's Guide for more information about reasons why a node may leave a cluster.

              For debugging purposes the -v option can be specified to display the node ID, master ID, neighbor ID, current state, and reason for a node leaving the cluster (if appropriate).

       reinit
              The reinit keyword allows nodes to be added to or removed from a cluster dynamically without stopping the cluster. The command causes vxclustadm to re-read the cluster configuration file, and implement any required changes to the membership of the cluster. The -m vcs option specifies the VCS cluster monitor, which implies the existence of the cluster configuration file, /etc/VRTSvcs/conf/config/main.cf. The -m hpsg option specifies the HP Serviceguard environment, which implies the existence of the cluster configuration file, /etc/llthosts.

       startnode
              The startnode keyword initiates cluster functionality on a node using the information supplied in the cluster configuration file. This is the first command that must be issued on a node to bring it into the cluster. The argument to the -m option specifies the cluster monitor, which implies the existence of a cluster configuration file:

              hpsg   The cluster is running in the HP Serviceguard environment, and the cluster configuration file is /etc/llthosts. Caution: Use the HP Serviceguard administration commands to update the /etc/llthosts file. Do not edit this file by hand. The argument to the -C option specifies the name of the cluster. The -j option is used to specify the cluster reconfiguration timeout in seconds. See the FILES section for more information about this timeout. Note: The -C and -j options are only applicable to the HP Serviceguard environment.

              vcs    The cluster is running in the VCS environment. The cluster configuration file is /etc/VRTSvcs/conf/config/main.cf. Caution: Use VCS commands to edit the main.cf file. Do not edit this file by hand.

              startnode passes the information in the cluster configuration file to the VxVM kernel. In response to this command, the kernel and the VxVM configuration daemon, vxconfigd, perform the initialization.

              The argument to the -t option specifies the protocol to be used for messaging:

              gab    Use GAB as the transport agent for messaging in addition to using GAB as a cluster monitor. If you try to use GAB as a transport agent with a cluster monitor other than GAB (or outside the VCS or HP Serviceguard environment), the kernel issues a warning message and changes the transport agent to UDP. When the cluster is running in the VCS or HP Serviceguard environment, the clustering functionality of VxVM should use GAB as the transport agent for messaging.

       stopnode
              Stops cluster functionality on a node, and waits for all outstanding I/O to complete and for all applications to close shared volumes or devices.
EXIT CODES
       vxclustadm returns the following exit values:

       2      Invalid state.

       101    Node is not in cluster.

       102    Node is joining the cluster, or is involved in reconfiguration.

       103    Node is a cluster member.

       104    Node is aborting from cluster.
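       (Illustrative only, not part of the original manual page: assuming the values above are returned as the command's shell exit status, a small wrapper script might branch on them like this.)

              vxclustadm nodestate
              rc=$?
              case $rc in
                  103) echo "node is a cluster member" ;;
                  102) echo "node is joining or reconfiguring" ;;
                  101) echo "node is not in the cluster" ;;
                  104) echo "node is aborting from the cluster" ;;
                  2)   echo "invalid state" ;;
                  *)   echo "unexpected exit value: $rc" ;;
              esac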
FILES
       For a cluster that is operating without a cluster monitor, or that is using GAB as the cluster monitor outside the VCS or HP Serviceguard environment, and which is using UDP as its transport agent for messaging, the cluster configuration file, /etc/vx/cvmtab, contains the following fields:

              clustername cluster_name
              port vxconfigd port_number
              port vxkmsgd port_number
              node node_ID name name_on_local_net
              timeout timeout_value
              ...

       The recommended port numbers for the vxconfigd and vxkmsgd daemons are 4500 and 4501, but any available port numbers greater than 1024 are also acceptable. name_on_local_net is the node's IP address or resolvable host name on the cluster's private network. timeout_value is the timeout value in seconds. The clustering functionality of VxVM uses this value during cluster reconfiguration. The appropriate value to use depends on the number of nodes in the cluster and on the size of the shared disk group configuration. In most cases the value of 200 seconds is sufficient, but this may need to be increased for larger configurations. Comment lines in the file start with a #.

       If GAB is being used as the transport agent for messaging, fields relating to port numbers and local network names are not required:

              clustername cluster_name
              node node_ID name
              timeout timeout_value
              ...

       For a cluster running in the VCS environment, VxVM obtains information about the cluster from the VCS cluster configuration file (/etc/VRTSvcs/conf/config/main.cf). Cluster-specific information may be appended to this file by running the vxcvmconfig command. For more information refer to the Veritas Cluster File System Installation and Configuration Guide.

       For a cluster running in the HP Serviceguard environment, VxVM obtains information about the cluster from the /etc/llthosts file. Cluster-specific information may be appended to this file by running the HP Serviceguard administrative command, cfscluster config.
EXAMPLES
       A cluster consisting of four nodes, named node0, node1, node2 and node3, operates without a cluster monitor, and has the following cvmtab file when UDP is used as the transport agent for messaging:

              # ClusterName
              clustername CVM1
              # Daemon port numbers
              port vxconfigd 4500
              port vxkmsgd 4501
              # NodeID Nodename Localname
              node 0 node0 node0_p
              node 1 node1 node1_p
              node 2 node2 node2_p
              node 3 node3 node3_p
              # Timeout value
              timeout 200

       If GAB is used as the transport agent for messaging, the cvmtab file only needs to contain the following information:

              # ClusterName
              clustername CVM1
              # NodeID Nodename
              node 0 node0
              node 1 node1
              node 2 node2
              node 3 node3
              # Timeout value
              timeout 200

       If node1 is the first node to join the cluster, it becomes the master node. The following command confirms that node1 is the master node:

              vxdctl -c mode

       To determine if reconfiguration of node3 is complete, examine the value returned from running the following command on node3:

              vxclustadm -v nodestate

       To confirm that node3 is a slave node, the following command is run on node3:

              vxdctl -c mode

       node1 remains as the master node for its lifetime in the cluster. To remove node1 from the cluster, the following command is run on node1:

              vxclustadm stopnode
NOTES
       vxclustadm does not ensure the consistency of cluster membership information.
SEE ALSO
       vxconfigd(1M), vxdctl(1M), vxintro(1M)

       Veritas Volume Manager Administrator's Guide
       Veritas Cluster File System Installation and Configuration Guide

VxVM 5.0.31.1                           24 Mar 2008                           vxclustadm(1M)