cmquerycl - query cluster or node configuration information
cmquerycl [-k] [-v] [-f format] [-l limit] [-w probe_type] [-h ipv4|ipv6] [-c cluster_name] [-C cluster_ascii_file] [-q quorum_server [qs_ip2] | -L lock_lun_device] [-n node_name [-L lock_lun_device]]...
cmquerycl searches all specified nodes for cluster configuration and Logical Volume Manager (LVM) information. Cluster configuration information includes network information such as LAN interfaces, IP addresses, bridged networks and possible heartbeat networks. LVM information includes volume group (VG) interconnection and file system mount point information. This command does not perform automatic discovery of lock LUN devices. (HP-UX only) To prevent cmquerycl from probing hardware devices, list them in /etc/cmcluster/cmnotdisk.conf, giving the device file name in the DEVICE_FILE section. These could be CD-ROM, DVD-ROM, CD-RW or other peripheral devices that should not be probed, or whose description string does not match the possible TYPEs listed in /etc/cmcluster/cmignoretypes.conf.
This command should be run as the first step in preparing for cluster configuration. It may also be used as a troubleshooting tool to
identify the current configuration of a cluster.
If neither node_name nor cluster_name is specified, this command will search all nodes that are accessible over the system's networks and
return a list of machines running Serviceguard products. Also in this situation, the -l and -C options are ignored.
If node_name is specified, it cannot contain the full domain name but should match only the system's hostname.
The -C option may be used to create a cluster ASCII file which can be customized for the desired configuration. This file can then be verified by using the cmcheckconf command and distributed to all the cluster nodes by using the cmapplyconf command. If the -C option is not specified, output will be directed to stdout.
cmquerycl supports the following options:
-C cluster_ascii_file
Cluster configuration information will be saved in cluster_ascii_file, which can then be verified by using the cmcheckconf(1m) command and distributed to all other cluster nodes with the cmapplyconf(1m) command. The -C option is only valid, and cluster_ascii_file will only be created, when -C is used together with the -c and/or -n options.
-c cluster_name
Cluster information will be queried from cluster cluster_name. This option may be used in conjunction with the -n and -C options to generate a new ASCII file for use in adding nodes to or removing nodes from an existing cluster. This option also creates commented-out entries in the new ASCII file for any additional networks configured on existing cluster nodes; you can add these networks to a running cluster by uncommenting the entries. When -n is used, the heartbeat network(s) and cluster lock device(s) that are currently configured for the cluster will be included in the new ASCII file. Refer to the Managing Serviceguard manual for more detail.
-f format Select the output format to display. The format parameter may be one of the following values:
table This option displays a human readable tabular output format. This is the default output format if no -f is
specified on the command line.
line This option displays a machine parsable output format. Data items are displayed, one per line, in a way that
makes them easy to manipulate with tools such as grep(1) and awk(1).
-k To speed up the disk query process, this option eliminates some disk probing, and does not return information about
potential cluster lock volume groups and lock physical volumes.
When the -k option is used with the -C option, a list of cluster-aware volume groups is provided in the ASCII file,
but a suggested cluster lock volume group and physical volume are not provided. The ASCII file can be edited to
include lock disk information if necessary.
-l limit Limit the information included to type limit. Legal values for limit are:
lvm Include logical volume information only.
When -l lvm is used with -w local or -w full, -l lvm takes precedence, and only logical volume information is provided. When -l lvm is used with both -w local or -w full and -C, the effect is to provide local (or full) network information in addition to logical volume information in the ASCII file.
When -l lvm is used with -k, the effect is to provide logical volume information, but no cluster lock information.
net Include network information only.
When -l net is used with -k, the effect is to provide local network information only, with no disk probing.
-L lock_lun_device
Specifies the block device file name for the lock LUN device to be used as the cluster lock. The -L option can be used in one of two forms. In the first form, when the device file name is the same across all nodes, the -L option can be specified once before any -n options. Alternatively, in the second form, multiple -L options can be used to specify node-specific device file names. In this form, there must be one -L option for each -n option used. -L and -q are mutually exclusive. Without -C this option has no effect.
-n node_name Specifies node_name should be included in the set of nodes to query. node_name must be valid and must match the node
name returned by the hostname(1) command. cmquerycl may be executed without any arguments to return a list of valid
node names that can be used with this option.
-q quorum_server [qs_ip2]
Specifies the host name or the IP address of the quorum server. Two IP addresses or hostnames can be specified, using
space as a separator. The IP addresses must be IPv4 addresses or hostnames that resolve to IPv4 addresses. Follow
instructions in the Managing Serviceguard manual to configure IPv6 quorum server addresses. The quorum server must be
running and reachable from all the configured nodes through all the quorum server IP addresses when the cmquerycl command is executed. A maximum of two IP addresses is supported for the quorum server. The -q option may only be used in conjunction with the -c or -n options. All of the nodes being queried, as specified by the -c and -n options, must be authorized to access the specified quorum server. -L and -q are mutually exclusive.
-v Verbose output will be displayed.
-h address_family
Specifies the heartbeat IP address family. If both IPv4 and IPv6 addresses are configured on the LAN, this option specifies which address family to use for cluster heartbeat communication. Without -C and -n this option has no effect. The -h and -c options are mutually exclusive. When cmquerycl is used without the -h option, the best available configuration is chosen; IPv4 takes precedence over IPv6 if both address families are available. The legal values of address_family are:
ipv4 Only IPv4 addresses will be chosen for heartbeat if available; otherwise an error message is displayed.
ipv6 Only IPv6 addresses will be chosen for heartbeat if available; otherwise an error message is displayed.
-w probe_type
Specifies the type of network probing performed. The legal values of probe_type are:
none No network probing is done.
local LAN connectivity is verified between interfaces within each node only. Bridged network information is not complete when the local option is used. This is the default behavior if the -w option is not specified and the -C option is used.
full Actual connectivity is verified among all LAN interfaces on all nodes in the cluster. Note that you must use this option to discover cross-subnet connectivity or route connectivity between the nodes in the cluster. You must also use this option to discover gateways for possible polling targets for IP-monitored subnets.
Tabular output format
cmquerycl writes information to stdout that can be used in both configuration and troubleshooting.
Bridged networks
Groups of network interfaces. These groupings represent link-level network connections, indicating that the interfaces are connected either by a network segment or via a bridge.
IP subnets IP subnet information based on the bridged networks. The subnet is a masking of the IP address by the subnet mask,
which is specified by ifconfig(1). The netstat(1) command will also show this information. This subnet name can be
used as a parameter in the package configuration file created via cmmakepkg(1m).
Possible heartbeat IPs
List of IP subnets and addresses which are connected to all nodes specified. Both IPv4 and IPv6 addresses are supported for heartbeat.
Route connectivity
Groups of IP subnets. These groupings represent IP-level connectivity, indicating that these subnets are routed to each other (potentially by a router).
IP Monitor Subnets
List of IP subnets and possible polling targets for each subnet. The possible polling targets are gateways of each
subnet that cmquerycl detected. Note that gateways are only detected with the -w full option.
LVM volume groups
Names of volume groups, listed by node. If a volume group has been imported to one or more systems, all systems which are connected to that volume group will be displayed. See vgimport(1).
LVM physical volumes
Names of the physical volumes in each volume group including block device file and hardware path. This information
is node specific since a physical volume may have a different hardware path or device file name on each node.
LVM logical volumes
Names of logical volumes in each volume group, including information on use, such as filesystem and mount point location if it is currently mounted. This information is node specific.
cmquerycl optionally creates a cluster_ascii_file which contains configuration information for the specified nodes or cluster. This file can be verified by the cmcheckconf command. The cluster_ascii_file contains the following fields, with default values which can be customized:
CLUSTER_NAME Name of the cluster. This name will be used to identify the cluster when viewing or manipulating it. This name must
be unique across other Serviceguard clusters.
This parameter determines the Internet Protocol address family to which Serviceguard will attempt to resolve cluster node names and quorum server host names. The default value is IPV4. Setting this parameter to IPV4 will cause cluster node names and quorum server host names to resolve to IPv4 addresses only, even if IPv6 addresses are configured for the node names or quorum server host names in addition to the IPv4 addresses.
Setting this parameter to ANY will cause cluster node names and quorum server host names to resolve to both IPv4 and
IPv6 addresses. The /etc/hosts file on each node must contain entries for all IPv4 and IPv6 addresses used throughout the cluster, including all STATIONARY_IP and HEARTBEAT_IP addresses as well as any other addresses, before ANY can be used. There must be at least one IPv4 address in this file.
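As a sketch, the top of a generated cluster_ascii_file might look like the following (node names, interfaces, and addresses are hypothetical, and the exact set of parameters emitted depends on the configuration and Serviceguard version):

```
CLUSTER_NAME            clusterA
QS_HOST                 qshost
MEMBER_TIMEOUT          14000000

NODE_NAME               node1
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            192.10.25.18
NETWORK_INTERFACE       lan1
STATIONARY_IP           192.10.26.18

NODE_NAME               node2
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            192.10.25.19
```

Each NETWORK_INTERFACE, HEARTBEAT_IP, and STATIONARY_IP entry applies to the NODE_NAME that precedes it.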
FIRST_CLUSTER_LOCK_VG (HP-UX Only)
Volume group that holds the cluster lock. The cluster lock is used to break a cluster formation tie when two separate groups of nodes are trying to form a cluster and each group would have 50% of the nodes. To break the tie, the cluster lock disk is used. The winning group remains running; the losing nodes are halted.
QS_HOST The quorum server that acts as a cluster membership tie-breaker, an alternative to the volume group (disk device) cluster lock.
QS_ADDR Specifies an alternate IP address for the quorum server. The QS_HOST and QS_ADDR values must not resolve to the same IP address; otherwise the cmquerycl command will fail.
The interval, specified in microseconds, at which quorum server health is checked. Default is 300000000 (5 minutes).
Minimum value is 10000000 (10 seconds). Maximum value is 2147483647 (approx. 35 minutes).
QS_TIMEOUT_EXTENSION
This is an optional parameter (in microseconds) that is used to increase the time interval for quorum server response. The default quorum server timeout is calculated from the Serviceguard cluster parameter MEMBER_TIMEOUT. For clusters of 2 nodes, it is 0.1 * MEMBER_TIMEOUT; for more than 2 nodes it is 0.2 * MEMBER_TIMEOUT.
If you are experiencing quorum server polling timeouts (see the system log), if your quorum server is on a busy network, or if your quorum server is serving many clusters, the default quorum server timeout may not be sufficient. You can use QS_TIMEOUT_EXTENSION to allocate more time for quorum server requests. You should also consider using this parameter if you want to use small MEMBER_TIMEOUT values (under 14 seconds).
The value of QS_TIMEOUT_EXTENSION is added directly to the quorum server timeout. This, in turn, directly increases the amount of time it takes for cluster reformation in the event of a node failure. For example, if QS_TIMEOUT_EXTENSION is set to 10 seconds, the cluster reformation will take 10 seconds longer than if QS_TIMEOUT_EXTENSION were set to 0. This delay applies even when there is no delay in contacting the quorum server. The recommended value for QS_TIMEOUT_EXTENSION is 0, which is used as the default. The maximum supported value is 300000000 (5 minutes).
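The quorum server timeout calculation described above can be sketched as plain shell arithmetic. This is illustrative only, not a Serviceguard command; all values are hypothetical and in microseconds:

```shell
MEMBER_TIMEOUT=14000000        # the 14-second default
QS_TIMEOUT_EXTENSION=2000000   # 2 seconds of extra allowance (hypothetical)
NODES=2

if [ "$NODES" -le 2 ]; then
    base=$((MEMBER_TIMEOUT / 10))   # 0.1 * MEMBER_TIMEOUT for 2-node clusters
else
    base=$((MEMBER_TIMEOUT / 5))    # 0.2 * MEMBER_TIMEOUT for larger clusters
fi
qs_timeout=$((base + QS_TIMEOUT_EXTENSION))
echo "quorum server timeout: $qs_timeout us"
```

With these figures the timeout comes to 1400000 + 2000000 microseconds, showing how the extension is added directly to the computed base.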
NODE_NAME Name of a node participating in the cluster. The following NETWORK_INTERFACE, HEARTBEAT_IP, STATIONARY_IP, CLUSTER_LOCK_LUN, FIRST_CLUSTER_LOCK_PV, CAPACITY_NAME, and CAPACITY_VALUE variables are associated with this node until the next NODE_NAME entry occurs.
NETWORK_INTERFACE
LAN interface (HP-UX examples are lan0, lan1; Linux examples are eth0, eth1). This may be specified repeatedly for all applicable LAN interfaces.
HEARTBEAT_IP The heartbeat IP address. This is either an IPv4 or an IPv6 address to be used for sending heartbeat messages.
STATIONARY_IP This is the IP address dedicated to the node. This IP address will stay with the node and will not be moved. This
IP address can be either IPv4 or IPv6 address.
CAPACITY_NAME
CAPACITY_VALUE The CAPACITY_NAME and CAPACITY_VALUE parameters are used to define a capacity for the node. Node capacities correspond to package weights; node capacity is checked against the corresponding package weight to determine if the package can run on that node.
CAPACITY_NAME specifies a name for the capacity. The capacity name can be any string that starts and ends with an
alphanumeric character, and otherwise contains only alphanumeric characters, dot (.), dash (-), or underscore (_).
Maximum string length is 39 characters. Duplicate capacity names are not allowed.
CAPACITY_VALUE specifies a value for the CAPACITY_NAME that precedes it. This is a floating point value between 0 and
1000000. Capacity values are arbitrary as far as Serviceguard is concerned; they have meaning only in relation to the
corresponding package weights.
Node capacity definition is optional, but if CAPACITY_NAME is specified, CAPACITY_VALUE must also be specified; CAPACITY_NAME must come first. To specify more than one capacity, repeat this process for each capacity. NOTE: If a given capacity is not defined for a node, Serviceguard assumes that capacity is infinite on that node. For example, if pkgA, pkgB, and pkgC each specify a weight of 1000000 for WEIGHT_NAME "memory", and CAPACITY_NAME "memory" is not defined for node1, then all three packages are eligible to run at the same time on node1, assuming all other requirements are met.
cmapplyconf will fail if any node defines a capacity and any package has min_package_node as the failover policy or has automatic as the failback policy.
You can define a maximum of four capacities.
NOTE: Serviceguard supports a capacity with the reserved name "package_limit". This can be used to limit the number
of packages that can run on a node. If you use "package_limit", you cannot define any other capacities for this clus-
ter, and the default weight for all packages is one.
For all capacities other than "package_limit", the default weight for all packages is zero.
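For example, a node entry defining two capacities might look like this (the capacity names and values are purely illustrative; each CAPACITY_VALUE follows its CAPACITY_NAME):

```
NODE_NAME       node1
CAPACITY_NAME   memory
CAPACITY_VALUE  8192
CAPACITY_NAME   processor
CAPACITY_VALUE  4
```

Note that the reserved name "package_limit" cannot be combined with other capacities, so it does not appear here.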
CLUSTER_LOCK_LUN
The lock LUN acts as a cluster membership tie-breaker. It is an alternative to the quorum server.
FIRST_CLUSTER_LOCK_PV (HP-UX Only)
The physical volume path to the disk holding the cluster lock for this node. This disk must be part of the FIRST_CLUSTER_LOCK_VG on all nodes in the cluster, but the device path to this disk may be different on each node in the cluster.
MEMBER_TIMEOUT Number of microseconds to wait for a heartbeat message before declaring a node failure. The cluster will reform to
remove the failed node from the cluster.
Heartbeat messages are sent at a regular interval of 0.25 * MEMBER_TIMEOUT, up to a maximum of 1 second. The quorum server timeout is also a factor of MEMBER_TIMEOUT (see QS_TIMEOUT_EXTENSION, above, for details). Timeouts for the cluster lock and lock LUN are both 0.2 * MEMBER_TIMEOUT, with the dual cluster lock timeout set to a fixed 13 seconds.
MEMBER_TIMEOUT defaults to 14000000 (14 seconds). A value of 10 to 25 seconds is appropriate for most installations.
For installations in which the highest priority is to reform the cluster as fast as possible in case of node failure,
this value can be set as low as 3 seconds. When a single heartbeat network with standby interfaces is configured,
this value cannot be set below 14 seconds if the network interface type is Ethernet, or 22 seconds if the network
interface type is InfiniBand (HP-UX only). Note that a system hang or a network load spike whose duration exceeds
MEMBER_TIMEOUT will result in one or more node failures. A system hang or network load spike whose duration exceeds
0.1 * MEMBER_TIMEOUT, and which happens to occur during cluster reformation, can also result in one or more node
failures. Hangs of this duration are logged in the system log, which should be monitored so MEMBER_TIMEOUT can be
increased when necessary. See the Managing Serviceguard manual for more guidance on setting MEMBER_TIMEOUT.
The maximum value recommended for MEMBER_TIMEOUT is 60000000 (60 seconds).
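The intervals derived from MEMBER_TIMEOUT above can be sketched with shell arithmetic. This is illustrative only (values in microseconds, using the stated 14-second default):

```shell
MEMBER_TIMEOUT=14000000                  # 14-second default

hb=$((MEMBER_TIMEOUT / 4))               # heartbeat interval: 0.25 * MEMBER_TIMEOUT ...
if [ "$hb" -gt 1000000 ]; then
    hb=1000000                           # ... capped at a maximum of 1 second
fi
lock=$((MEMBER_TIMEOUT / 5))             # cluster lock / lock LUN timeout: 0.2 * MEMBER_TIMEOUT

echo "heartbeat interval: $hb us"
echo "lock timeout:       $lock us"
```

With the default, 0.25 * MEMBER_TIMEOUT would be 3.5 seconds, so the 1-second cap applies to the heartbeat interval.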
Number of microseconds to wait for a new cluster to form after a failure, before giving up. Default is 600000000 (10 minutes).
Number of microseconds between network polling messages. Default is 2000000 (2 seconds).
The optional CONFIGURED_IO_TIMEOUT_EXTENSION parameter (in microseconds) is used to increase the time interval, after the detection of one or more node failures, by which all application activity, including pending I/O on the failed node, is guaranteed to have ceased. This parameter must be set for Extended Distance Clusters using iFCP interconnects between sites. See the manual "Understanding and Designing Serviceguard Disaster Tolerant Architectures" for more information. Default is 0.
NETWORK_FAILURE_DETECTION (HP-UX Only)
This parameter determines which network failure detection method will be used for the cluster. The default value is INOUT; when this value is applied, a network interface will be marked down only when both inbound and outbound traffic to and from the interface have stopped.
By setting the value to INONLY_OR_INOUT, the enhanced inbound failure detection method will be applied throughout the cluster. With this method, when inbound traffic to a network interface stops, the Serviceguard Network Manager will start a mechanism to determine whether inbound traffic to the card has actually stopped. If it has, the card will be marked down. The card will also be marked down when both inbound and outbound traffic to and from the interface stop.
Thorough testing should be done when setting the parameter value to INONLY_OR_INOUT, before running applications on the cluster, to ensure the cluster network configuration is suitable for this option. The network environment has a considerable impact on Serviceguard when the enhanced inbound failure detection method is applied. Make sure that the following conditions are met:
o All bridged nets in the cluster have more than two interfaces each.
o Each primary interface has at least one standby interface, connected to a standby switch.
o Each primary switch is directly connected to its standby.
o There is no single point of failure anywhere in the bridged nets.
Please refer to the Managing Serviceguard Manual for more details and examples.
NETWORK_AUTO_FAILBACK (HP-UX Only)
This parameter determines how the cluster will handle the recovery of the primary LAN interface after it has failed
over to the standby interface because of a link-level failure. The default value is YES. Setting this parameter to YES will cause the IP address(es) to fail back to the primary LAN interface from the standby when the primary LAN interface recovers at link level.
Setting this value to NO will cause the IP address(es) to fail back to the primary LAN interface only when a user uses cmmodnet(1m) to re-enable the interface.
This feature does not affect how failback is handled if IP level failures are detected with IP monitoring. However,
if a link level failure happens after an IP-level failure, this parameter's setting will be ignored.
SUBNET Name of a subnet in the cluster that is to be configured with or without IP monitoring. All entries for IP_MONITOR and POLLING_TARGET are associated with this subnet until the next SUBNET entry occurs.
IP_MONITOR This parameter specifies whether or not the subnet specified in the SUBNET entry will be monitored at the IP layer. To enable IP monitoring for the subnet, set IP_MONITOR to ON; to disable it, set the value to OFF. When IP monitoring is enabled and a network interface in that subnet fails at the IP level, the interface will be marked down. If IP monitoring is disabled, failures occurring only at the IP level will not be detected.
POLLING_TARGET The IP address to which polling messages are sent from all network interfaces in this SUBNET, if IP_MONITOR is set to ON, to determine the health of network interfaces at the IP level. The POLLING_TARGET entry can be repeated as needed, as each subnet can have multiple polling targets. When IP_MONITOR is set to ON but no POLLING_TARGET is specified, polling messages are sent between network interfaces in the same subnet.
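A subnet entry with IP monitoring enabled and two polling targets might look like this (the subnet and gateway addresses are hypothetical):

```
SUBNET          192.10.25.0
IP_MONITOR      ON
POLLING_TARGET  192.10.25.1
POLLING_TARGET  192.10.25.2
```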
Maximum number of packages which can be configured in the cluster. The legal values are from 0 to 300. Default is
300. The parameter can be changed online. This parameter may be deprecated in a future release.
WEIGHT_NAME
WEIGHT_DEFAULT The optional WEIGHT_NAME and WEIGHT_DEFAULT parameters are used to define a default value for this weight for all packages (except system multi-node packages). Package weights correspond to node capacities; node capacity is checked against the corresponding package weight to determine if the package can run on that node.
WEIGHT_NAME specifies a name for a weight that corresponds to a capacity specified earlier in this file. Weight is
defined for a package, whereas capacity is defined for a node. For any given weight/capacity pair, WEIGHT_NAME,
CAPACITY_NAME (and weight_name in the package configuration file) must be the same. The rules for forming all three
are the same. See the discussion of the capacity parameters earlier in this file.
NOTE: A weight (WEIGHT_NAME/WEIGHT_DEFAULT) has no meaning on a node unless a corresponding capacity (CAPACITY_NAME/CAPACITY_VALUE) is defined for that node. For example, if CAPACITY_NAME "memory" is not defined for node1, then node1's "memory" capacity is assumed to be infinite. Even if pkgA, pkgB, and pkgC each specify the maximum weight of 1000000 for WEIGHT_NAME "memory", all three packages are eligible to run at the same time on node1, assuming all other requirements are met.
WEIGHT_DEFAULT specifies a default weight for this WEIGHT_NAME for all packages. This is a floating point value
between 0 and 1000000. Package weight default values are arbitrary as far as Serviceguard is concerned; they have
meaning only in relation to the corresponding node capacities.
The package weight default parameters are optional. If they are not specified, a default value of zero will be
assumed. If defined, WEIGHT_DEFAULT must follow WEIGHT_NAME. To specify more than one package weight, repeat this
process for each weight.
Note: for the reserved weight "package_limit", the default weight is always one. This default cannot be changed in the cluster configuration file, but it can be overridden in the package configuration file.
For any given package and WEIGHT_NAME, you can override the WEIGHT_DEFAULT set here by setting weight_value to a different value for the corresponding weight_name in the package configuration file.
cmapplyconf will fail if you define a default for a weight and no node in the cluster specifies a capacity of the same name. You can define a maximum of four weight defaults.
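For example, a default weight might be declared as follows (the name and value are illustrative; WEIGHT_DEFAULT must follow its WEIGHT_NAME, and the name must match a CAPACITY_NAME defined on at least one node):

```
WEIGHT_NAME     memory
WEIGHT_DEFAULT  1000
```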
The next three entries are used to set access control policies for the cluster. Access policies control non-root user access to the
cluster. The first line of each policy must be USER_NAME, the second USER_HOST, and the third USER_ROLE. NOTE: When this command is run by an authorized user who is not the superuser (UID=0), only those entries that match the user running the command and the node where the command is run are displayed.
USER_NAME Specifies the name of the user to whom access is to be granted. The user name value can either be ANY_USER or a maximum of 8 login names from the /etc/passwd file on USER_HOST. If you misspell the keyword ANY_USER, it will be interpreted as the name of a specific user, and the resultant access policy will not be what you expect.
USER_HOST Specifies the hostname from which the user can issue Serviceguard commands. If using Serviceguard Manager, it is the COM server hostname. It can be set to ANY_SERVICEGUARD_NODE, CLUSTER_MEMBER_NODE, or a specific hostname. If a hostname is specified, it cannot contain the full domain name or an IP address. If you misspell the keywords ANY_SERVICEGUARD_NODE or CLUSTER_MEMBER_NODE, they will be interpreted as the name of a specific node, and the resultant access policy will not be what you expect.
USER_ROLE Specifies the access granted to the user. It must be one of these three values:
MONITOR This role grants permission to view information about the entire cluster.
This role is granted by default to any Serviceguard user when a new cluster is created; the following entry is written to the output cluster configuration file the first time it is generated:
"USER_NAME=ANY_USER" "USER_HOST=ANY_SERVICEGUARD_NODE" "USER_ROLE=MONITOR"
NOTE: If you remove or change this entry, you may not be able to manage the cluster with some versions of HP SIM or HP VSE Manager. Consult the documentation for those products before making such a change.
PACKAGE_ADMIN This role grants MONITOR permission, plus permission to issue administrative commands for all packages in the cluster.
FULL_ADMIN This role grants MONITOR and PACKAGE_ADMIN permission, plus permission to issue administrative commands for the cluster.
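A complete access policy is a three-line triplet; for example (the user and host names are hypothetical):

```
USER_NAME       oper1
USER_HOST       node1
USER_ROLE       PACKAGE_ADMIN
```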
VOLUME_GROUP (HP-UX Only)
Name of a volume group to be marked cluster-aware. The volume group will be used by clustered applications via the vgchange -a e command, which marks the volume group for exclusive access. Multiple VOLUME_GROUP keywords may be specified. By default, cmquerycl will specify each VOLUME_GROUP that is accessible by two or more nodes within the cluster. This volume group will be initialized to be part of the cluster such that the volume group can only be activated via the vgchange -a e option.
The following values may also be used in the cluster_ascii_file for accessing a second cluster disk lock: SECOND_CLUSTER_LOCK_VG, SECOND_CLUSTER_LOCK_PV. These values are only recommended for two-node configurations using only internal disks, where there is a possibility of the first cluster lock PV failing at the same instant a node fails.
In addition to the fields already listed, the following fields are used in the cluster_ascii_file when the Serviceguard Extension
for RAC (HP-UX only) product is installed:
OPS_VOLUME_GROUP
Name of a volume group to be marked OPS or RAC cluster-aware. The volume group will be used by OPS or RAC cluster applications via the vgchange -a s command, which marks the volume group for shared access. Multiple OPS_VOLUME_GROUP keywords may be specified.
In addition, the following fields can be used in the cluster_ascii_file as part of an HP approved, site aware disaster tolerant
solution (HP-UX only). In the cluster_ascii_file, SITE_NAME must precede any NODE_NAME entries and SITE must follow the NODE_NAME
it is being associated with.
SITE_NAME Defines a logical site name with which nodes may be associated. Each SITE_NAME must be associated with at least one NODE_NAME (see SITE).
SITE Associates a node with a site name. Modifying the SITE entry requires the node to be offline. All nodes in the cluster
must be site-aware, or none; a cluster must not consist of some nodes that are associated with sites and some that are
not associated with any.
Line output format
The line output format is designed for simple machine parsing using tools such as grep(1), cut(1) or awk(1).
Each line of output is of the form:
name=value
The value field is a string that describes a single piece of configuration or status data related to name.
The name field uniquely identifies a single configuration item and is composed of one or more elements, separated by a pipe character. In cases where the output contains multiple lines with the same type of name element, the name element is further qualified by an element identifier. The element identifier is appended to the name element and separated by a colon. The element identifier is used to distinguish which object the configuration information corresponds to.
In the following example, the name element path for IP address information is node.interface.ip_address. Since multiple IP address configuration elements are being displayed, the node and interface elements are further qualified with the element identifiers cnode1 (the name of the node containing the interface) and lan1 or lan3 (the interfaces containing the IP address).
# cmquerycl -f line -c cluster | grep ip_address
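The record below is not real cmquerycl output; it is a hypothetical line in the shape described above, used to show how the pipe-separated elements, colon-separated identifiers, and = value can be pulled apart in shell:

```shell
# A hypothetical line-format record (field names follow the description
# above; the address value is invented for illustration):
line='node:cnode1|interface:lan1|ip_address=192.10.25.12'

value=${line#*=}                        # everything after '=': the value field
iface=$(printf '%s\n' "$line" |
        awk -F'|' '{ split($2, a, ":"); print a[2] }')   # identifier of the second element
echo "$iface $value"
```

The same split-on-pipe, split-on-colon approach generalizes to any name element path in the line format.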
cmquerycl returns the following value:
0 Successful completion.
1 Command failed.
To poll the configurations for node1 and node2 and to save the results in file clusterA.config :
cmquerycl -v -n node1 -n node2 -C clusterA.config
To poll the configurations for node1 and node2 , to use IPv6 addresses for the Heartbeat, and save the results in file clusterA.config :
cmquerycl -h ipv6 -C clusterA.config -n node1 -n node2
To poll the configurations for node1 and node2 , to use a physical lock LUN device (which shares the same file name across nodes) as a
cluster lock, and to save the results in file clusterA.config :
cmquerycl -C clusterA.config -L lock_lun_device -n node1 -n node2
To display a list of monitored subnets in clusterA :
cmquerycl -f line -c clusterA |
grep '^node.*subnet=' | cut -d= -f2 | sort -u
To poll the configurations for node1 and node2 , to use a physical lock LUN device (which does not share the same file name across nodes) as a cluster lock, and to save the results in file config :
cmquerycl -C config -n node1 -L lock_dev1 -n node2 -L lock_dev2
To check node1 access and authorization to quorum server qs_host , with the additional IP address qs_ip2 :
cmquerycl -n node1 -q qs_host qs_ip2
To poll the configuration for clusterA , to use qs_host (with the additional IP address qs_ip2 ) as the cluster lock, and to save the results in file clusterA.config :
cmquerycl -c clusterA -q qs_host qs_ip2 -C clusterA.config
To create a configuration file that can be used to change or add qs_host as the cluster lock to clusterA :
cmquerycl -c clusterA -q qs_host -C clusterA.config
To query using the -k option
cmquerycl -v -k -C clusterA.config
To query using the -w option
cmquerycl -v -w local -C clusterA.config
To query using the -k and -w option
cmquerycl -v -k -w local -C clusterA.config
To query the configuration for clusterA :
cmquerycl -v -c clusterA
To display the LVM configuration for clusterA :
cmquerycl -v -c clusterA -l lvm
To create an ASCII file that can be used to remove node2 and add node3 to clusterA (assuming clusterA contains node1 and node2 to begin with):
cmquerycl -c clusterA -n node1 -n node3 -C clusterA.config
To create an ASCII file that can be used to add new networks or remove existing ones from nodes configured in clusterA : (assuming clusterA
contains node1 and node2)
cmquerycl -c clusterA -C clusterA.config
Note: clusterA.config will contain commented-out entries of the new networks, if any, for node1 and node2. You can uncomment the
entries for networks you want to include in the new configuration or comment-out entries for networks you want to remove. When you
use the -n option, commented-out entries are created (in the ASCII file) only for existing cluster node(s); in this case, given the
previous example, entries would be created only for node1.
This command is part of the cluster configuration process. Following is an example of configuring a cluster with two nodes and two packages:
cmquerycl -C clusterA.config -n node1 -n node2
cmmakepkg -p pkg1.config
cmmakepkg -p pkg2.config
cmmakepkg -s pkg1.control.script
cmmakepkg -s pkg2.control.script
< customize clusterA.config >
< customize pkg1.config >
< customize pkg2.config >
< customize pkg1.control.script >
< customize pkg2.control.script >
cmcheckconf -C clusterA.config -P pkg1.config -P pkg2.config
cmapplyconf -C clusterA.config -P pkg1.config -P pkg2.config
cmquerycl was developed by HP.
cmapplyconf(1m), cmcheckconf(1m), cmmakepkg(1m), cmruncl(1m), netstat(1), lanscan(1) (HP-UX only), vgimport(1), vgchange(1).
Requires Optional Serviceguard Software cmquerycl(1m)