AIX high availability 1-3/69
Posted by walterchang100 in Operating Systems > AIX on Tuesday, 18 September 2012, 10:10:41 PM (post 302702797)

Hi,

Can someone help with the answers to the following questions?

1. When PowerHA SystemMirror 7.1 is installed on AIX 7.1, what RSCT component does Cluster Aware AIX (CAA) replace?

A. Group Services
B. Resource Manager
C. Topology Services
D. Resource Monitoring and Control (RMC)
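For background on what the question is testing: on a node where PowerHA SystemMirror 7.1 is installed, the CAA layer can be inspected directly. A minimal sketch, assuming an already-configured cluster; these are AIX-only commands and will not run elsewhere, and the exact output varies per cluster:

```shell
# Inspect the Cluster Aware AIX (CAA) layer on a PowerHA 7.1 node.
lscluster -c    # CAA cluster configuration: name, ID, member nodes
lscluster -m    # node state as seen by CAA's kernel-level monitoring
lscluster -i    # network interfaces CAA uses for heartbeating
```

Comparing this output with what RSCT used to report is a good way to see which RSCT component's job CAA has taken over.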


2. When implementing PowerHA in a cross site LVM configuration, which logical volume option is required?

A. Serialize IO=yes
B. Mirror Write Consistency=on
C. Scheduling Policy=sequential
D. Allocation Policy=superstrict
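Option D refers to the LVM superstrict inter-disk allocation policy, which confines each mirror copy to its own set of disks. A hedged sketch of setting it when creating a mirrored logical volume; the VG, LV, and hdisk names (hdisk2 at one site, hdisk3 at the other) are illustrative, and the commands only run on an AIX host:

```shell
# Create a two-copy mirrored LV with superstrict allocation.
# -c 2  : two mirror copies
# -s s  : superstrict allocation policy (each copy on its own disk set)
# -u 1  : at most one physical volume per mirror copy
mklv -y datalv -t jfs2 -c 2 -s s -u 1 datavg 100 hdisk2 hdisk3

# Verify the allocation policy on the new LV:
lslv datalv
```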


3. An administrator is using PowerHA 7 to define a new cluster with the SMIT option "Setup a Cluster, Nodes and Networks" and encounters the following error message:

<please see the attachment>

What is the root cause of the problem?

A. The nodes were not defined in the DNS
B. The /etc/cluster/rhosts are not populated correctly
C. The CAA repository disk is not accessible on all nodes
D. The CAA cluster was not defined before defining the PowerHA Cluster
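Whatever the attached error turns out to show, option B refers to PowerHA 7's cluster communication daemon (clcomd), which reads /etc/cluster/rhosts on every node. A hedged sketch of that check, with illustrative hostnames (AIX-only, shown as a config fragment):

```shell
# /etc/cluster/rhosts must list a resolvable name or address for every
# cluster node, on every node. Example contents (illustrative):
cat /etc/cluster/rhosts
# nodeA
# nodeB

# After editing the file, make clcomd re-read it:
refresh -s clcomd
```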
 

10 More Discussions You Might Find Interesting

1. AIX

AIX and port trunking / high availability

Hi all I was just wondering what modes AIX supports for port trunking ( bonding, etherchannel, link aggregation or whatever you want to call it ) I'm in particular looking for a high availability mode ( other than 802.3ad ) (2 Replies)
Discussion started by: art

2. UNIX for Advanced & Expert Users

Unix high availability and scalability survey

we're in the process of reviewing of unix infrastructure main objective is to consolidate on the less versions possible key decision factors are scalability and high availability options given our multi-datacenter infrastructure, features like HP's continental cluster are top on our wish list... (9 Replies)
Discussion started by: iacopet

3. UNIX for Advanced & Expert Users

High availability/Load balancing

Hi folks, (Sorry I don't know what its technology is termed exactly. High Availability OR load balancing) What I'm going to explore is as follows:- For example, on Physical Servers; Server-1 - LAMP, a working server Server-2 - LAMP, for redundancy While Server-1 is working all... (3 Replies)
Discussion started by: satimis

4. UNIX for Dummies Questions & Answers

iscsi high availability

Hi, I want to set up a iscsi high availability with sheepdog distributed storage. Here is my system set up. Four nodes with sheepdog distributed storage and i am sharing this storage through iscsi using two nodes as well as using a virtual ip set up using ucarp.Two nodes using same iqn. And... (0 Replies)
Discussion started by: jobycxa

5. Red Hat

Redhat 5 High Availability Add-on

Hello Experts, I have a question about Redhat HA Add-On, how can i setup an Active/Active Cluster using Redhat 5.7 64Bit, with Round-Robin technique. Each server will run an application and oracle database without RAC. Thanks (0 Replies)
Discussion started by: karmellove

6. Red Hat

Red Hat High Availability (HA) Cluster

How can we implement a service in HA, which in not available in HA. like sldap or customize application. Requirement Details. NODE1 service slapd is running.(Require) NODE2 service slapd is running.(Require) on both the node replication is happening. Now here requirement is need... (2 Replies)
Discussion started by: Priy

7. Solaris

High availability

hi guys I posted problem last time I didn't find answer to my issue. my problem is as below: I have two servers which work as an actif/standby in high availability system. but when i use command HASTAT -a i have the following message: couldn' find actif node. the servers are sun... (1 Reply)
Discussion started by: zineb06

8. Red Hat

Redhat: High Availability

Hi, I want to create gfs storage. But getting error as below: --> Finished Dependency Resolution Error: Package: pacemaker-1.1.12-22.el7_1.2.x86_64 (rhel-ha-for-rhel-7-server-eus-rpms) Requires: sbd You could try using --skip-broken to work around the problem You could try... (1 Reply)
Discussion started by: mzainal

9. UNIX for Advanced & Expert Users

Email Server High Availability

Hello, We are planning to setup a Email server with High Availability for email services so that if SMTP/POP/IMAP goes down on one server, the services switch to second server. We are planning to use a Linux machines from a hosting provider and will do it using DNS with multiple MX records with... (0 Replies)
Discussion started by: sunnysthakur

10. UNIX for Beginners Questions & Answers

New to AIX: How do I setup high availability on an AIX System

I am new to AIX but not new to unix. I have an interview for an AIX systems admin position and I know they want someone who has knowledge of High Availability, Failover and LPARs From my research so far, It appear powerha is used to setup high availability and failover on Power systems but is... (2 Replies)
Discussion started by: mathisecure
sgmgr(1M)                                                         sgmgr(1M)

NAME
       sgmgr - Serviceguard Manager

SYNOPSIS
       sgmgr [filename] | [-s COMserver [username password] [-c cluster_name ...]] [-s COMserver2 ...]

REMARKS
       Serviceguard Manager is the graphical user interface for Serviceguard or Serviceguard Extension for RAC software, Version A.11.12 or later. Serviceguard products are not included in the standard HP-UX operating system.

DESCRIPTION
       The sgmgr command starts Serviceguard Manager, the graphical user interface for Serviceguard clusters. Serviceguard Manager can be installed on HP-UX, Linux, or Windows. It can be used to view saved data files of a single cluster, or to see running clusters. To see the "live" cluster map, Serviceguard Manager connects to a Serviceguard node on the same subnet as those clusters, specifically to a part of Serviceguard called the Cluster Object Manager (COM).

OPTIONS
       sgmgr supports the options listed below. No options are required; if an option is not specified, the user is prompted to supply it after the interface opens.

       filename
              Open a previously saved or example object data file. The file has the .sgm extension and can display only one cluster. This option cannot be used with any other option.

       -s COMserver
              Specify the Serviceguard node that will be the server. This node's COM queries cluster objects running on its subnets and reports their status and configuration. Servers with Serviceguard Version A.11.12 or later can monitor clusters; Version A.11.14 or later can also perform administrative actions; Version A.11.16 or later can also configure clusters. To connect to another COMserver for another session, repeat the -s option.

       username
              The user login name for the COMserver node. Valid only if COMserver is specified.

       password
              The user password on the COMserver node. Valid only if username is specified.

       (local cluster option)
              In creating the map, the COMserver includes the cluster where it is a member.

       -c cluster_name
              In creating the map, the COMserver reports information about the specified cluster_name(s). Specify clusters with the following cluster software installed: MC/Serviceguard Version A.10.10 or later, MC/LockManager Version A.11.02 or later, Serviceguard OPS or Extension for RAC Version A.11.08 or later, and all versions of MetroClusters and ContinentalClusters. To see several clusters, repeat the -c option.

       (unused nodes option)
              If this option is specified, all COMservers report information about nodes that have Serviceguard installed but are not currently configured in any cluster.

AUTHOR
       sgmgr was developed by HP.

SEE ALSO
       Documents at http://docs.hp.com/hpux/ha, including: Managing Serviceguard; Configuring OPS Clusters with Serviceguard OPS Edition; Using Serviceguard Extension for Real Application Cluster (RAC).

       Series 700 or 800. Works with optional Serviceguard or Serviceguard Extension for RAC software.

                                                                  sgmgr(1M)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.