@bartus
Ad 1) Yes, for the past 2 weeks
Ad 2) We removed an empty OVM repository and the associated LUN
@radoulov
I'll try to get the contents of the messages file from around the incident(s) (it's currently around 650M, so I won't post it all)
EDIT: Added the contents of syslog from the three cluster nodes.
Hi:
I am in the process of configuring the SAN for Solaris to host 6 Oracle 9i databases. We have 30 x 146 GB disks striped with RAID 10 for the SAN, of which 11 are dedicated to database-related storage. Then we have 2 Sun V880 servers with 16 x 73 GB disks and 24 GB of memory.
The questions... (1 Reply)
Hi all,
Sorry if this is in the wrong place but needed to make sure lots of people saw this so that hopefully someone will be able to help.
Basically I've upgraded a test server from 4.3 to 5.3 TL04.
The server has hdisk0 and 1 as rootvg locally but then has another vg setup on our ESS... (5 Replies)
Hi,
I have a strange problem. We're trying to connect an IBM pSeries to a Brocade switch for SAN access, using a badged Emulex card (IBM FC6239). We can configure the device to see the fabric. The only problem we have is that the Brocade sees the HBA as storage, and not as an HBA.
We've zoned... (1 Reply)
Sorry for my English.
We have an IBM JS21 blade. AIX 5.3 updated to 6.
Our JS21 has 2 FC adapters (fcs0 and fcs1).
We have one DS4072 disk system with 2 controllers and 2 FC ports per controller.
This means all AIX FC adapters see all disk-system controllers through 2 FC switches (one FC, two paths).
FC AIX... (4 Replies)
Hi,
We move the RMAN Oracle backup to tape on a daily basis. After a SAN change on the Solaris server, we started receiving ORA-600 errors and the backup fails.
I am from the Oracle DBA team. I need to know whether there is any command to check the SAN change, or the root cause for this... (2 Replies)
I have some T3's we just purchased and we are looking to carve these up into LDOMs.
Just wondering if anyone can give me a quick rundown or the pros/cons of SAN vs. local internal storage for the LDOM itself. These will use external SAN storage for their applications. (0 Replies)
Need to attach Oracle Linux 7.1 to an IBM Storage DS5300.
On AIX we use the sddpcm package as the storage driver, but for Oracle Linux I cannot find a similar driver on the IBM site. Has anyone encountered a case like this? (1 Reply)
Hi everybody,
I am working on a new script; its job is to discover the storage environment (especially multipath) and FC cards, for Solaris 11 SPARC systems for now.
The script seems to work (but may contain mistakes or bugs) with Oracle QLogic FC cards on EMC VMAX systems and Solaris 11 SPARC_64... (0 Replies)
Hello,
We have a few critical databases running on T-series servers. The setup consists of LDOMs and Zone Manager running on them, and the databases run in their zones. We want to get away from this setup and are looking for an alternate solution to run our Oracle databases.
One option is OVM from Oracle.... (7 Replies)
Discussion started by: solaris_1977
7 Replies
LEARN ABOUT HPUX
cmhaltnode
cmhaltnode(1m)
NAME
cmhaltnode - halt a node in a high availability cluster
SYNOPSIS
cmhaltnode [-f] [-v] [-t] [node_name...]
DESCRIPTION
cmhaltnode causes a node to halt its cluster daemon and remove itself from the existing cluster.
To halt the cluster on a node, a user must either be superuser (UID=0) or have an access policy of FULL_ADMIN allowed in the cluster configuration file. See access policy in cmquerycl.
When cmhaltnode is run on a node, the cluster daemon is halted and, optionally, all packages that were running on that node are moved to
other nodes if possible.
If node_name is not specified, the cluster daemon running on the local node will be halted and removed from the existing cluster.
If you issue this command while a cluster is still in the process of forming, the command will fail with the message "Unable to connect to
daemon." If this happens, wait for the cluster to form successfully, then issue the command again.
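The wait-and-retry advice above can be sketched as a small shell helper. This is a hedged illustration, not part of Serviceguard: the function name retry_haltnode, the retry count, and the sleep interval are all assumptions.

```shell
# Hypothetical helper: retry cmhaltnode while the cluster may still be forming.
# The name retry_haltnode, the 5 attempts, and the 30-second wait are
# assumptions, not Serviceguard defaults.
retry_haltnode() {
    node="$1"
    tries=0
    while [ "$tries" -lt 5 ]; do
        if cmhaltnode "$node"; then
            return 0                    # cluster daemon halted successfully
        fi
        tries=$((tries + 1))
        sleep 30                        # give the cluster time to finish forming
    done
    echo "cmhaltnode still failing on $node after 5 attempts" >&2
    return 1
}

# Usage (requires Serviceguard, so commented out here):
# retry_haltnode node2
```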
Options
cmhaltnode supports the following options:
-f Force the node to halt even if there are packages or group members running on it. The group members on the node will be terminated. The halt scripts for all packages running on the node will be run; based on priority or dependency relationships, this may affect packages on other nodes. In other words, packages on other nodes may either start or halt based on this package halting. If the package configuration and current cluster membership permit, and if the package halt script succeeds, the packages will be started on other nodes. Without this option, if packages are running on the given node, the command will fail. If a package fails to halt, the node halt will also fail.
-v Verbose output will be displayed.
-t Test only. Provide an assessment of the package placement without affecting the current state of the nodes or packages. This option validates the node's eligibility with respect to the package dependencies as well as external dependencies such as EMS resources, package subnets, and storage before predicting any package placement decisions. If a package in maintenance mode is running on the nodes being halted, the package will always be halted and will not fail over to another node; the report will not display an assessment for that package.
node_name...
The name of the node(s) to halt.
RETURN VALUE
cmhaltnode returns the following value:
0 Successful completion.
1 Command failed.
EXAMPLES
Halt the cluster daemon on two other nodes:
cmhaltnode node2 node3
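A hedged extension of the example above, assuming the same node names: -t previews the predicted package placement before you commit to a forced halt. These commands require Serviceguard to be installed and are shown for illustration only.

```shell
# Sketch, assumed node names: preview the halt first, then force it.
cmhaltnode -t node2             # test only: report predicted package placement
cmhaltnode -f -v node2 node3    # force halt, moving packages where possible
```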
AUTHOR
cmhaltnode was developed by HP.
SEE ALSO
cmquerycl(1m), cmhaltcl(1m), cmruncl(1m), cmrunnode(1m), cmviewcl(1m), cmeval(1m).
Requires Optional Serviceguard Software cmhaltnode(1m)