Veritas I/O fencing issue on Solaris 10


 
# 1  01-28-2011

I have two clusters running on Solaris 10 servers. VCS itself works fine, but when I configure I/O fencing with coordinator disks, only one node's key is registered on the disks at a time, whereas both nodes should have their keys there. What could be the reason for this?

For example, in the following output only Node2's registrations are seen; if I restart vxfen on Node1, the disks then show only Node1's keys instead.


Code:
vxfenadm -s all -f /etc/vxfentab
 
Device Name: /dev/vx/rdmp/emc_clariion0_17s2
Total Number Of Keys: 1
key[0]:
        [Numeric Format]:  86,70,48,48,48,49,48,49
        [Character Format]: VF000101
   *    [Node Format]: Cluster ID: 1     Node ID: 1   Node Name: Node2
 
Device Name: /dev/vx/rdmp/emc_clariion0_18s2
Total Number Of Keys: 1
key[0]:
        [Numeric Format]:  86,70,48,48,48,49,48,49
        [Character Format]: VF000101
   *    [Node Format]: Cluster ID: 1     Node ID: 1   Node Name: Node2
 
Device Name: /dev/vx/rdmp/emc_clariion0_19s2
Total Number Of Keys: 1
key[0]:
        [Numeric Format]:  86,70,48,48,48,49,48,49
        [Character Format]: VF000101
   *    [Node Format]: Cluster ID: 1     Node ID: 1   Node Name: Node2
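
For comparison, this is what I understand a healthy coordinator disk should show: one registration key per node. The mock-up below is illustrative, assuming Node1 has node ID 0, so that its key would be VF000100 following the VF + cluster ID + node ID key format visible above:

Code:
Device Name: /dev/vx/rdmp/emc_clariion0_17s2
Total Number Of Keys: 2
key[0]:
        [Numeric Format]:  86,70,48,48,48,49,48,48
        [Character Format]: VF000100
   *    [Node Format]: Cluster ID: 1     Node ID: 0   Node Name: Node1
key[1]:
        [Numeric Format]:  86,70,48,48,48,49,48,49
        [Character Format]: VF000101
   *    [Node Format]: Cluster ID: 1     Node ID: 1   Node Name: Node2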

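For reference, the fencing configuration can be compared across both nodes with roughly the following (a sketch; file locations may vary between VxFEN versions):

Code:
# Fencing driver state and cluster membership, run on each node
vxfenadm -d

# Both nodes should point at the same coordinator disk group and fencing mode
cat /etc/vxfendg
cat /etc/vxfenmode

# Both nodes should list the same coordinator disk paths
cat /etc/vxfentab

# Then re-read the keys from the coordinator disks
vxfenadm -s all -f /etc/vxfentab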

P.S.: Mods, if this is not the right place to post my Veritas question, please delete this thread.