Veritas I/O fencing issue on Solaris 10


 
# 1, 01-28-2011

I have two clusters running on Solaris 10 servers. VCS is working fine, but when I configure I/O fencing with coordinator disks, only one node has its keys on the disks at a time, whereas both nodes should have their keys registered there. What could be the reason for this?

For example, in the following output only Node2's registrations are seen; if I restart vxfen on node1, the disks then show node1's keys instead.


Code:
vxfenadm -s all -f /etc/vxfentab
 
Device Name: /dev/vx/rdmp/emc_clariion0_17s2
Total Number Of Keys: 1
key[0]:
        [Numeric Format]:  86,70,48,48,48,49,48,49
        [Character Format]: VF000101
   *    [Node Format]: Cluster ID: 1     Node ID: 1   Node Name: Node2
 
Device Name: /dev/vx/rdmp/emc_clariion0_18s2
Total Number Of Keys: 1
key[0]:
        [Numeric Format]:  86,70,48,48,48,49,48,49
        [Character Format]: VF000101
   *    [Node Format]: Cluster ID: 1     Node ID: 1   Node Name: Node2
 
Device Name: /dev/vx/rdmp/emc_clariion0_19s2
Total Number Of Keys: 1
key[0]:
        [Numeric Format]:  86,70,48,48,48,49,48,49
        [Character Format]: VF000101
   *    [Node Format]: Cluster ID: 1     Node ID: 1   Node Name: Node2
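
For comparison, a healthy two-node fencing configuration should show one registration key per node on each coordinator disk. As a rough sketch (assuming node1 has node ID 0, so its key would be VF000100 following the VF<cluster><node> pattern visible above), the expected output per disk would look like:

Code:
Device Name: /dev/vx/rdmp/emc_clariion0_17s2
Total Number Of Keys: 2
key[0]:
        [Numeric Format]:  86,70,48,48,48,49,48,48
        [Character Format]: VF000100
   *    [Node Format]: Cluster ID: 1     Node ID: 0   Node Name: Node1
key[1]:
        [Numeric Format]:  86,70,48,48,48,49,48,49
        [Character Format]: VF000101
   *    [Node Format]: Cluster ID: 1     Node ID: 1   Node Name: Node2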


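Since the keys flip to whichever node last restarted vxfen, it looks as if each node starts fencing without seeing the other node in the cluster membership and then replaces the other's registrations. A minimal set of checks, using standard VCS/vxfen commands (run on both nodes and compare the output):

Code:
# fencing driver state and cluster membership as vxfen sees it
vxfenadm -d

# GAB port membership: port a = GAB, port b = I/O fencing, port h = had (VCS);
# both node IDs should appear on port b
gabconfig -a

# both nodes must use the identical fencing mode and disk policy
cat /etc/vxfenmode
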
P.S.: Mods, if this is not the right place to post a Veritas question, please delete this thread.