Post 302678223 by sunnychen98, Friday 27th of July 2012, 12:39:08 PM
How to grow a VxFS file system when the server is in a Veritas Cluster environment?

Hello,

Usually I use "vxresize" to grow a VxFS file system on a stand-alone server without any problems, but I have just been asked to grow VxFS file systems on Veritas Cluster (VCS) nodes.
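For context, what I normally run on a stand-alone box looks roughly like this (the disk group and volume names below are just placeholders, not from this cluster):

server# vxassist -g mydg maxgrow oravol              # check how far the volume can grow within the disk group
server# /etc/vx/bin/vxresize -g mydg oravol +10g     # grow the volume and its VxFS file system by 10 GB online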

Since I have never done this before, I would like to ask the experts here to confirm that the concept and steps are sound before I proceed.

Example-1:

2-node cluster (server-a and server-b) under VCS control

Request: grow /opt/oracle by an additional 10 GB

What I notice is that the cluster is not using Veritas Cluster File System (CFS) to manage its file systems.

Which means, as I understand it, I can't grow the file system on the master node and expect the slave node to pick up the change automatically; I need to grow the file systems separately on both cluster nodes. Is this concept correct?

server-a# df -h | grep /opt/oracle
/dev/vx/dsk/serveradg/optoracle 20G 13G 6.7G 66% /opt/oracle

server-b# df -h | grep /opt/oracle
/dev/vx/dsk/serverbdg/optoracle 20G 13G 6.8G 66% /opt/oracle
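If that concept is right, I assume the steps on each node would look roughly like this (only a sketch; the disk group and volume names are taken from the df output above, and the free-space check is my own assumption):

server-a# vxassist -g serveradg maxgrow optoracle               # confirm the disk group has room to grow
server-a# /etc/vx/bin/vxresize -g serveradg optoracle +10g      # grow volume and VxFS file system by 10 GB online

server-b# vxassist -g serverbdg maxgrow optoracle
server-b# /etc/vx/bin/vxresize -g serverbdg optoracle +10g

Please correct me if any of that is off.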

The reason I want to make sure is that I don't have solid Veritas Cluster knowledge. I always assumed that growing a file system on one node would automatically be seen by the other nodes in the cluster, but while reading online I noticed this only seems to work when the file systems are under Veritas Cluster File System control. So I want to be careful before I run "vxresize", to make sure I am not going to mess up the cluster.

Can anyone share their experience or confirm that I should still go to each node in the cluster and grow the file systems separately, even though the nodes are in the same cluster, given that they are not under Veritas Cluster File System control?
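For what it's worth, before touching anything I was also planning to check the cluster state first, roughly like this (standard VCS status commands; I am assuming these are safe, read-only pre-checks):

server-a# hastatus -sum      # overall cluster, node and service group status
server-a# hagrp -state       # which node each service group is online on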

Thank you very much,

SC
 
