AIX Cluster: Show shared file systems
# 1  
Old 11-04-2014
AIX Cluster Show shared file systems.

Hello,
I am working on applications on an AIX 6.1 two-node cluster, with an active and passive node. Is there a command that will show me which mount points / file systems are shared and 'swing' from one node to the other when the active node changes, and which mount points are truly local to each node and not shared?

I can run commands with superuser access, but obviously I do not want to change or break anything; this is a production environment. Thanks for any help you can give.
# 2  
Old 11-04-2014
OK,
That goes by VG. To see which VGs are shared, run
lspv
Some VGs will show as concurrent and some as active; the concurrent ones are shared between the nodes in the cluster.

The VGs (one or more) are combined to form a Resource Group (RG), and that is defined in the cluster.

You cannot fail over a single filesystem; it has to be the whole RG.

If this is a production server, DO NOT play around with failover or fallback.
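To illustrate, here is a read-only sketch that pulls just the concurrent VGs out of lspv output. The here-document is a made-up sample (the VG names in it are hypothetical); on the real cluster you would pipe the actual `lspv` output in instead:

```shell
#!/bin/sh
# Print only the VGs whose lspv state column says "concurrent".
# On the real cluster, replace the here-document with:  lspv
lspv_out=$(cat <<'EOF'
hdisk0  0005493a0a9eb0e2  rootvg      active
hdisk1  0005493a0a9eb1dd  vg_shared1  concurrent
hdisk2  0005493a0a9eb3c4  vg_local1   active
EOF
)
# Column 4 is the state, column 3 the VG name.
shared_vgs=$(printf '%s\n' "$lspv_out" | awk '$4 == "concurrent" {print $3}')
echo "Shared (concurrent) VGs: $shared_vgs"
```

Nothing in that pipeline writes anywhere, so it is safe on production.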
# 3  
Old 11-05-2014
Thanks very much, ibmtech.

When I try out the lspv command on the active and inactive nodes of the cluster, I see:

Code:
lihbidbp12pf:/home/lc25487 $ lspv
hdisk0          0005493a0a9eb0e2                    rootvg          active      
hdisk1          0005493a0a9eb1dd                    vg_scratch      active      
hdisk2          0005493a0a9eb3c4                    vg_fs1          active      
hdisk3          0005493a0a9eb4bf                    vg_fs2          active      
hdisk4          0005493a0a9eb6ab                    vg_fs3          active      
hdisk5          0005493a0a9eb7a6                    vg_fs4          active      
hdisk6          0005493a0a9eb98d                    vg_fs5          active      
lihbidbp12pf:/home/lc25487 $

on the active node, and I see:

Code:
lc25487@lihbidbp11pf: $ lspv
hdisk0          0005493a0a9eaef8                    rootvg          active      
hdisk1          0005493a0a9eb1dd                    vg_scratch                  
hdisk2          0005493a0a9eb3c4                    vg_fs1                      
hdisk3          0005493a0a9eb4bf                    vg_fs2                      
hdisk4          0005493a0a9eb6ab                    vg_fs3                      
hdisk5          0005493a0a9eb7a6                    vg_fs4                      
hdisk6          0005493a0a9eb98d                    vg_fs5                      
/home/lc25487

on the inactive node. I guess this means I do not have any concurrent filesystems, and that hdisk1-hdisk6 (which show the same PVID in the second field on each node) are attached to whichever node is active. Running lspv with the '-l' option on hdisk1 gives me:

Code:
lihbidbp12pf:/home/lc25487 $ lspv -l hdisk1
hdisk1:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
lv_3500               16      16      00..16..00..00..00    /3500
home_lv               48      48      00..48..00..00..00    /3500/home
lv_sca                96      96      96..00..00..00..00    /SCA
scr_lv                96      96      00..45..51..00..00    /3500/scratch
lv_orsyp              96      96      00..00..57..39..00    /3500/orsyp
download_lv           96      96      00..00..00..70..26    /3500/download
lihbidbp12pf:/home/lc25487 $

So I guess these are all logical volumes within hdisk1.

Again, thanks for the info.
# 4  
Old 11-06-2014
It seems hdisk1-hdisk6 hold concurrent VGs.

In a cluster, any VG that is to be shared is changed to concurrent-capable.
If you run
lsattr -El vg_fsX (where X=1,2,3,4,5), and the same for vg_scratch, these are the attributes you should pay attention to:
Code:
auto_on       n                                Auto varyon             True
conc_auto_on  n                                N/A                     True
conc_capable  y                                N/A                     True

i.e. auto_on is 'n' (the VG is not varied on automatically at system reboot) and conc_capable is 'y' (concurrent-capable).


Try these:
cllsgrp
to list the resource groups, and
clRGinfo
to see the status and location of each RG in the cluster.

It is an active/passive cluster, so the shared VGs are varied on (active) only on the node that currently holds the RG; on the passive node they stay offline and are varied on only when the active node or its network goes down.
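Those attribute checks can be wrapped in a harmless read-only loop over the VGs. A sketch (the here-document stands in for real `lsattr -El` output; on the cluster, replace it with the actual command):

```shell
#!/bin/sh
# For each VG, report whether it is concurrent-capable and whether it
# auto-varies on at reboot. Read-only: nothing is changed.
for vg in vg_scratch vg_fs1; do
    # On the real cluster:  attrs=$(lsattr -El "$vg")
    attrs=$(cat <<'EOF'
auto_on       n                                Auto varyon             True
conc_auto_on  n                                N/A                     True
conc_capable  y                                N/A                     True
EOF
)
    conc=$(printf '%s\n' "$attrs" | awk '$1 == "conc_capable" {print $2}')
    auto=$(printf '%s\n' "$attrs" | awk '$1 == "auto_on" {print $2}')
    echo "$vg: conc_capable=$conc auto_on=$auto"
done
```

On a properly configured shared VG you would expect conc_capable=y and auto_on=n, as described above.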
# 5  
Old 11-10-2014
As ibmtech said above, filesystems belong to VGs, and those VGs are part of a Resource Group inside PowerHA/HACMP.
/usr/es/sbin/cluster/utilities/clshowres shows which VGs you're sharing, see below:
Code:
 /usr/es/sbin/cluster/utilities/clshowres
Volume Groups                                       vg3 vg2
Concurrent Volume Groups
Use forced varyon for volume groups, if necessary   true
Disks
GMVG Replicated Resources                           vg3 vg2

In my case, the VGs are GLVM-replicated.
Try 'lsvg vg_fs1' and check the 'Concurrent' and 'VG Mode' fields:
Code:
Concurrent:         Enhanced-Capable         
VG Mode:            Non-Concurrent
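Those two fields can be pulled out in one pass with a simple grep. A sketch, with sample output inlined (swap the here-document for the real `lsvg vg_fs1` on the cluster):

```shell
#!/bin/sh
# Show only the concurrency-related lines of lsvg output.
# On the real cluster:  lsvg vg_fs1 | grep -E '^(Concurrent|VG Mode):'
lsvg_out=$(cat <<'EOF'
VOLUME GROUP:       vg_fs1                   VG STATE:       active
Concurrent:         Enhanced-Capable
VG Mode:            Non-Concurrent
EOF
)
printf '%s\n' "$lsvg_out" | grep -E '^(Concurrent|VG Mode):'
```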

Have you tried running an 'Extended verification'? (DON'T DO IT ON PRODUCTION.)
Please be careful, because you are on production. Try to clone your nodes, install and configure PowerHA in a lab, and do all your testing there.

# 6  
Old 11-11-2014
Hello,
Again, thanks for the info, ibmtech & igalvarez. I found man pages and other references to clRGinfo, cllsgrp, clshowres, and cl-lots-of-other-stuff, but these do not seem to be installed on the server I'm working on. Neither are their SMIT entries:

Code:
 $smit cspoc
 
 lqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqk
  x                              ERROR MESSAGE                               x
  x                                                                          x
  x Press Enter or Cancel to return to the                                   x
  x application.                                                             x
  x                                                                          x
  x   1800-007 There are currently no SMIT                                   x
  x   screen entries available for this FastPath.                            x
  x   This FastPath may require installation of                              x
  x   additional software before it can be accessed.                         x
  x                                                                          x
  x F1=Help                 F2=Refresh              F3=Cancel                x
  x Esc+8=Image             Esc+0=Exit              Enter=Do                 x
  mqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj

All I've been assigned to do is to create a particular subdirectory on a set of machines, and it seems that the parent directory (/3500/scratch) is an active one on the production cluster, which is why I'm not seeing it on the inactive node. The 'lspv' listings satisfy me of that. I'd probably get a more explicit determination from the commands you listed, were they installed.

Oh well, just another day of working with slightly-less-than-least privileges. Thanks again; I will mark this thread closed or solved if possible.
# 7  
Old 11-11-2014
A very quick and easy (maybe not 100% accurate, but quick to do) way to determine the shared VGs is to use "lsvg": do a

Code:
lsvg

on the passive node and you will get a list of all VGs known to that node. Then rerun the command with the "-o" option, which shows only the active (varied-on) VGs:

Code:
lsvg -o

and you will most likely get a shorter list. The VGs that appear in the first list but are missing from the second are (most probably) the shared ones.
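That comparison is easy to script, too. A read-only sketch, with here-documents standing in for the output of `lsvg` and `lsvg -o` on the passive node (the VG names are just the ones from this thread):

```shell
#!/bin/sh
# VGs known to the node (lsvg) but not varied on (lsvg -o) are
# most probably the shared ones.
# On the real passive node, replace the here-documents with the
# output of `lsvg` and `lsvg -o` respectively.
all_vgs=$(cat <<'EOF'
rootvg
vg_scratch
vg_fs1
EOF
)
active_vgs=$(cat <<'EOF'
rootvg
EOF
)
# Emit every VG from the full list that is absent from the active list.
shared_vgs=$(printf '%s\n' "$all_vgs" | while read -r vg; do
    printf '%s\n' "$active_vgs" | grep -qx "$vg" || echo "$vg"
done)
echo "$shared_vgs"
```

Here rootvg is varied on locally, so only vg_scratch and vg_fs1 come out as candidates for shared VGs.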

I hope this helps.

bakunin