Hello,
I am working on applications on an AIX 6.1 two-node cluster, with an active and passive node. Is there a command that will show me which mount points / file systems are shared and 'swing' from one node to the other when the active node changes, and which mount points are truly local to each node and not shared?
I can run commands with superuser access, but obviously I do not want to change/break anything, this is a production environment. Thanks for any help you can give.
OK,
That is determined at the VG (volume group) level: run lspv and see which VGs are shared.
Some VGs will show as 'concurrent' and some as 'active'; the concurrent ones are the ones shared between the nodes in the cluster.
One or more VGs are combined to form a Resource Group (RG), and that is what is defined in the cluster.
You cannot fail over a single filesystem; it has to be the whole RG.
If this is a production server, DO NOT play around with failover or fallback.
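A read-only way to check this from the command line (nothing here changes state, so it is safe on production). The disk and VG names in the sample output are illustrative only, not from this thread:

```shell
# List physical volumes with their VG and state; a VG varied on in
# concurrent mode shows "concurrent" in the state column:
lspv

# Illustrative output (names are made up):
#   hdisk0  00c1234500000001  rootvg   active
#   hdisk1  00c1234500000002  datavg   concurrent

# List all VGs known to this node, then only the varied-on ones:
lsvg
lsvg -o
```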
When I try out the lspv command on the active and inactive nodes of the cluster, I see:
on the active node and I see:
on the inactive node. I guess this means I do not have any concurrent filesystems, and that hdisk1-hdisk6 (with the same value in the 2nd field on each line) are attached to whichever node is active. The '-l' option on hdisk1 gives me:
So I guess these are all logical volumes within hdisk1.
In a cluster, any VG that is to be shared is changed to concurrent-capable.
If you run lsattr -El vg_fsX (where X=1,2,3,4,5), and the same for vg_scratch, these are the attributes you should be concerned with:
i.e. auto_varyon should be NO (so the VG is not varied on automatically at system reboot) and concurrent_capable should be YES.
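A sketch of that check (read-only; vg_fs1 stands in for whichever VG you are inspecting, and the exact attribute names can vary slightly between AIX/PowerHA levels):

```shell
# Show the VG's device attributes and pick out the auto-varyon and
# concurrent-capable flags:
lsattr -El vg_fs1 | egrep -i 'auto|conc'

# For a shared VG you would expect auto-varyon disabled (n) and
# concurrent-capable enabled (y).
```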
Try cllsgrp to list the resource groups, and clRGinfo to see the status and location of each RG in the cluster.
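Both commands live under the cluster utilities directory and are read-only, so they are safe to run on production; a minimal sketch:

```shell
# List the resource groups defined in the cluster:
/usr/es/sbin/cluster/utilities/cllsgrp

# Show the state and current location (node) of each resource group:
/usr/es/sbin/cluster/utilities/clRGinfo
```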
It's an active/passive cluster, so a VG that is varied on (active) on one node is held passively on the other node; it only becomes active there when the active node, or its network, goes down.
As ibmtech said above, filesystems belong to VGs, and those VGs are part of a 'Resource Group' inside PowerHA/HACMP. /usr/es/sbin/cluster/utilities/clshowres shows which VGs you're sharing, see below:
In my case, VGs are GLVM type.
Try 'lsvg vg_fs1' and check the 'Concurrent' and 'VG Mode' fields.
Have you tried running an 'Extended verification'? (DON'T DO IT ON PRODUCTION.)
Please be careful, because you are on production. Try to clone your nodes, install and configure PowerHA in a lab, and run all your tests there.
Hello,
Again, thanks for the info, ibmtech & igalvarez. I found man pages and other references to clRGinfo, cllsgrp, clshowres, and cl-lots-of-other-stuff, but these do not seem to be installed on the server I'm working on. Neither are their smit entries:
All I've been assigned to do is to create a particular subdirectory on a set of machines, and it seems that the parent directory (/3500/scratch) is an active one on the production cluster, so that's why I'm not seeing it on the inactive node. The 'lspv' listings satisfy me of that. I'd probably get a more explicit determination from the commands you list, were they installed.
Oh well, just another day of working with slightly-less-than-least-privileges. Thanks again, I will mark this thread closed or solved if possible.
A very quick and easy (though maybe not 100% accurate) way to determine the shared VGs is to use "lsvg": run "lsvg" on the passive node and you will get a list of all VGs known to that node. Then rerun the command with the "-o" option ("lsvg -o"), which shows only the active (varied-on) VGs, and you will get a shorter list. The VGs present in the first list but missing from the second are (most probably) the shared ones.
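That two-list comparison can be scripted with "comm". The VG names and the here-documents below are made up purely for illustration; on a real node you would feed in the actual "lsvg" and "lsvg -o" output instead:

```shell
# Sketch: find VGs known to the passive node but not varied on there.
# On a real AIX node you would use:
#   lsvg | sort > all_vgs.txt ; lsvg -o | sort > active_vgs.txt

sort > all_vgs.txt <<'EOF'
rootvg
vg_fs1
vg_scratch
EOF

sort > active_vgs.txt <<'EOF'
rootvg
EOF

# comm -23 prints lines only in the first file: VGs known to the node
# but not varied on, i.e. the (probably) shared ones.
comm -23 all_vgs.txt active_vgs.txt
# prints:
#   vg_fs1
#   vg_scratch
```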