03-07-2013
I had asked
here
Quote:
Is there any way to create a shared virtual disk between two LPARs, the way you can share SAN storage over Fibre Channel between two physical servers?
Trying to simulate HACMP between two LPARs
and received this reply
Quote:
Originally Posted by
Vit0_Corleone
It's not possible. You will get an error like "resource busy" because the disk is already reserved for some LPAR.
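That "resource busy" error comes from the SCSI reserve that AIX places on a disk by default (reserve_policy=single_path), which locks the LUN to whichever LPAR opened it first. As a quick check, assuming the backing disk appears as hdisk2 on the VIOS (the name is illustrative), the padmin shell can show the current policy:
    $ lsdev -dev hdisk2 -attr reserve_policy    # prints the current reservation policy, e.g. single_path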
My question:
If I have one VIOS machine running IVM with SAN storage connected to it, is there a way to assign one shared disk to two LPARs?
- I have one Fibre Channel card, which is owned by the VIOS (IVM)
- I will assign a disk from the storage to the VIOS
- From the VIOS, how can I assign it to two LPARs to simulate HACMP? (see the sketch below)
thanks
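For what it's worth, the usual way to do this is to disable the SCSI reserve on the backing disk and then map the same hdisk to the virtual SCSI adapters of both client LPARs. A minimal sketch from the VIOS padmin shell, assuming the SAN LUN shows up as hdisk2 and the two clients sit behind vhost0 and vhost1 (all names here are illustrative; confirm yours with lsmap -all):
    $ chdev -dev hdisk2 -attr reserve_policy=no_reserve      # release the per-LPAR reserve
    $ mkvdev -vdev hdisk2 -vadapter vhost0 -dev lparA_hd     # map the LUN to the first LPAR
    $ mkvdev -f -vdev hdisk2 -vadapter vhost1 -dev lparB_hd  # second mapping; -f may be needed to force it
    $ lsmap -all                                             # verify both virtual target devices exist
Both LPARs should then see the same disk (the PVID shown by lspv should match on both sides); for HACMP/PowerHA testing you would normally put it into an enhanced concurrent volume group rather than a plain shared VG.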