04-11-2011
How do I configure a new hdisk so that it has a GPFS filesystem on it and is added to a running GPFS cluster?
Hi,
I have a running GPFS cluster. For every mountpoint that I have created, I have one disk assigned to it. That disk is converted to an NSD and is part of the GPFS cluster.
Now I have a new disk, and the requirement is to add it to the GPFS cluster so that it becomes an NSD and has a GPFS filesystem on it. How do I do it? It is not a production system, so I can shut down the cluster as well.
Regards.
---------- Post updated at 05:23 AM ---------- Previous update was at 05:13 AM ----------
In short, I want to add another NSD to the existing cluster.
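For reference, a minimal sketch of the usual sequence on a live cluster. Assumptions (all placeholders, not from the thread): the new disk is hdisk5 and is visible on every node, the NSD is named nsd_new1, and the new filesystem is newfs mounted at /gpfs/newfs. The descriptor uses the old colon-separated format of GPFS 3.x, which matches the era of this thread:

  # Fields: DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
  # Server fields are left empty because the disk is assumed SAN-attached to all nodes.
  cat > /tmp/newdisk.desc <<'EOF'
  hdisk5:::dataAndMetadata:1:nsd_new1:
  EOF

  mmcrnsd -F /tmp/newdisk.desc   # turn hdisk5 into an NSD; mmcrnsd rewrites the file for the next command
  mmlsnsd                        # verify: nsd_new1 should now show up as a free disk
  mmcrfs /gpfs/newfs newfs -F /tmp/newdisk.desc -A yes   # create a GPFS filesystem on the new NSD
  mmmount newfs -a               # mount it on all nodes

None of these steps requires stopping the cluster. If the goal were instead to grow an existing filesystem with the new NSD rather than create a new one, mmadddisk would be the command to look at.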
10 More Discussions You Might Find Interesting
1. AIX
Dear all,
Can I create two GPFS filesystems on one node, each pointing to a single hdiskpower device? (1 Reply)
Discussion started by: thecobra151
2. AIX
Hi,
Does anyone here happen to know if I could run GLVM or GPFS on solid state disks?
I have a high-volume / high-transaction Sybase HACMP cluster, currently set up with SRDF to the DR datacentre. My business is now considering moving everything to SSD storage, but we still need to get the data to... (0 Replies)
Discussion started by: zxmaus
3. AIX
Hello,
Can someone guide me on how to create a GPFS filesystem? I've read a couple of Redbooks; however, certain things are still unclear, like whether you need to download files or licenses...
Any help would be appreciated! (2 Replies)
Discussion started by: ollie01
4. AIX
Hello, I am interested to know whether anybody uses GPFS, and is it a must to have GPFS in a PowerHA environment?
And can GPFS work as an active-active or active-passive cluster?
Thanks in advance. (0 Replies)
Discussion started by: Vit0_Corleone
5. AIX
Dear all,
For the last few days I have been searching the IBM website for GPFS 3.3, to upgrade my GPFS from 3.2 to 3.3, and I did not find the download link for GPFS 3.3 on the IBM website. Please, can anyone give me the link? (4 Replies)
Discussion started by: thecobra151
6. AIX
Hi,
I have an IBM Power series machine that has 2 VIOs and hosts 20 LPARs.
I have two LPARs on which GPFS is configured (4-5 disks).
Now these two LPARs need to be configured for HACMP (PowerHA) as well.
What is recommended? Is it possible that HACMP can be done on this config, or do I... (1 Reply)
Discussion started by: aixromeo
7. AIX
Hi
We are migrating DMX3 disks to DMX4 disks using migratepv. We are not using GPFS, but we have GPFS disks present on the server. Can anyone advise how to get rid of GPFS on both servers, cbspsrdb01 and cbspsrdb02? I will do migratepv for the other disks present on the servers... (2 Replies)
Discussion started by: newtoaixos
8. AIX
Hello, I need to test whether our product will work with GPFS filesystems, and I have some questions regarding the setup:
1. Do I need to dedicate an entire hard disk if I want to have GPFS on it? Or can I somehow split a disk into 2 virtual disks and only use 1 for GPFS?
2. If lspv returns... (4 Replies)
Discussion started by: bstring
9. AIX
We have implemented GPFS 3.5.0.10 on a 4-node cluster running AIX 6.1 TL8; the nodes are VIO clients. After that, we noticed a big delay when executing any command; for example, mmgetstate -a takes about 2.5 minutes: time mmgetstate -a Node number Node name GPFS state ... (3 Replies)
Discussion started by: thecobra151
10. AIX
Hello Gurus,
Could you please help me understand the difference between GPFS and NFS?
Thanks,
P (1 Reply)
Discussion started by: pokhraj_d
LEARN ABOUT SUSE
votequorum_overview
VOTEQUORUM_OVERVIEW(8) Corosync Cluster Engine Programmer's Manual VOTEQUORUM_OVERVIEW(8)
NAME
votequorum_overview - Votequorum Library Overview
OVERVIEW
The votequorum library is delivered with the corosync project. It is the external interface to the vote-based quorum service. This service
is optionally loaded into all nodes in a corosync cluster to avoid split-brain situations. It does this by assigning a number of votes
to each system in the cluster and ensuring that cluster operations are allowed to proceed only when a majority of the votes are present.
The library provides a mechanism to (see the corosync-quorumtool sketch after this list):
* Query the quorum status
* Get a list of nodes known to the quorum service
* Receive notifications of quorum state changes
* Change the number of votes assigned to a node
* Change the number of expected votes for a cluster to be quorate
* Connect an additional quorum device to allow small clusters to remain quorate during node outages.
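Most of these operations are also exposed at the command line by corosync-quorumtool(8), referenced below. A brief sketch, assuming the node IDs and vote counts are made-up values:

  corosync-quorumtool -s          # query the quorum status
  corosync-quorumtool -l          # list the nodes known to the quorum service
  corosync-quorumtool -v 2 -n 1   # assign 2 votes to the node with nodeid 1
  corosync-quorumtool -e 5        # set the expected votes for the cluster to 5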
votequorum reads its configuration from the objdb (a corosync.conf sketch follows the list). The following keys are read when it starts up:
* quorum.expected_votes
* quorum.votes
* quorum.quorumdev_poll
* quorum.disallowed
* quorum.two_node
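In corosync.conf these keys are set in the quorum stanza. A minimal sketch, with placeholder vote counts:

  quorum {
          provider: corosync_votequorum
          expected_votes: 3
          two_node: 0
  }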
Most of those values can be changed while corosync is running, with the following exceptions: quorum.disallowed cannot be changed, and
two_node cannot be set on the fly, though it can be cleared; i.e., you can start with two nodes in the cluster and add a third without
rebooting all the nodes.
BUGS
This software is not yet production-ready, so there may still be some bugs.
SEE ALSO
corosync-quorumtool(8), votequorum_initialize(3), votequorum_finalize(3), votequorum_fd_get(3), votequorum_dispatch(3),
votequorum_context_get(3), votequorum_context_set(3),
corosync Man Page 2009-01-26 VOTEQUORUM_OVERVIEW(8)