11-20-2012
Quote:
Originally Posted by
jstnobdy
Hello!
To your first question: GPFS allocates storage at whole-device granularity, so any device you assign to it will be consumed in its entirety as an NSD (Network Shared Disk).
Ah okay, that makes sense. If I have existing data on that device, will it wipe it all out once I add it to the cluster?
Quote:
Regarding your second question, yes, that disk is more than likely empty and therefore can be used for GPFS.
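If you want to sanity-check that a disk really is empty before handing it to GPFS, AIX makes that easy (a sketch; `hdisk4` is a placeholder device name, and both commands are read-only):

```shell
# A disk with no PVID and no volume group shows "none ... None" in lspv output.
lspv | grep hdisk4

# Inspect the first blocks for any leftover filesystem or LVM signature.
dd if=/dev/hdisk4 bs=4096 count=1 2>/dev/null | od -c | head
```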
Finally, GPFS can run on one node (a single-node quorum), and additional nodes can be added later. Make sure you have good time synchronization (xntpd) and decently performing disk storage.
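For reference, bringing up a one-node cluster looks roughly like this (a sketch; the hostname `aixnode1` and the cluster name are placeholders for your environment):

```shell
# Create a one-node cluster; the single node is both quorum node and manager.
mmcrcluster -N aixnode1:quorum-manager -C mycluster

# Accept the server license for that node, then start GPFS.
mmchlicense server --accept -N aixnode1
mmstartup -a

# Confirm the daemon reaches the "active" state.
mmgetstate -a
```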
Best regards,
jstnobdy
---------- Post updated at 09:02 PM ---------- Previous update was at 08:57 PM ----------
One more thing: GPFS is pretty tolerant, but make sure you define "dataOnly" and "metadataOnly" usage types when you create your NSDs (mmcrnsd). In this case, allocate the bulk of your storage as dataOnly NSDs and a relatively small disk as metadataOnly.
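That data/metadata split goes in the mmcrnsd input file. On GPFS 3.5 and later this is a stanza file (older releases use colon-delimited descriptor lines instead); the device names and NSD names below are placeholders:

```shell
# nsd.stanza -- one large data disk, one small metadata disk.
cat > /tmp/nsd.stanza <<'EOF'
%nsd: device=/dev/hdisk2 nsd=data01 usage=dataOnly failureGroup=1
%nsd: device=/dev/hdisk3 nsd=meta01 usage=metadataOnly failureGroup=1
EOF

# Create the NSDs, then build and mount a file system on them.
mmcrnsd -F /tmp/nsd.stanza
mmcrfs gpfs01 -F /tmp/nsd.stanza -A yes -T /gpfs01
mmmount gpfs01 -a
```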
Good luck. If you have any questions, let me know.
Thank you very much for the answer!
VOTEQUORUM_QDEVICE_MASTER_WINS(3) Corosync Cluster Engine Programmer's Manual VOTEQUORUM_QDEVICE_MASTER_WINS(3)
NAME
votequorum_qdevice_master_wins - Sets or clears quorum device master_wins flag
SYNOPSIS
#include <corosync/votequorum.h>
int votequorum_qdevice_master_wins(votequorum_handle_t handle, const char *name, unsigned int allow);
DESCRIPTION
The votequorum_qdevice_master_wins call informs votequorum whether or not the currently registered qdevice subsystem supports 'master_wins' mode
(default 0). This mode allows the qdevice to effectively take over the quorum calculations of votequorum. Any node with an active qdevice
that also has master_wins set becomes quorate regardless of the node votes of the cluster. It is left up to the qdevice subsystem itself
(not part of votequorum) to communicate across the nodes or otherwise provide some system of deciding which nodes will be part of the
quorate cluster, if any; e.g. they may be the set of nodes that has access to a quorum disk.
name The name of the currently registered quorum device on this node. This must match the existing name known to votequorum.
allow 0 (the default) means that master_wins is not active on this node. 1 means that master_wins is active on this node.
RETURN VALUE
This call returns the CS_OK value if successful, otherwise an error is returned.
ERRORS
CS_ERR_TRY_AGAIN Resource temporarily unavailable
CS_ERR_INVALID_PARAM Invalid argument
CS_ERR_ACCESS Permission denied
CS_ERR_LIBRARY The connection failed
CS_ERR_INTERRUPT System call interrupted by a signal
CS_ERR_NOT_SUPPORTED The requested protocol/functionality not supported
CS_ERR_MESSAGE_ERROR Incorrect auth message received
CS_ERR_NO_MEMORY Not enough memory to complete the requested task
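EXAMPLE
A minimal usage sketch (not part of the original page): it assumes a quorum device named "QDEVICE" has already been registered on this node via votequorum_qdevice_register(3), and it abbreviates error handling.

```c
#include <stdio.h>
#include <corosync/votequorum.h>

int main(void)
{
    votequorum_handle_t handle;
    cs_error_t err;

    /* Connect to the votequorum service; no callbacks are registered. */
    err = votequorum_initialize(&handle, NULL);
    if (err != CS_OK) {
        fprintf(stderr, "votequorum_initialize: %d\n", err);
        return 1;
    }

    /* Declare that the registered qdevice "QDEVICE" supports master_wins.
       The name must match the one given to votequorum_qdevice_register(). */
    err = votequorum_qdevice_master_wins(handle, "QDEVICE", 1);
    if (err != CS_OK)
        fprintf(stderr, "votequorum_qdevice_master_wins: %d\n", err);

    votequorum_finalize(handle);
    return err == CS_OK ? 0 : 1;
}
```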
SEE ALSO
votequorum_overview(8), votequorum_initialize(3), votequorum_finalize(3), votequorum_getinfo(3), votequorum_trackstart(3),
votequorum_trackstop(3), votequorum_fd_get(3), votequorum_dispatch(3), votequorum_context_set(3), votequorum_context_get(3),
votequorum_setexpected(3), votequorum_setvotes(3), votequorum_qdevice_register(3), votequorum_qdevice_poll(3), votequorum_qdevice_update(3)
corosync Man Page 2014-06-10 VOTEQUORUM_QDEVICE_MASTER_WINS(3)