11-20-2012
Quote:
Originally Posted by
jstnobdy
Hello!
To your first question: GPFS allocates whole devices, so any disk you hand to it is consumed entirely as an NSD (Network Shared Disk).
Ah okay, that makes sense. If I have existing data on that device, will it wipe it all out once I add it to the cluster?
Quote:
Regarding your second question, yes, that disk is more than likely empty and therefore can be used for GPFS.
Finally, GPFS can run on a single node (a single-node quorum), and additional nodes can be added later. Make sure you have good time synchronization (xntpd) and reasonably fast disk storage.
Best regards,
jstnobdy
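For reference, a single-node setup like the one described above might look something like this (the node name gpfs01 is hypothetical, and exact flags vary between GPFS releases; treat this as a sketch, not exact syntax):

```
# Sketch: create a one-node GPFS cluster ('gpfs01' is a placeholder)
mmcrcluster -N gpfs01:quorum-manager -p gpfs01   # define the cluster with one quorum node
mmchlicense server --accept -N gpfs01            # accept the server license for that node
mmstartup -a                                     # start the GPFS daemons
mmgetstate -a                                    # verify the node reports 'active'
```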
---------- Post updated at 09:02 PM ---------- Previous update was at 08:57 PM ----------
One more thing: GPFS is pretty tolerant, just make sure you define "dataOnly" and "metadataOnly" disks when you create your NSDs (mmcrnsd). In practice, allocate the bulk of your storage to dataOnly NSDs and a relatively small, fast disk for the metadataOnly NSD.
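A minimal NSD stanza file for mmcrnsd might look like the following (device names, NSD names, and the server name are all hypothetical; older GPFS releases use a colon-separated disk-descriptor format instead of stanzas):

```
# nsd.stanza -- hypothetical devices and server name
%nsd: device=/dev/hdisk2
  nsd=data_nsd01
  servers=gpfs01
  usage=dataOnly
  failureGroup=1

%nsd: device=/dev/hdisk3
  nsd=meta_nsd01
  servers=gpfs01
  usage=metadataOnly
  failureGroup=1
```

You would then feed the file to the command with something like `mmcrnsd -F nsd.stanza`.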
Good luck. If you have any questions, let me know.
Thank you very much for the answer!
LEARN ABOUT DEBIAN
votequorum_overview
VOTEQUORUM_OVERVIEW(8) Corosync Cluster Engine Programmer's Manual VOTEQUORUM_OVERVIEW(8)
NAME
votequorum_overview - Votequorum Library Overview
OVERVIEW
The votequorum library is delivered with the corosync project. It is the external interface to the vote-based quorum service. This service is optionally loaded into all nodes in a corosync cluster to avoid split-brain situations. It does this by assigning a number of votes to each system in the cluster and ensuring that cluster operations are allowed to proceed only when a majority of the votes are present.
The library provides a mechanism to:
* Query the quorum status
* Get a list of nodes known to the quorum service
* Receive notifications of quorum state changes
* Change the number of votes assigned to a node
* Change the number of expected votes for a cluster to be quorate
* Connect an additional quorum device to allow small clusters to remain quorate during node outages.
votequorum reads its configuration from the objdb. The following keys are read when it starts up:
* quorum.expected_votes
* quorum.votes
* quorum.quorumdev_poll
* quorum.disallowed
* quorum.two_node
Most of those values can be changed while corosync is running, with the following exceptions: quorum.disallowed cannot be changed, and two_node cannot be set on the fly, though it can be cleared. That is, you can start with two nodes in the cluster and add a third without rebooting all the nodes.
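In the corosync configuration file these keys correspond to the quorum section; a sketch of such a fragment follows (the vote count and two_node value are illustrative, not recommendations):

```
# corosync.conf fragment (illustrative values)
quorum {
    provider: corosync_votequorum   # load the votequorum service
    expected_votes: 3               # maps to quorum.expected_votes
    two_node: 0                     # maps to quorum.two_node; clearable at runtime, not settable
}
```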
BUGS
This software is not yet production quality, so there may still be some bugs.
SEE ALSO
corosync-quorumtool(8), votequorum_initialize(3), votequorum_finalize(3), votequorum_fd_get(3), votequorum_dispatch(3), votequorum_context_get(3), votequorum_context_set(3),
corosync Man Page 2009-01-26 VOTEQUORUM_OVERVIEW(8)