Operating Systems > AIX > Questions about setting up GPFS
Post 302733475 by bstring on Tuesday, 20 November 2012, 02:30:58 PM
Quote:
Originally Posted by jstnobdy
Hello!

To your first question: GPFS allocates devices in full, so any disk you give it will be consumed whole as an NSD.
Ah okay, that makes sense. If I have existing data on that device, will it wipe it all out once I add it to the cluster?
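
Before a disk is handed to GPFS it is worth confirming that nothing else owns it, because GPFS writes its own NSD descriptor to the device and whatever is already on it will not survive once the disk is used in a file system. A minimal check from AIX and from GPFS, assuming the candidate disk is hdisk2 (the device name is only an example):

Code:
lspv                      # an hdisk showing "None" in the volume group column belongs to no LVM VG
lquerypv -h /dev/hdisk2   # hex dump of the first bytes; all zeroes usually indicates an unused disk
mmlsnsd                   # disks the cluster has already consumed show up here as NSDs
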
Quote:
Regarding your second question, yes, that disk is more than likely empty and therefore can be used for GPFS.

Finally, GPFS can run on one node (a single-node quorum), and additional nodes can be added to it later. Make sure you have good time synchronization (xntpd) and decently performing disk storage.

Best regards,
jstnobdy


One more thing: GPFS is pretty tolerant; just make sure you define "dataOnly" and "metadataOnly" disks when you create your NSDs (mmcrnsd). In this case, allocate the bulk of your storage to dataOnly NSDs and a relatively small disk to metadataOnly.

Good luck. If you have any questions, let me know.
Thank you very much for the answer!
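
Tying the quoted steps together, a one-node cluster with separate dataOnly and metadataOnly NSDs could be brought up roughly as follows. This is only a sketch: the node name gpfsnode1, the hdisk numbers, and the cluster and file system names are placeholders, and the NSD stanza format assumes GPFS 3.5 or later (earlier releases pass a colon-separated disk descriptor file to mmcrnsd instead).

Code:
# 1. Create and start a single-node cluster (passwordless ssh to the node itself is assumed)
mmcrcluster -N gpfsnode1:quorum-manager -p gpfsnode1 \
            -r /usr/bin/ssh -R /usr/bin/scp -C testcluster
mmchlicense server --accept -N gpfsnode1
mmstartup -a
mmgetstate -a                      # the node should report "active"

# 2. Define the NSDs: bulk storage as dataOnly, a small disk as metadataOnly
cat > /tmp/nsd.stanza <<'EOF'
%nsd: device=/dev/hdisk2 nsd=data01 servers=gpfsnode1 usage=dataOnly     failureGroup=1
%nsd: device=/dev/hdisk3 nsd=meta01 servers=gpfsnode1 usage=metadataOnly failureGroup=1
EOF
mmcrnsd -F /tmp/nsd.stanza

# 3. Create a file system on those NSDs and mount it
mmcrfs gpfs01 -F /tmp/nsd.stanza -T /gpfs01 -A yes
mmmount gpfs01 -a
df -g /gpfs01                      # confirm the new file system is mounted

With a single node and no replication the failure group is little more than a label; the point of the dataOnly/metadataOnly split is that metadata can later sit on a smaller, faster disk than the bulk data.
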
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Setting com port questions

I have a piece of equipment, a Baytech RPC-3, that I need to communicate with through the com port. I normally use this equipment at home and am able to connect to it without problems using FC5 and minicom; I brought the equipment into work and cannot get my PC, using SUSE 10 and minicom, to... (0 Replies)
Discussion started by: thumper

2. AIX

two gpfs in one node issue

Dear all, can I create two GPFS file systems on one node, with each GPFS pointing to a single hdiskpower? (1 Reply)
Discussion started by: thecobra151

3. AIX

SSD with GPFS ?

Hi, does anyone here happen to know if I could run GLVM or GPFS on solid state disks? I have a high-volume/high-transaction Sybase HACMP cluster currently set up with SRDF to the DR datacentre. My business is now considering moving everything to SSD storage, but we still need to get the data to... (0 Replies)
Discussion started by: zxmaus

4. AIX

AIX GPFS Setup

Hello, can someone guide me on how to create a GPFS filesystem? I've read a couple of Redbooks, but certain things are still unclear, like whether you need to download files or licenses... any help would be appreciated! (2 Replies)
Discussion started by: ollie01

5. AIX

How to configure new hdisk such that it has a gpfs fs on it and is added to a running gpfs cluster?

Hi, I have a running GPFS cluster. For every mountpoint that I have created, I have one disk assigned to it. That disk is converted to an NSD and is part of the GPFS cluster. Now I have a new disk, and there is a requirement to add it to the GPFS cluster so that it becomes an NSD.... (1 Reply)
Discussion started by: aixromeo

6. AIX

GPFS

Hello, I am interested in whether anybody uses GPFS. Is it a must to have GPFS in a PowerHA environment? And can GPFS work as an active-active or active-passive cluster? Thanks in advance. (0 Replies)
Discussion started by: Vit0_Corleone

7. AIX

GPFS 3.3

Dear all, for the last few days I have been searching the IBM website for GPFS 3.3 to upgrade my GPFS from 3.2 to 3.3, but I did not find the download link for GPFS 3.3 on the IBM website. Please, can anyone give me the link? (4 Replies)
Discussion started by: thecobra151

8. AIX

How to get rid of GPFS ?

Hi, we are migrating DMX3 disks to DMX4 disks using migratepv. We are not using GPFS, but we have GPFS disks present on the server. Can anyone advise how to get rid of GPFS on both servers, cbspsrdb01 and cbspsrdb02? I will do migratepv for the other disks present in the servers... (2 Replies)
Discussion started by: newtoaixos

9. AIX

GPFS is too slow after installation

We have implemented GPFS 3.5.0.10 on a 4-node cluster running AIX 6.1 TL8, and the nodes are VIO clients. After that we noticed a big delay when executing any command; for example, mmgetstate -a takes about 2.5 minutes. time mmgetstate -a Node number Node name GPFS state ... (3 Replies)
Discussion started by: thecobra151

10. AIX

Difference between NFS and GPFS

Hello gurus, could you please help me understand the difference between GPFS and NFS? Thanks - P (1 Reply)
Discussion started by: pokhraj_d