We are migrating DMX3 disks to DMX4 disks using migratepv. We are not using GPFS, but GPFS disks are present on the servers. Can anyone advise how to remove GPFS from both servers, cbspsrdb01 and cbspsrdb02? I will run migratepv for the other disks on the servers, but I am worried about the GPFS disks.
Below are the outputs from my servers:
Code:
============================================================================================================================
root@cbspsrdb02 [/] #lspv
hdisk6 00c7518dcdac304d cbs_nsd02
============================================================================================================================
root@cbspsrdb01 [/] #lspv
hdisk4 00c7518dcdac304d cbs_nsd02
============================================================================================================================
root@cbspsrdb01 [/] #mmlsnsd
File system Disk name NSD servers
---------------------------------------------------------------------------
workfiles cbs_nsd02 (directly attached)
============================================================================================================================
root@cbspsrdb02 [/] #mmlsnsd
File system Disk name NSD servers
---------------------------------------------------------------------------
workfiles cbs_nsd02 (directly attached)
============================================================================================================================
root@cbspsrdb01 [/] #lslpp -l | grep -i gpfs
gpfs.base 3.3.0.3 COMMITTED GPFS File Manager
gpfs.gui 3.3.0.1 COMMITTED GPFS GUI
gpfs.msg.en_US 3.3.0.3 COMMITTED GPFS Server Messages - U.S.
gpfs.base 3.3.0.3 COMMITTED GPFS File Manager
gpfs.docs.data 3.3.0.1 COMMITTED GPFS Server Manpages and
============================================================================================================================
root@cbspsrdb02 [/] #lslpp -l | grep -i gpfs
gpfs.base 3.3.0.3 COMMITTED GPFS File Manager
gpfs.gui 3.3.0.1 COMMITTED GPFS GUI
gpfs.msg.en_US 3.3.0.3 COMMITTED GPFS Server Messages - U.S.
gpfs.base 3.3.0.3 COMMITTED GPFS File Manager
gpfs.docs.data 3.3.0.1 COMMITTED GPFS Server Manpages and
============================================================================================================================
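Since mmlsnsd shows only one file system (workfiles) on one NSD (cbs_nsd02), a typical teardown sequence would look like the sketch below. This is a sketch only, assuming the workfiles data can be destroyed; the node and fileset names are taken from the outputs above. Verify each step against your GPFS 3.3 documentation before running it, and back up anything in workfiles first.

```shell
# Sketch only -- assumes the 'workfiles' file system and its data
# are no longer needed. Run from one node (e.g. cbspsrdb01).

mmgetstate -a                 # confirm cluster/daemon state before changing anything
mmumount workfiles -a         # unmount the file system on all nodes
mmdelfs workfiles             # delete the GPFS file system
mmdelnsd cbs_nsd02            # remove the NSD; hdisk4/hdisk6 become ordinary hdisks
mmshutdown -a                 # stop GPFS on all nodes
mmdelnode -a                  # dismantle the cluster (run this last)

# Then, on each server, remove the GPFS filesets:
installp -u gpfs.base gpfs.gui gpfs.msg.en_US gpfs.docs.data
```

Once the NSD is deleted, the underlying hdisks are no longer tied to GPFS, so they can either be excluded from the migration entirely or handled like any other free disk; migratepv itself only applies to disks that belong to an LVM volume group.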