We are migrating DMX3 disks to DMX4 disks using migratepv. We are not using GPFS, but GPFS disks are present on the servers. Can anyone advise how to get rid of GPFS on both servers, cbspsrdb01 and cbspsrdb02? I will run migratepv for the other disks on the servers, but I'm worried about the GPFS disks. I have put the removal sequence I am considering after the outputs below.
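For the non-GPFS disks my plan is the standard AIX LVM migration shown below. This is only a sketch of what I intend to run; the volume group and hdisk names (datavg, hdisk10 for the old DMX3 disk, hdisk20 for the new DMX4 disk) are placeholders, not the real names on these servers:

Code:
# placeholders only - datavg / hdisk10 (old DMX3) / hdisk20 (new DMX4)
extendvg datavg hdisk20     # add the new DMX4 disk to the volume group
migratepv hdisk10 hdisk20   # move all physical partitions off the old disk
reducevg datavg hdisk10     # remove the emptied DMX3 disk from the volume group
rmdev -dl hdisk10           # delete the old disk definition from the ODM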
The outputs from my servers are below:
Code:
============================================================================================================================
root@cbspsrdb02 [/] #lspv
hdisk6 00c7518dcdac304d cbs_nsd02
============================================================================================================================
root@cbspsrdb01 [/] #lspv
hdisk4 00c7518dcdac304d cbs_nsd02
============================================================================================================================
root@cbspsrdb01 [/] #mmlsnsd
File system Disk name NSD servers
---------------------------------------------------------------------------
workfiles cbs_nsd02 (directly attached)
============================================================================================================================
root@cbspsrdb02 [/] #mmlsnsd
File system Disk name NSD servers
---------------------------------------------------------------------------
workfiles cbs_nsd02 (directly attached)
============================================================================================================================
root@cbspsrdb01 [/] #lslpp -l | grep -i gpfs
gpfs.base 3.3.0.3 COMMITTED GPFS File Manager
gpfs.gui 3.3.0.1 COMMITTED GPFS GUI
gpfs.msg.en_US 3.3.0.3 COMMITTED GPFS Server Messages - U.S.
gpfs.base 3.3.0.3 COMMITTED GPFS File Manager
gpfs.docs.data 3.3.0.1 COMMITTED GPFS Server Manpages and
============================================================================================================================
root@cbspsrdb02 [/] #lslpp -l | grep -i gpfs
gpfs.base 3.3.0.3 COMMITTED GPFS File Manager
gpfs.gui 3.3.0.1 COMMITTED GPFS GUI
gpfs.msg.en_US 3.3.0.3 COMMITTED GPFS Server Messages - U.S.
gpfs.base 3.3.0.3 COMMITTED GPFS File Manager
gpfs.docs.data 3.3.0.1 COMMITTED GPFS Server Manpages and
============================================================================================================================
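Based on the outputs above, this is the GPFS removal sequence I am considering. I have not run any of it yet, so please confirm whether it is safe or point out anything I have missed (for example, whether the fileset removal order matters):

Code:
# run from one node while the GPFS cluster is still up
mmumount workfiles -a   # unmount the GPFS file system on all nodes
mmdelfs workfiles       # delete the file system
mmdelnsd cbs_nsd02      # delete the NSD so the underlying hdisk is freed
mmshutdown -a           # stop the GPFS daemons on all nodes
mmdelnode -a            # remove all nodes, i.e. delete the cluster
# then on each of cbspsrdb01 and cbspsrdb02, remove the GPFS filesets
installp -u gpfs.base gpfs.gui gpfs.msg.en_US gpfs.docs.data

After that I expect the shared NSD disk (hdisk4 on cbspsrdb01 / hdisk6 on cbspsrdb02, PVID 00c7518dcdac304d) to be free, so I can either remove it or migrate it like the other disks. Is that correct?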