How to get rid of GPFS ?
Operating Systems AIX — post 302567813 by newtoaixos on Tuesday 25th of October 2011, 04:56:39 AM

Hi

We are migrating DMX3 disks to DMX4 disks using migratepv. We are not actively using GPFS, but GPFS disks are still present on the servers. Can anyone advise how to get rid of GPFS on both servers, cbspsrdb01 and cbspsrdb02? I will run migratepv for the other (non-GPFS) disks on the servers, but I'm worried about the GPFS disks.
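For the non-GPFS disks, the per-disk migration I'm planning follows the usual AIX LVM pattern. This is only a sketch; the volume group name (datavg) and hdisk numbers here are placeholders, not my actual configuration:

```shell
# Add the new DMX4 disk to the volume group holding the old DMX3 disk
extendvg datavg hdisk10      # hdisk10 = new DMX4 disk (placeholder name)

# Move all physical partitions off the old DMX3 disk onto the new one
migratepv hdisk2 hdisk10     # hdisk2 = old DMX3 disk (placeholder name)

# Remove the now-empty old disk from the VG and delete its device definition
reducevg datavg hdisk2
rmdev -dl hdisk2
```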

Below are the outputs from my servers:

Code:
   
============================================================================================================================
root@cbspsrdb02 [/] #lspv
hdisk6          00c7518dcdac304d                    cbs_nsd02
============================================================================================================================
root@cbspsrdb01 [/] #lspv
hdisk4          00c7518dcdac304d                    cbs_nsd02
============================================================================================================================
root@cbspsrdb01 [/] #mmlsnsd

 File system   Disk name    NSD servers
---------------------------------------------------------------------------
 workfiles     cbs_nsd02    (directly attached)
============================================================================================================================
root@cbspsrdb02 [/] #mmlsnsd

 File system   Disk name    NSD servers
---------------------------------------------------------------------------
 workfiles     cbs_nsd02    (directly attached)
============================================================================================================================
root@cbspsrdb01 [/] #lslpp -l | grep -i gpfs
  gpfs.base                  3.3.0.3  COMMITTED  GPFS File Manager
  gpfs.gui                   3.3.0.1  COMMITTED  GPFS GUI
  gpfs.msg.en_US             3.3.0.3  COMMITTED  GPFS Server Messages - U.S.
  gpfs.base                  3.3.0.3  COMMITTED  GPFS File Manager
  gpfs.docs.data             3.3.0.1  COMMITTED  GPFS Server Manpages and
============================================================================================================================
root@cbspsrdb02 [/] #lslpp -l | grep -i gpfs
  gpfs.base                  3.3.0.3  COMMITTED  GPFS File Manager
  gpfs.gui                   3.3.0.1  COMMITTED  GPFS GUI
  gpfs.msg.en_US             3.3.0.3  COMMITTED  GPFS Server Messages - U.S.
  gpfs.base                  3.3.0.3  COMMITTED  GPFS File Manager
  gpfs.docs.data             3.3.0.1  COMMITTED  GPFS Server Manpages and
============================================================================================================================
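From what I have read in the GPFS documentation, the teardown sequence below should apply here, since the outputs above show a single file system (workfiles) on a single NSD (cbs_nsd02). I have not tested this, so please confirm or correct it:

```shell
# Run on one node; mm* commands act cluster-wide.
mmumount workfiles -a    # unmount the GPFS file system on all nodes
mmdelfs workfiles        # delete the file system
mmdelnsd cbs_nsd02       # free the NSD so the hdisk becomes an ordinary disk again
mmshutdown -a            # stop the GPFS daemon on all nodes
mmdelnode -a             # remove all nodes, dissolving the cluster

# Then on each server, remove the installed filesets shown by lslpp:
installp -u gpfs.base gpfs.gui gpfs.msg.en_US gpfs.docs.data
```

After mmdelnsd, the underlying hdisks (hdisk6 on cbspsrdb02, hdisk4 on cbspsrdb01) should be plain disks that migratepv can treat like any other, if I understand correctly.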

 
