We are migrating DMX3 disks to DMX4 disks using migratepv. We are not using GPFS, but GPFS disks are present on the server. Can anyone advise how to get rid of GPFS on both servers, cbspsrdb01 and cbspsrdb02? I will run migratepv for the other disks present on the servers, but I'm worried about the GPFS disks.
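A minimal sketch of how the GPFS cleanup is usually approached before an LVM migration, assuming the GPFS filesystems really are unused. The GPFS admin commands are real, but the filesystem name (gpfs_fs1) and NSD names (gpfs1nsd, gpfs2nsd) are hypothetical placeholders; verify the actual names with mmlsnsd and mmlsfs first. With DRY_RUN=1 (the default) the script only prints what it would execute:

```shell
#!/bin/sh
# Sketch: inspect and tear down unused GPFS artifacts before migratepv.
# GPFS command names are real; the filesystem name (gpfs_fs1) and NSD
# names (gpfs1nsd, gpfs2nsd) are hypothetical placeholders.
# DRY_RUN=1 (the default) only prints what would be executed.
DRY_RUN=${DRY_RUN:-1}
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run mmlsnsd                       # which disks are NSDs, and which filesystem owns them
run mmumount all -a               # unmount all GPFS filesystems on all nodes
run mmdelfs gpfs_fs1              # delete each unused GPFS filesystem
run mmdelnsd "gpfs1nsd;gpfs2nsd"  # free the NSDs so the disks become plain hdisks again
```

Once the NSDs are deleted, the disks are ordinary hdisks again and migratepv only has to deal with the remaining LVM physical volumes.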
Hi,
does anyone here happen to know if I could run GLVM or GPFS on Solid State Disks?
I have a high-volume / high-transaction Sybase HACMP cluster currently set up with SRDF to the DR datacentre. My business is now considering moving everything to SSD storage, but we still need to get the data to... (0 Replies)
Hello,
can someone guide me on how to create a GPFS filesystem? I've read a couple of Redbooks, but some things are still unclear, for example whether you need to download additional files or licenses...
any help would be appreciated! (2 Replies)
Hi,
I have a running GPFS cluster. For every mountpoint I have created, there is one disk assigned to it. That disk is converted to an NSD and is part of the GPFS cluster.
Now I have a new disk, and the requirement is to add it to the GPFS cluster so that it becomes an NSD.... (1 Reply)
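A rough sketch of the usual flow on newer GPFS releases (older releases use colon-separated descriptor lines instead of stanza files). The %nsd stanza fields and the mmcrnsd/mmadddisk commands are real GPFS constructs; the device, server, and filesystem names below are hypothetical placeholders:

```shell
#!/bin/sh
# Sketch: describe the new disk in an NSD stanza file, then (on the cluster)
# create the NSD and add it to an existing filesystem. Device, server, and
# filesystem names are hypothetical placeholders.
cat > /tmp/newdisk.stanza <<'EOF'
%nsd:
  device=/dev/hdisk10
  nsd=nsd_new01
  servers=nodeA,nodeB
  usage=dataAndMetadata
EOF

# On a cluster node one would then run (not executed here):
#   mmcrnsd -F /tmp/newdisk.stanza         # turn the disk into an NSD
#   mmadddisk myfs -F /tmp/newdisk.stanza  # add the new NSD to filesystem myfs
cat /tmp/newdisk.stanza
```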
Hello, I am interested to know whether anybody here uses GPFS, and whether GPFS is a must in a PowerHA environment.
Also, can GPFS work as an active-active or an active-passive cluster?
thanks in advance (0 Replies)
Dear all
for the last few days I have been searching the IBM website for GPFS 3.3, to upgrade my GPFS from 3.2 to 3.3, but I could not find the download link for GPFS 3.3. Please, can anyone give me the link? (4 Replies)
Hi,
We have GPFS 3.4 Installed on two AIX 6.1 Nodes. We have 3 GPFS Mount points:
/abc01 4TB (Comprises of 14 x 300GB disks from XIV SAN)
/abc02 4TB (Comprises of 14 x 300GB disks from XIV SAN)
/abc03 1TB (Comprises of multiple 300GB disks from XIV SAN)
Now these 40... (1 Reply)
Hello, I need to test whether our product will work with GPFS filesystems and I have some questions regarding the setup:
1. Do I need to dedicate an entire hard disk if I want to have GPFS on it? Or can I somehow split a disk into two virtual disks and use only one of them for GPFS?
2. If lspv returns... (4 Replies)
We have implemented GPFS 3.5.0.10 on a 4-node cluster, AIX 6.1 TL8, and the nodes are VIO clients. Since then we have noticed a big delay when executing any command; for example, mmgetstate -a takes about 2.5 minutes. time mmgetstate -a Node number Node name GPFS state ... (3 Replies)
Hello Gurus,
Could you please explain the difference between GPFS and NFS?
Thanks-
P (1 Reply)
Discussion started by: pokhraj_d
LEARN ABOUT HPUX
vgmove
vgmove(1M)

NAME
vgmove - move data from an old set of disks in a volume group to a new set of disks
SYNOPSIS
vgmove [-A autobackup] [-p] -m diskmapfile vg_name
vgmove [-A autobackup] [-p] -f diskfile -m diskmapfile vg_name
DESCRIPTION
The vgmove command migrates data from the existing set of disks in a volume group to a new set of disks. After the command completes successfully,
the new set of disks will belong to the same volume group. The command is intended to migrate data on a volume group from old storage to
new storage. The diskmapfile specifies the list of source disks to move data from, and the list of destination disks to move data to. The
user may choose to list only a subset of the existing physical volumes in the volume group that need to be migrated to a new set of disks.
The format of the diskmapfile file is shown below:
source_pv_1 destination_pv_1_1 destination_pv_1_2 ....
source_pv_2 destination_pv_2_1 destination_pv_2_2 ....
....
source_pv_n destination_pv_n_1 destination_pv_n_2 ....
If a destination disk is not already part of the volume group, it will be added using vgextend(1M). Upon successful completion of the command, the
source disk will be automatically removed from the volume group using vgreduce(1M).
After successful migration, the destination disks are added to the LVM configuration files, namely /etc/lvmtab or /etc/lvmtab_p. The source disks,
along with their alternate links, are removed from the LVM configuration files.
A sample diskmapfile is shown below:
/dev/disk/disk1 /dev/disk/disk51 /dev/disk/disk52
/dev/disk/disk2 /dev/disk/disk51
/dev/disk/disk3 /dev/disk/disk53
The diskmapfile can be manually created, or it can be automatically generated using the -f diskfile and -m diskmapfile options. The argument
diskfile contains a list of destination disks, one per line, as in the sample file below:
/dev/disk/disk51
/dev/disk/disk52
/dev/disk/disk53
When the -f option is given, vgmove reads the list of destination disks from diskfile, generates the source to destination mapping, and saves it to
diskmapfile.
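The shape of the generated file can be illustrated with a small script. vgmove's actual source-to-destination pairing logic is not specified here, so this simple one-to-one pairing is only an approximation of the file format; the disk paths are taken from the sample files on this page:

```shell
#!/bin/sh
# Sketch: pair each source disk with one destination disk, producing a file
# in the diskmapfile format shown above. One-to-one pairing is an
# illustration only; vgmove's real pairing may differ.
cat > /tmp/src.list <<'EOF'
/dev/disk/disk1
/dev/disk/disk2
/dev/disk/disk3
EOF
cat > /tmp/vg01.diskfile <<'EOF'
/dev/disk/disk51
/dev/disk/disk52
/dev/disk/disk53
EOF
# One destination per source line, separated by a space:
paste -d ' ' /tmp/src.list /tmp/vg01.diskfile | tee /tmp/vg01.diskmapfile
```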
The volume group must be activated before running the command. If the command is interrupted before it completes, the volume group is in
the same state it was at the beginning of the command. The migration can be continued by running the command with the same options and
disk mapping file.
Options and Arguments
The command recognizes the following options and arguments:
vg_name The path name of the volume group.
-A autobackup  Set automatic backup for this invocation of vgmove.
               autobackup can have one of the following values:
               y   Automatically back up configuration changes made to the
                   volume group. This is the default. After this command
                   executes, the vgcfgbackup command is executed for the
                   volume group; see vgcfgbackup(1M).
               n   Do not back up configuration changes this time.
-m diskmapfile Specify the name of the file containing the source to
               destination disk mapping. If the -f option is also given,
               vgmove will generate the disk mapping and save it to this
               file. (Note that if the diskmapfile already exists, the file
               will be overwritten.) Otherwise, vgmove will perform the
               data migration using this diskmapfile.
-f diskfile    Specify the name of the file containing the list of
               destination disks. This option is used with the -m option to
               generate the diskmapfile. When the -f option is used, no
               volume group data is moved.
-p             Preview the actions to be taken but do not move any volume
               group data.
Shared Volume Group Considerations
For volume group versions 1.0 and 2.0, vgmove cannot be used if the volume group is activated in shared mode. For volume group versions 2.1 (or
higher), vgmove can be performed when the volume group is activated in shared, exclusive, or standalone mode.
Note that the lvmpud daemon must be running on all the nodes sharing a volume group activated in shared mode. See lvmpud(1M).
When a node wants to share the volume group, the user must first refresh that node's LVM configuration if physical volumes were moved in or out of
the volume group while the volume group was not activated on that node.
LVM shared mode is currently only available in Serviceguard clusters.
EXTERNAL INFLUENCES
Environment Variables
LANG determines the language in which messages are displayed.
If LANG is not specified or is null, it defaults to "C" (see lang(5)).
If any internationalization variable contains an invalid setting, all internationalization variables default to "C" (see environ(5)).
EXAMPLES
Move data in volume group vg01 from /dev/disk/disk1 to /dev/disk/disk51. After the migration, /dev/disk/disk1 is removed from the volume group:

     vgmove -m /tmp/vg01.diskmapfile vg01

Generate a source to destination disk map file for vg01, where the destination disks are /dev/disk/disk51 and /dev/disk/disk52:

     vgmove -f /tmp/vg01.diskfile -m /tmp/vg01.diskmapfile vg01
SEE ALSO lvmpud(1M), pvmove(1M), vgcfgbackup(1M), vgcfgrestore(1M), vgextend(1M), vgreduce(1M), intro(7), lvm(7).
vgmove(1M)