03-12-2012
Exactly
Quote: Originally Posted by enzote
The disks are in concurrent mode. If I run lspv from one node I get:
# lspv
...
hdisk3 00f6317b414624c2 vg_data concurrent
hdisk4 00f6317b41466700 vg_data concurrent
...
I tried export/import.
The disks are mapped through VSCSI.
I will run a test: unmapping the disks, running cfgdev on the VIOS, and then remapping them.
Thanks!
Enzote
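As a quick aside, the concurrent-mode PVs can be picked out of `lspv` output with a one-liner. This is only a sketch: the sample output is inlined in a function here so the pipeline can be demonstrated off-box; on the node itself you would pipe `lspv` directly into the awk filter.

```shell
# Sketch: list only the PVs that belong to vg_data and show their mode.
# sample_lspv stands in for the real `lspv` command (assumed sample data).
sample_lspv() {
cat <<'EOF'
hdisk0          00f6317b00000001            rootvg          active
hdisk3          00f6317b414624c2            vg_data         concurrent
hdisk4          00f6317b41466700            vg_data         concurrent
EOF
}
# Field 3 is the volume group, field 4 the state.
sample_lspv | awk '$3 == "vg_data" { print $1, $4 }'
# -> hdisk3 concurrent
#    hdisk4 concurrent
```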
---------- Post updated at 09:41 AM ---------- Previous update was at 08:12 AM ----------
Hello!
This procedure works:
1.- Stop PowerHA services on both nodes.
2.- In VIOS A:
$ rmdev -dev vtd_data_A
$ rmdev -dev vtd_data_B
$ oem_setup_env
# rmdev -dl hdisk1
# rmdev -dl hdisk2
# cfgmgr
# chdev -l hdisk1 -a reserve_policy=no_reserve
# chdev -l hdisk2 -a reserve_policy=no_reserve
# exit
$ mkvdev -vdev hdisk1 -vadapter vhost0 -dev vtd_data_A
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtd_data_B
3.- Repeat step 2 on VIOS B.
4.- On node A:
# rmdev -dl hdisk3
# rmdev -dl hdisk4
# cfgmgr
# chdev -l hdisk3 -a reserve_policy=no_reserve
# chdev -l hdisk4 -a reserve_policy=no_reserve
# chdev -l hdisk3 -a queue_depth=10
# chdev -l hdisk4 -a queue_depth=10
# varyonvg vg_data
0516-1434 varyonvg: Following physical volumes appear to be grown in size.
Run chvg command to activate the new space.
hdisk3 hdisk4
# chvg -g vg_data
# varyoffvg vg_data
5.- Repeat step 4 on node B, skipping the chvg -g step.
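The node-side step 4 above can be wrapped in a small script. This is a sketch only: the commands are the AIX ones from the procedure, the disk and VG names are the ones from this thread, and a DRY_RUN guard (my addition, on by default) prints each command instead of executing it, so you can review the sequence before running it for real.

```shell
#!/bin/sh
# Sketch of step 4: rediscover the grown VSCSI disks on one PowerHA node.
# DRY_RUN=1 (the default) echoes each command instead of executing it.
DISKS="hdisk3 hdisk4"
VG="vg_data"
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$*"; else "$@"; fi; }

for d in $DISKS; do
  run rmdev -dl "$d"                           # drop the stale device definition
done
run cfgmgr                                     # rediscover the disks at their new size
for d in $DISKS; do
  run chdev -l "$d" -a reserve_policy=no_reserve
  run chdev -l "$d" -a queue_depth=10
done
run varyonvg "$VG"
run chvg -g "$VG"                              # grow the VG into the new space (first node only)
run varyoffvg "$VG"
```

On the second node the `chvg -g` line is omitted, matching step 5.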
Thanks for the support!
Enzote
This is why I asked whether they were mapped as VSCSI, since the new disks needed re-mapping.
vgmove(1M) vgmove(1M)
NAME
vgmove - move data from an old set of disks in a volume group to a new set of disks
SYNOPSIS
vgmove [-A autobackup] [-p] -m diskmapfile vg_name
vgmove [-A autobackup] [-p] -f diskfile -m diskmapfile vg_name
DESCRIPTION
The vgmove command migrates data from the existing set of disks in a volume group to a new set of disks. After the command completes successfully,
the new set of disks will belong to the same volume group. The command is intended to migrate data on a volume group from old storage to
new storage. The diskmapfile specifies the list of source disks to move data from and the list of destination disks to move data to. The
user may choose to list only a subset of the existing physical volumes in the volume group that need to be migrated to a new set of disks.
The format of the diskmapfile file is shown below:
source_pv_1 destination_pv_1_1 destination_pv_1_2 ....
source_pv_2 destination_pv_2_1 destination_pv_2_2 ....
....
source_pv_n destination_pv_n_1 destination_pv_n_2 ....
If a destination disk is not already part of the volume group, it will be added using vgextend(1M). Upon successful completion of the move,
the source disk will be automatically removed from the volume group using vgreduce(1M).
After a successful migration, the destination disks are added to the LVM configuration files, namely /etc/lvmtab or /etc/lvmtab_p. The
source disks along with their alternate links are removed from the LVM configuration files.
A sample diskmapfile is shown below:
/dev/disk/disk1 /dev/disk/disk51 /dev/disk/disk52
/dev/disk/disk2 /dev/disk/disk51
/dev/disk/disk3 /dev/disk/disk53
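A diskmapfile in the format above (one source PV followed by one or more destination PVs per line) can be sanity-checked before it is handed to vgmove. The following is a sketch, not part of vgmove itself; the helper name and the /tmp paths are assumptions for illustration.

```shell
# Sketch: verify every non-empty diskmapfile line names a source PV
# plus at least one destination PV (i.e. two or more fields).
check_diskmapfile() {
  awk 'NF == 1 { print "line " NR ": missing destination PV"; bad = 1 }
       END { exit bad }' "$1"
}

# Example run against the sample mapping shown above:
cat > /tmp/diskmap.sample <<'EOF'
/dev/disk/disk1 /dev/disk/disk51 /dev/disk/disk52
/dev/disk/disk2 /dev/disk/disk51
/dev/disk/disk3 /dev/disk/disk53
EOF
check_diskmapfile /tmp/diskmap.sample && echo "diskmapfile OK"
# -> diskmapfile OK
```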
The diskmapfile can be manually created, or it can be automatically generated using the -f and -m options. The argument
diskfile contains a list of destination disks, one per line, such as the sample file below:
/dev/disk/disk51
/dev/disk/disk52
/dev/disk/disk53
When the -f option is given, vgmove reads a list of destination disks from diskfile, generates the source to destination mapping, and saves
it to diskmapfile.
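The -f mode can be pictured as pairing each source PV with a destination from the diskfile. The naive one-to-one pairing below is only an illustration of the file formats involved; vgmove's real mapping also accounts for disk sizes, and the helper and /tmp file names are assumptions.

```shell
# Sketch: build a naive one-to-one source->destination diskmapfile from a
# list of source PVs and a diskfile of destination PVs. vgmove itself may
# map one source to several destinations; this only shows the file shapes.
make_diskmapfile() {
  # $1 = file listing source PVs, $2 = diskfile listing destination PVs
  paste -d ' ' "$1" "$2"
}

printf '%s\n' /dev/disk/disk1 /dev/disk/disk2 > /tmp/sources
printf '%s\n' /dev/disk/disk51 /dev/disk/disk52 > /tmp/diskfile
make_diskmapfile /tmp/sources /tmp/diskfile
# -> /dev/disk/disk1 /dev/disk/disk51
#    /dev/disk/disk2 /dev/disk/disk52
```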
The volume group must be activated before running the command. If the command is interrupted before it completes, the volume group is left in
the same state it was in at the beginning of the command. The migration can be continued by running the command again with the same options
and disk mapping file.
Options and Arguments
The vgmove command recognizes the following options and arguments:
vg_name         The path name of the volume group.
-A autobackup   Set automatic backup for this invocation of vgmove.
                autobackup can have one of the following values:
                y    Automatically back up configuration changes made to the
                     volume group. This is the default. After this command
                     executes, the vgcfgbackup command is executed for the
                     volume group; see vgcfgbackup(1M).
                n    Do not back up configuration changes this time.
-m diskmapfile  Specify the name of the file containing the source to
                destination disk mapping. If the -f option is also given,
                vgmove will generate the disk mapping and save it to this
                file. (Note that if the diskmapfile already exists, the file
                will be overwritten.) Otherwise, vgmove will perform the
                data migration using this diskmapfile.
-f diskfile     Specify the name of the file containing the list of
                destination disks. This option is used with the -m option to
                generate the diskmapfile. When the -f option is used, no
                volume group data is moved.
-p              Preview the actions to be taken but do not move any volume
                group data.
Shared Volume Group Considerations
For volume group versions 1.0 and 2.0, vgmove cannot be used if the volume group is activated in shared mode. For volume groups version 2.1
(or higher), vgmove can be performed when the volume group is activated in either shared, exclusive, or standalone mode.
Note that the lvmpud daemon must be running on all the nodes sharing a volume group activated in shared mode. See lvmpud(1M).
When a node wants to share the volume group, the user must first refresh the LVM configuration on that node if physical volumes were moved
in or out of the volume group while the volume group was not activated on that node.
LVM shared mode is currently only available in Serviceguard clusters.
EXTERNAL INFLUENCES
Environment Variables
LANG determines the language in which messages are displayed.
If LANG is not specified or is null, it defaults to "C" (see lang(5)).
If any internationalization variable contains an invalid setting, all internationalization variables default to "C" (see environ(5)).
EXAMPLES
Move data in the volume group /dev/vg01 from /dev/disk/disk1 to /dev/disk/disk51, using the mapping in /tmp/diskmap. After the migration,
remove /dev/disk/disk1 from the volume group:
vgmove -m /tmp/diskmap /dev/vg01
vgreduce /dev/vg01 /dev/disk/disk1
Generate a source to destination disk map file for /dev/vg01 where the destination disks are /dev/disk/disk51 and /dev/disk/disk52:
vgmove -p -f /tmp/diskfile -m /tmp/diskmap /dev/vg01
SEE ALSO
lvmpud(1M), pvmove(1M), vgcfgbackup(1M), vgcfgrestore(1M), vgextend(1M), vgreduce(1M), intro(7), lvm(7).
vgmove(1M)