03-12-2012
Exactly
Quote: Originally Posted by enzote
The disks are in concurrent mode. If I run lspv from one node, I get:
# lspv
...
hdisk3 00f6317b414624c2 vg_data concurrent
hdisk4 00f6317b41466700 vg_data concurrent
...
I tried export/import.
The disks are mapped through VSCSI.
I will run a test: unmapping the disks, running cfgdev on the VIOS, and then remapping them.
Thanks!
Enzote
---------- Post updated at 09:41 AM ---------- Previous update was at 08:12 AM ----------
Hello!
This procedure works:
1.- Stop PowerHA services on both nodes.
2.- In VIOS A:
$ rmdev -dev vtd_data_A
$ rmdev -dev vtd_data_B
$ oem_setup_env
# rmdev -dl hdisk1
# rmdev -dl hdisk2
# cfgmgr
# chdev -l hdisk1 -a reserve_policy=no_reserve
# chdev -l hdisk2 -a reserve_policy=no_reserve
# exit
$ mkvdev -vdev hdisk1 -vadapter vhost0 -dev vtd_data_A
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev vtd_data_B
3.- Repeat step 2 on VIOS B.
4.- On node A:
# rmdev -dl hdisk3
# rmdev -dl hdisk4
# cfgmgr
# chdev -l hdisk3 -a reserve_policy=no_reserve
# chdev -l hdisk4 -a reserve_policy=no_reserve
# chdev -l hdisk3 -a queue_depth=10
# chdev -l hdisk4 -a queue_depth=10
# varyonvg vg_data
0516-1434 varyonvg: Following physical volumes appear to be grown in size.
Run chvg command to activate the new space.
hdisk3 hdisk4
# chvg -g vg_data
# varyoffvg vg_data
5.- Repeat step 4 on node B, except the chvg -g (it only needs to run once).
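Before running varyonvg in step 4, it can help to confirm the node actually sees the grown size after the rmdev/cfgmgr rescan; a minimal check, assuming the hdisk names from this thread and a root shell on AIX:

```shell
# Print the size AIX currently reports for each shared disk (in MB).
# If bootinfo -s still shows the old value, the rescan has not picked
# up the resized LUN yet and chvg -g would have nothing to activate.
for d in hdisk3 hdisk4; do
    printf "%s: %s MB\n" "$d" "$(bootinfo -s "$d")"
done

# The PVIDs shown here must match on both cluster nodes for the shared VG.
lspv | grep -E 'hdisk3|hdisk4'
```

If the size is wrong only on one node, repeat the rmdev/cfgmgr cycle there before touching the volume group.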
Thanks for the support!
Enzote
This is why I asked whether they were mapped as VSCSI: the resized disks needed re-mapping.