Unable to remove VIOS disk


 
# 1  
Old 11-04-2014

Hello,

I am unable to remove a disk. Whenever I remove the disk using
Code:
rmdev -dl hdisk2

or
Code:
rmdev -Rdl hdisk2

the disk comes back when I run
Code:
cfgmgr

but I am unable to create any volume group on it:

Code:
# mkvg -y foovg hdisk2
0516-008 /usr/sbin/mkvg: LVM system call returned an unknown
        error code (-267).
0516-1184 /usr/sbin/mkvg: IO failure on hdisk2.
0516-862 /usr/sbin/mkvg: Unable to create volume group.

yet the disk still shows up in lspv:
Code:
#lspv
hdisk9          00c7780e5a93d490                    rootvg          active

hdisk2          none                                None

and the errpt shows

Code:

root@:/>errpt -a | more

DISK DRIVE
DISK DRIVE ELECTRONICS

        Recommended Actions
        PERFORM PROBLEM DETERMINATION PROCEDURES

Detail Data
PATH ID
           0
SENSE DATA
0A00 2800 0000 0000 0000 0104 0000 0000 0000 0000 0000 0000 0102 0000 7000 0400
0000 000A 0000 0000 3E01 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0200 0000 0000 0000 0000 0000 0000 0000 0083 0000
0000 0027 0017
---------------------------------------------------------------------------
LABEL:          SC_DISK_ERR2
IDENTIFIER:     B6267342

Date/Time:       Tue Nov  4 16:17:28 SAUST 2014

# lsdev -Cc disk

hdisk2  Available  Virtual SCSI Disk Drive
hdisk3  Defined    Virtual SCSI Disk Drive

How can I remove this bad disk?
# 2  
Old 11-04-2014
Hi,

What is the output of:

Code:
# lspv
# chdev -l hdisk2 -a pv=clear
# chdev -l hdisk2 -a pv=yes
# lspv

If you see the PVID/status change, it is an indication that you can at least access the disk correctly. This might go back to the IBM write-and-verify problem.

Regards

Dave
# 3  
Old 11-04-2014
It doesn't work.
Actually the disk doesn't exist, but somehow it is still showing up in the LPAR:

Code:
root@clodb:/>chdev -l hdisk2 -a pv=clear
Method error (/etc/methods/chgdisk):
        0514-047 Cannot access a device.
     pv

root@clodb:/>chdev -l hdisk2 -a pv=yes
Method error (/etc/methods/chgdisk):
        0514-047 Cannot access a device.
     pv

root@clodb:/>lspv
hdisk9          00c7780e5a93d490                    rootvg          active
hdisk10         00c7780e9a335af3                    backupvg        active
hdisk13         00c7780eb79e72f6                    oradbvg         active
hdisk11         00c7780e723bb1e0                    bkclodbvg       active
hdisk2          none                                None
root@clodb:/>

I also did an ODM delete and restarted the machines; the hdisk then reappeared as hdisk0, but it still will not go away.
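
For reference, a minimal sketch of the kind of ODM cleanup I mean (the usual odmdelete sequence for a stuck hdisk; not every object class will have entries for the disk):
Code:
# remove the stale device definition and its attributes from the ODM
odmdelete -o CuAt  -q "name=hdisk2"
odmdelete -o CuDv  -q "name=hdisk2"
odmdelete -o CuDep -q "name=hdisk2"
odmdelete -o CuVPD -q "name=hdisk2"
odmdelete -o CuPath -q "name=hdisk2"
# sync the changed ODM back into the boot image
savebase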
# 4  
Old 11-04-2014
Hi,

I think that you have an exclusive lock, either held by another LPAR or by the VIO server. You may want to check with the SAN team that the zoning is correct and that the LUN hasn't been zoned to another server/VIO.
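
To give the SAN team something to match against, a quick sketch of how I would trace the client disk back to its backing device (hdisk2 is from your lspv output; vscsi0 and hdiskN are placeholders, adjust to your config):
Code:
# on the client LPAR: which vscsi adapter the disk sits behind
lsdev -l hdisk2 -F parent
lscfg -vl vscsi0

# on the VIO server, as padmin: what is mapped down the matching vhost
lsmap -all
# and the VPD/LUN details of the backing hdisk
lsdev -dev hdiskN -vpd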

Regards

Dave
# 5  
Old 11-04-2014
Yes, you are right. The disk was coming from the SAN to the VIO server, and from the VIO server to the LPAR.

The SAN connection has now been removed from the pSeries machine, and I tried to remove the disk from the VIO server, but couldn't:

Code:
# lspv
hdisk0          00c7780e79838606                    rootvg          active
hdisk8          00c7780e8945b5bb                    patchtest       active
hdisk9          00c7780e8945b5bb                    patchtest       active

# lsdev -Cc disk
hdisk0 Available 09-08-00-3,0 16 Bit LVD SCSI Disk Drive
hdisk8 Available 0A-09-02     MPIO Other DS4K Array Disk
hdisk9 Available 0A-09-02     MPIO Other DS4K Array Disk



# varyoffvg patchtest
0516-062 lqueryvg: Unable to read or write logical volume manager
        record. PV may be permanently corrupted. Run diagnostics
0516-012 lvaryoffvg: Logical volume must be closed.  If the logical
        volume contains a filesystem, the umount command will close
        the LV device.
0516-942 varyoffvg: Unable to vary off volume group patchtest.

How can I remove the volume group and the disks? Thanks.
# 6  
Old 11-04-2014
I am confused as to how the disk is being presented to the client.
Have you created a VG on the VIOS, created LVs, and given those LVs to the client as vSCSI disks? Or have you mapped the whole disk to the client?
If it is the latter, then you should not be creating a VG on that disk on the VIOS. The two mapping styles are sketched below.
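
A minimal sketch of the two mapping styles on the VIOS, as padmin (vhost2 and the VG/LV names are only examples, not your actual configuration):
Code:
# style 1: map the whole physical disk straight to the client
mkvdev -vdev hdisk8 -vadapter vhost2 -dev vtscsi_hd8

# style 2: build a VG on the VIOS, carve out an LV, and map the LV
mkvg -vg testvg hdisk8
mklv -lv testpatch_lv testvg 20G
mkvdev -vdev testpatch_lv -vadapter vhost2 -dev vtscsi_lv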

Can you provide the below info
Code:
On VIOS (as padmin)
lsdev -slots
lsmap -vadapter vhostX        (X = the vhost adapter serving that client)
lspv -free

Now as root
lsvg -l patchtest
lsvg -p patchtest

On the client, run this:
Code:
lsvg 
lsvg -o
lspv
lsdev -Cc disk
df -g

# 7  
Old 11-04-2014
The disks were created on the SAN and then mapped to the VIO server. From the VIO server they were mapped to the LPAR.

The disks in question on the VIO server are hdisk8 and hdisk9, which hold the volume group called patchtest. I was able to remove hdisk8 with the rmdev -dl command, but I could not remove hdisk9.

Code:
$ lsdev -slots
# Slot                    Description       Device(s)
U787B.001.DNW3313-P1-C1   Logical I/O Slot  pci10 fcs0 fcs1
U787B.001.DNW3313-P1-C2   Logical I/O Slot  pci11 sisscsia1
U787B.001.DNW3313-P1-C3   Logical I/O Slot  pci4 pci5 lai0
U787B.001.DNW3313-P1-C4   Logical I/O Slot  pci6 sisioa0
U787B.001.DNW3313-P1-C5   Logical I/O Slot  pci7
U787B.001.DNW3313-P1-T7   Logical I/O Slot  pci2 usbhc0 usbhc1 usbhc2
U787B.001.DNW3313-P1-T9   Logical I/O Slot  pci8 ent0 ent1
U787B.001.DNW3313-P1-T14  Logical I/O Slot  pci9 sisscsia0
U787B.001.DNW3313-P1-T16  Logical I/O Slot  pci3 ide0
U9113.550.107780E-V1-C2   Virtual I/O Slot  ibmvmc0
U9113.550.107780E-V1-C3   Virtual I/O Slot  ent2
U9113.550.107780E-V1-C4   Virtual I/O Slot  ent3
U9113.550.107780E-V1-C5   Virtual I/O Slot  ent4
U9113.550.107780E-V1-C6   Virtual I/O Slot  ent5
U9113.550.107780E-V1-C10  Virtual I/O Slot  vts0
U9113.550.107780E-V1-C11  Virtual I/O Slot  vhost0
U9113.550.107780E-V1-C12  Virtual I/O Slot  vts1
U9113.550.107780E-V1-C13  Virtual I/O Slot  vhost1
U9113.550.107780E-V1-C14  Virtual I/O Slot  vts2
U9113.550.107780E-V1-C15  Virtual I/O Slot  vhost2
U9113.550.107780E-V1-C16  Virtual I/O Slot  vts3
U9113.550.107780E-V1-C17  Virtual I/O Slot  vhost3
U9113.550.107780E-V1-C18  Virtual I/O Slot  vts4
U9113.550.107780E-V1-C19  Virtual I/O Slot  vhost4
U9113.550.107780E-V1-C20  Virtual I/O Slot  vts5
U9113.550.107780E-V1-C21  Virtual I/O Slot  vhost5
U9113.550.107780E-V1-C22  Virtual I/O Slot  vts6
U9113.550.107780E-V1-C23  Virtual I/O Slot  vhost6
U9113.550.107780E-V1-C24  Virtual I/O Slot  vts7
U9113.550.107780E-V1-C25  Virtual I/O Slot  vhost7
U9113.550.107780E-V1-C26  Virtual I/O Slot  vts8
U9113.550.107780E-V1-C27  Virtual I/O Slot  vhost8
U9113.550.107780E-V1-C28  Virtual I/O Slot  vts9
U9113.550.107780E-V1-C29  Virtual I/O Slot  vhost9



$ lsmap -all
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
..........................................................................................

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost2          U9113.550.107780E-V1-C15                     0x00000004

VTD                   vtopt0
Status                Available
LUN                   0x8100000000000000
Backing device
Physloc

VTD                   vtscsi0
Status                Available
LUN                   0x8500000000000000
Backing device        clodba
Physloc

VTD                   vtscsi5
Status                Available
LUN                   0x8200000000000000
Backing device        rootvg_61_2
Physloc

VTD                   vtscsi14
Status                Available
LUN                   0x8600000000000000
Backing device        bkclodb
Physloc

VTD                   vtscsi30
Status                Available
LUN                   0x8300000000000000
Backing device        test_compress
Physloc

VTD                   vtscsi31
Status                Available
LUN                   0x8400000000000000
Backing device        testpatch
Physloc

SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- ------------------
vhost3          U9113.550.107780E-V1-C17                     0x00000005


$ lspv -free
NAME            PVID                                SIZE(megabytes)
hdisk7          00c7780e5293914b                    286102

$ oem_setup_env
# lsvg -l patchtest
0516-062 : Unable to read or write logical volume manager
        record. PV may be permanently corrupted. Run diagnostics
# lsvg -p patchtest
0516-062 : Unable to read or write logical volume manager
        record. PV may be permanently corrupted. Run diagnostics

On the client, the disk that has the problem is hdisk0:

Code:
root@clodb:/>lspv
hdisk9          00c7780e5a93d490                    rootvg          active
hdisk10         00c7780e9a335af3                    backupvg        active
hdisk13         00c7780eb79e72f6                    oradbvg         active
hdisk11         00c7780e723bb1e0                    bkclodbvg       active
hdisk0          none                                None
root@clodb:/>lsvg
rootvg
backupvg
oradbvg
bkclodbvg
root@clodb:/>lsvg -o
oradbvg
backupvg
bkclodbvg
rootvg
root@clodb:/>lsdev -Cc disk
hdisk0  Available  Virtual SCSI Disk Drive
hdisk3  Defined    Virtual SCSI Disk Drive
hdisk4  Defined    Virtual SCSI Disk Drive
hdisk5  Defined    Virtual SCSI Disk Drive
hdisk6  Defined    Virtual SCSI Disk Drive
hdisk7  Defined    Virtual SCSI Disk Drive
hdisk8  Defined    Virtual SCSI Disk Drive
hdisk9  Available  Virtual SCSI Disk Drive
hdisk10 Available  Virtual SCSI Disk Drive
hdisk11 Available  Virtual SCSI Disk Drive
hdisk12 Defined    Virtual SCSI Disk Drive
hdisk13 Available  Virtual SCSI Disk Drive
root@clodb:/>df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/hd4           1.50      1.26   17%     2262     1% /
/dev/hd2           3.00      0.98   68%    34083    13% /usr
/dev/hd9var        1.00      0.67   34%     4650     3% /var
/dev/hd3           1.00      0.99    1%       61     1% /tmp
/dev/hd1           1.00      0.77   23%       76     1% /home
/proc                 -         -    -         -     -  /proc
/dev/hd10opt       0.50      0.35   30%     3710     5% /opt
/dev/livedump      0.50      0.50    1%        4     1% /var/adm/ras/livedump
/dev/fslv03       19.00     16.85   12%        6     1% /backup
/dev/fslv00      130.00      4.20   97%      421     1% /oradata
/dev/fslv01       25.00      1.70   94%       29     1% /oradata2
/dev/fslv02       14.00      2.39   83%   133321    20% /oratech
/dev/fslv04       24.00      1.47   94%        7     1% /bkclodb
root@clodb:/>


How can I get it removed from the VIO server?
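
Based on the replies above, this is the sequence I plan to try (a sketch only; it assumes test_compress and testpatch are LVs in patchtest, and that vtscsi30/vtscsi31 are the mappings keeping those LVs open):
Code:
# on the VIOS, as padmin: remove the virtual target devices that still
# present the patchtest LVs to the client
rmvdev -vtd vtscsi30
rmvdev -vtd vtscsi31

# then as root (oem_setup_env): drop the corrupted VG definition from
# the ODM without touching the disk, and remove the device
varyoffvg patchtest
exportvg patchtest
rmdev -dl hdisk9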