Unable to remove VIOS disk


 
# 8  
Old 11-04-2014
OK, so the LV is presented as a disk to the client.
Do this on the VIOS (as root):
Code:
lsvg -o
lsvg -l <vgname>

In which VG do you find the LV testpatch?
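If you only need the owning VG for a single LV, a quicker check (a sketch using the LV name from this thread; run as root, e.g. via oem_setup_env) is:
Code:
lslv testpatch | grep "VOLUME GROUP"

The first line of the lslv output names both the logical volume and the volume group it belongs to.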

Now go to the client and run:
rmdev -Rdl hdisk0

Now go back to the VIOS (as padmin) and run
rmvdev -vtd vtscsi31
The above command removes the vhost2 mapping for that LV.
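If you want to confirm the VTD and its backing device before and after removing it (vhost2 and vtscsi31 are the names used in this thread), something like this as padmin should do it:
Code:
lsmap -vadapter vhost2    # before: vtscsi31 should show testpatch as its backing device
rmvdev -vtd vtscsi31
lsmap -vadapter vhost2    # after: the vtscsi31 entry should be gone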

If you run cfgmgr on the client now, hdisk0 will no longer be found.
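A quick way to double-check on the client side (standard AIX; hdisk0 is the device name from this thread):
Code:
lsdev -Cc disk    # hdisk0 should no longer be listed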

Now remove the LV testpatch from the VIOS:
rmlv -f testpatch
If the VG has NO more LVs mapped to any other partition, it will vary off on its own; if not, vary it off yourself:
Code:
varyoffvg <vgname>
exportvg <vgname>
rmdev -Rdl hdisk9
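A quick sanity check afterwards (still as root on the VIOS, using the names from this thread) would be:
Code:
lsvg     # the exported VG should no longer be listed
lspv     # hdisk9 should be gone after rmdev -Rdl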

# 9  
Old 11-04-2014
Okay, thanks.
On the client side, hdisk0 is gone after removing it; however, on the VIOS side:

Code:
$ rmvdev -vtd vtscsi31

$ rmlv -f testpatch
*******************************************************************************
The command's response was not recognized.  This may or may not indicate a problem.
*******************************************************************************
*******************************************************************************
The command's response was not recognized.  This may or may not indicate a problem.
*******************************************************************************
rmlv: Unable to remove logical volume testpatch.

$ oem_setup_env

# rmlv -f testpatch
0516-062 lquerylv: Unable to read or write logical volume manager
        record. PV may be permanently corrupted. Run diagnostics
0516-062 lqueryvg: Unable to read or write logical volume manager
        record. PV may be permanently corrupted. Run diagnostics
0516-912 rmlv: Unable to remove logical volume testpatch.
# varyoffvg testpatch
0516-306 getlvodm: Unable to find volume group testpatch in the Device
        Configuration Database.
0516-942 varyoffvg: Unable to vary off volume group testpatch.
# lspv
hdisk0          00c7780e79838606                    rootvg          active
hdisk1          00c7780e2e21ec86                    diskpool_4      active
hdisk2          00c7780ea5bd16bb                    diskpool_4      active
hdisk3          00c7780ee224f286                    disk_pool_5     active
hdisk4          00c7780e1b75933b                    diskpool_3      active
hdisk5          00c7780ece91bde2                    diskpool_2      active
hdisk6          00c7780ec2b65f4d                    diskpool_1      active
hdisk7          00c7780e5293914b                    None
hdisk9          00c7780e8945b5bb                    patchtest       active

Unable to remove it; the VG is patchtest.
# 10  
Old 11-04-2014
Ok, what is the output of
lsvg -l patchtest
# 11  
Old 11-04-2014
It is corrupt or damaged. The output is:

Code:
# lsvg -l patchtest
0516-062 : Unable to read or write logical volume manager
        record. PV may be permanently corrupted. Run diagnostics

# 12  
Old 11-04-2014
I feel there is at least one more LV (maybe more) that is still assigned to a client.
OK, do this on the VIOS as root:
Code:
lsfs
lsvg -l `lsvg`

Compare the output of those two and see which LV is missing from the lsvg -l `lsvg` output. Look for that LV and check whether it is assigned as a backing device to any other client.
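One way to automate that comparison (a rough sketch; the temp-file names are arbitrary, and it assumes the LVM error text goes to stderr):
Code:
# LV names referenced by filesystems, with the /dev/ prefix stripped
lsfs | awk '$1 ~ /^\/dev\// { sub("^/dev/", "", $1); print $1 }' | sort > /tmp/fs.lvs
# LV names the active VGs actually report (skip VG headers and column headers)
lsvg -l `lsvg` 2>/dev/null | awk '$1 !~ /:$/ && $1 != "LV" && $1 !~ /^[0-9]/ { print $1 }' | sort > /tmp/vg.lvs
# anything only in the first list belongs to a VG that cannot be read
comm -23 /tmp/fs.lvs /tmp/vg.lvs

In this thread that comparison would point at fslv00, the LV behind /space.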
# 13  
Old 11-04-2014
ok

Code:
# lsfs
Name            Nodename   Mount Pt               VFS   Size    Options    Auto Accounting
/dev/hd4        --         /                      jfs2  1048576 --         yes  no
/dev/hd1        --         /home                  jfs2  20971520 --         yes  no
/dev/hd2        --         /usr                   jfs2  7340032 --         yes  no
/dev/hd9var     --         /var                   jfs2  2097152 --         yes  no
/dev/hd3        --         /tmp                   jfs2  7340032 --         yes  no
/dev/hd11admin  --         /admin                 jfs2  1048576 --         yes  no
/proc           --         /proc                  procfs --      --         yes  no
/dev/hd10opt    --         /opt                   jfs2  3145728 --         yes  no
/dev/livedump   --         /var/adm/ras/livedump  jfs2  1048576 --         yes  no
/dev/fwdump     --         /var/adm/ras/platform  jfs2  1048576 --         no   no
/dev/VMLibrary  --         /var/vio/VMLibrary     jfs2  31457280 rw         yes  no
/dev/fslv00     --         /space                 jfs2  --      rw         no   no


and

Code:
# lsvg -l `lsvg`
rootvg:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
hd5                 boot       1       1       1    closed/syncd  N/A
hd6                 paging     1       1       1    open/syncd    N/A
paging00            paging     2       2       1    open/syncd    N/A
hd8                 jfs2log    1       1       1    open/syncd    N/A
hd4                 jfs2       1       1       1    open/syncd    /
hd2                 jfs2       7       7       1    open/syncd    /usr
hd9var              jfs2       2       2       1    open/syncd    /var
hd3                 jfs2       7       7       1    open/syncd    /tmp
hd1                 jfs2       20      20      1    open/syncd    /home
hd10opt             jfs2       3       3       1    open/syncd    /opt
hd11admin           jfs2       1       1       1    open/syncd    /admin
livedump            jfs2       1       1       1    open/syncd    /var/adm/ras/livedump
lg_dumplv           sysdump    2       2       1    open/syncd    N/A
fwdump              jfs2       1       1       1    open/syncd    /var/adm/ras/platform
test1               jfs        130     130     1    open/syncd    N/A
bkclodb             jfs        50      50      1    open/syncd    N/A
rootvg_vio_1        jfs        30      30      1    closed/syncd  N/A
bkcloapp            jfs        28      28      1    open/syncd    N/A
diskpool_1:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
cloappa             jfs        280     280     1    open/syncd    N/A
ebs_backup1         jfs        116     116     1    open/syncd    N/A
paging_1            jfs        28      28      1    open/syncd    N/A
paging_2            jfs        32      32      1    open/syncd    N/A
VMLibrary           jfs2       60      60      1    open/syncd    /var/vio/VMLibrary
rootvg_6            jfs        60      60      1    open/syncd    N/A
rootvg_7            jfs        60      60      1    open/syncd    N/A
archive_log_2       jfs        40      40      1    open/syncd    N/A
diskpool_2:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
rootvg_1            jfs        60      60      1    open/syncd    N/A
rootvg_2            jfs        40      40      1    open/syncd    N/A
test_ORADB          jfs        680     680     1    open/syncd    N/A
rootvg_0            jfs        60      60      1    open/syncd    N/A
rootvg_8            jfs        60      60      1    open/syncd    N/A
test_compress       jfs        80      80      1    open/syncd    N/A
rootvg_61_3         jfs        76      76      1    open/syncd    N/A
diskpool_3:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
rootvg_53_upg       jfs        60      60      1    open/syncd    N/A
ebs_backup2         jfs        88      88      1    open/syncd    N/A
rootvg_61_2         jfs        72      72      1    open/syncd    N/A
rootvg_3            jfs        60      60      1    open/syncd    N/A
ebs_backup3         jfs        104     104     1    open/syncd    N/A
ebs_backup0         jfs        93      93      1    open/syncd    N/A
diskpool_4:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
clodba              jfs        680     680     1    open/syncd    N/A
oracle_ebs_2        jfs        628     628     2    open/syncd    N/A
disk_pool_5:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
dbrman              jfs        620     620     1    open/syncd    N/A
ORA_APP             jfs        280     280     1    open/syncd    N/A

0516-062 : Unable to read or write logical volume manager
        record. PV may be permanently corrupted. Run diagnostics
#

As you can see, it reports the error because it cannot read the patchtest VG.

# 14  
Old 11-04-2014
Bingo,
The culprit is
Code:
/dev/fslv00  --         /space                jfs2  -- rw         no  no

Unmount the /space filesystem:
umount -f /space

This should allow the patchtest VG to vary off automatically.
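Once /space is unmounted and the patchtest VG can be varied off, the remaining cleanup would presumably follow the same sequence as post #8, with the names from this thread (as root via oem_setup_env):
Code:
umount -f /space        # release fslv00, the last open LV in patchtest
varyoffvg patchtest     # vary the VG offline if it does not happen on its own
exportvg patchtest      # drop the VG definition from the ODM
rmdev -Rdl hdisk9       # finally remove the backing hdisk device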