Solaris 10 Live Upgrade Issue
Posted by Revo on 05-07-2012, 09:38 AM
Hi Duke, yes, that is correct; the disks are on a SAN.

I have tried running fsck on the disk in question. First, here is the Live Upgrade configuration and the disk list:

Code:
# cat /etc/lutab
# DO NOT EDIT THIS FILE BY HAND. This file is not a public interface.
# The format and contents of this file are subject to change.
# Any user modification to this file may result in the incorrect
# operation of Live Upgrade.
1:solenv1:C:0
1:/:/dev/dsk/c6t60A9800057396D64685A4D7A51725458d0s0:1
1:boot-device:/dev/dsk/c6t60A9800057396D64685A4D7A51725458d0s0:2
2:solenv2:C:0
2:/:/dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0:1
2:boot-device:/dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0:2

# echo | format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c6t60A9800057396D64685A4D7A51725458d0 <NETAPP-LUN-0.2 cyl 6398 alt 2 hd 16 sec 3072>
          /scsi_vhci/ssd@g60a9800057396d64685a4d7a51725458
       1. c6t60A9800057396D64685A4D7A52675733d0 <NETAPP-LUN-7320-500.08GB>
          /scsi_vhci/ssd@g60a9800057396d64685a4d7a52675733
       2. c6t60A9800057396D64685A51774F6E712Dd0 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 512>
          /scsi_vhci/ssd@g60a9800057396d64685a51774f6e712d
       3. c6t60A9800057396D64685A62414D324E54d0 <NETAPP-LUN-7320-50.00GB>
          /scsi_vhci/ssd@g60a9800057396d64685a62414d324e54
       4. c6t60A9800057396D64685A62414D345041d0 <NETAPP-LUN-7320-10.00GB>
          /scsi_vhci/ssd@g60a9800057396d64685a62414d345041
       5. c6t60A9800057396D6468344D7A4E795678d0 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 512>
          /scsi_vhci/ssd@g60a9800057396d6468344d7a4e795678
       6. c6t60A9800057396D6468344D7A4F4A3561d0 <NETAPP-LUN-0.2 cyl 6398 alt 2 hd 16 sec 2048>
          /scsi_vhci/ssd@g60a9800057396d6468344d7a4f4a3561
       7. c6t60A9800057396D6468344D7A4F356151d0 <NETAPP-LUN-0.2 cyl 6398 alt 2 hd 16 sec 4096>
          /scsi_vhci/ssd@g60a9800057396d6468344d7a4f356151
       8. c6t60A9800057396D6468344D7A4F424571d0 <NETAPP-LUN-0.2 cyl 6398 alt 2 hd 16 sec 1024>
          /scsi_vhci/ssd@g60a9800057396d6468344d7a4f424571
       9. c6t60A9800057396D6468344D7A4F456848d0 <NETAPP-LUN-0.2 cyl 5118 alt 2 hd 16 sec 256>
          /scsi_vhci/ssd@g60a9800057396d6468344d7a4f456848
      10. c6t60A9800057396D6468344D7A4F483044d0 <NETAPP-LUN-0.2 cyl 6398 alt 2 hd 16 sec 2048>
          /scsi_vhci/ssd@g60a9800057396d6468344d7a4f483044
Specify disk (enter its number): Specify disk (enter its number):
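For reference, the root slice each BE uses can be read back without hand-parsing /etc/lutab; a minimal sketch (BE name taken from the lutab above; note lutab itself is not a public interface, as its header warns):

Code:
# Supported way: list the filesystems configured for a BE
lufslist solenv2

# Quick-and-dirty alternative: pull the "/" device out of lutab
# (fragile by design; the file is not a public interface)
awk -F: '$2 == "/" { print $1, $3 }' /etc/lutab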

Below is the fsck output:

Code:
# fsck -F ufs /dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0
** /dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0
** Last Mounted on /a
** Phase 1 - Check Blocks and Sizes
INCORRECT DISK BLOCK COUNT I=400 (416 should be 224)
CORRECT? y

FRAGMENT 49976 DUP I=5828 LFN 0
<snip>
EXCESSIVE DUPLICATE FRAGMENTS I=5828
CONTINUE? y

<snip>

** Phase 2 - Check Pathnames
DIRECTORY CORRUPTED I=1497 OWNER=root MODE=40755
SIZE=512 MTIME=May 7 00:07 2012
DIR=?

SALVAGE? yes

MISSING '.' I=1497 OWNER=root MODE=40755
SIZE=512 MTIME=May 7 00:07 2012
DIR=?

FIX? yes

<snip>

FILESYSTEM MAY STILL BE INCONSISTENT.
224419 files, 10919276 used, 110132692 free (6596 frags, 13765762 blocks, 0.0% fragmentation)

***** FILE SYSTEM WAS MODIFIED *****
ORPHANED DIRECTORIES REATTACHED; DIR LINK COUNTS MAY NOT BE CORRECT.
***** FILE SYSTEM IS BAD *****

***** PLEASE RERUN FSCK *****
#
A rerun of fsck gives:

Code:
224419 files, 10920222 used, 110131746 free (5818 frags, 13765741 blocks, 0.0% fragmentation)

***** FILE SYSTEM WAS MODIFIED *****
ORPHANED DIRECTORIES REATTACHED; DIR LINK COUNTS MAY NOT BE CORRECT.

***** PLEASE RERUN FSCK *****
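Since the second pass still asks for a rerun, I could presumably keep rerunning it until it stops reporting modifications, e.g. with a small loop (a sketch, not yet run; the device path is copied from the run above, and it assumes fsck prints its summary to stdout):

Code:
# Keep checking until fsck no longer reports changes
DEV=/dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0
while fsck -F ufs -y "$DEV" | grep 'FILE SYSTEM WAS MODIFIED'; do
    echo "fsck made changes; running it again..."
done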
I am then able to delete the BE solenv2:

Code:
# ludelete solenv2
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
#
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
solenv1                    yes      yes    yes       no     -
#
So it looks like I'm getting back on track. Thanks very much for your input!
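If I recreate solenv2 later, I assume something along these lines would do it (a hedged sketch; the slice is the one solenv2 used per lutab above, and ufs matches the fsck run):

Code:
# Recreate the BE on the same slice it occupied before (sketch; not yet run)
lucreate -n solenv2 -m /:/dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0:ufs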

As an aside, looking at the results from the second run of fsck, should I rerun it until it reports clean?
