Hi Duke, thanks for that. I had a look at that document early this morning (I was at the end of an 18-hour session, so I may not have been focusing properly).
I have just tried it again
I was looking at the Can Delete column, which indicated it couldn't be deleted, but when I actually tried to run it, I got the following.
However, the output from lustatus has now changed.
Would I be right in thinking that I can perform an lucreate using the same names again?
You have to delete solenv2 before you can create the environment with the same name again... but there seems to be a problem with the mount points for the new environment. They seem to be on an external device (SAN?) which cannot be mounted properly (from what I can see in the error messages...).
Hi Duke, yes, that is correct; the disks are on a SAN.
I have tried running fsck on the specified disk; the output is below.
Quote:
# fsck -F ufs /dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0
** /dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0
** Last Mounted on /a
** Phase 1 - Check Blocks and Sizes
INCORRECT DISK BLOCK COUNT I=400 (416 should be 224)
CORRECT? y
FRAGMENT 49976 DUP I=5828 LFN 0
<snip>
EXCESSIVE DUPLICATE FRAGMENTS I=5828
CONTINUE? y
***** FILE SYSTEM WAS MODIFIED *****
ORPHANED DIRECTORIES REATTACHED; DIR LINK COUNTS MAY NOT BE CORRECT.
***** PLEASE RERUN FSCK *****
I was then able to delete the BE solenv2.
Quote:
# ludelete solenv2
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
#
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
solenv1 yes yes yes no -
#
So it looks like I'm getting back on track, thanks very much for your input.
As an aside, looking at the results from the second run of fsck, should I rerun it?
As you've deleted the BE, I don't think it is necessary to re-run fsck... just try to create the BE again after creating a new filesystem (newfs) on the target disk(s).
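For anyone following along, the suggested sequence would look roughly like the sketch below. The device path and BE name are taken from earlier in this thread; the lucreate mount options are standard usage, but double-check them against your own layout first, because newfs destroys everything on the slice.

```shell
# Recreate the UFS filesystem on the SAN slice that fsck reported problems on.
# WARNING: newfs wipes the slice -- verify the device path before running.
newfs /dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0

# Recreate the boot environment with the same name, putting its root
# on the freshly created filesystem (-m mountpoint:device:fstype).
lucreate -n solenv2 -m /:/dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0:ufs

# Confirm the new BE appears and is marked complete.
lustatus
```

Since the old solenv2 was already removed with ludelete, lucreate should accept the same name without complaint.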