Hi Duke, thanks for that. I had a look at that document early this morning (I was at the end of an 18-hour session, so I may not have been focusing properly).
I have just tried it again:
Code:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
solenv1                    yes      yes    yes       no     -
solenv2                    yes      no     no        no     UPDATING
I was looking at the Can Delete column, which indicated it couldn't be deleted, but when I actually tried to run it, I got the following:
Code:
# ludelete solenv2
INFORMATION: Removing invalid lock file.
ERROR: mount: The state of /dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0 is not okay
and it was attempted to be mounted read/write
mount: Please run fsck and try again
ERROR: cannot mount mount point </.alt.tmp.b-8og.mnt> device </dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0>
ERROR: failed to mount file system </dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0> on </.alt.tmp.b-8og.mnt>
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
WARNING: Unable to mount ABE <solenv2>: cannot complete lumk_iconf
WARNING: Unable to determine disk partition configuration information for BE <solenv2>.
ERROR: mount: The state of /dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0 is not okay
and it was attempted to be mounted read/write
mount: Please run fsck and try again
ERROR: cannot mount mount point </.alt.tmp.b-hpg.mnt> device </dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0>
ERROR: failed to mount file system </dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0> on </.alt.tmp.b-hpg.mnt>
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.2>
ERROR: Cannot mount BE <solenv2>.
mount: Please run fsck and try again
luupdall: WARNING: Could not mount the Root Slice of BE:"solenv2".
mount: The state of /dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0 is not okay
and it was attempted to be mounted read/write
However, the output from lustatus has now changed:
Code:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
solenv1                    yes      yes    yes       no     -
solenv2                    yes      no     no        yes    -
Would I be right in thinking that I can now run ludelete, and then perform an lucreate using the same name again?
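For what it's worth, a possible recovery sequence would look something like the following. This is only a sketch: the device path is taken from the error output above, the fsck step addresses the "state ... is not okay" complaint that blocked the mount, and the lucreate slice mapping is an assumption about where solenv2 lived, so check it against your own layout before running anything.

```shell
# 1. The mount errors say the slice's state is not okay, so check and
#    repair the ABE's file system first (use the raw device for fsck):
fsck -y /dev/rdsk/c6t60A9800057396D6468344D7A4F356151d0s0

# 2. Now that lustatus shows "Can Delete: yes", remove the stale BE:
ludelete solenv2

# 3. Recreate a BE with the same name; the slice mapping here is an
#    assumption -- substitute the slice solenv2 actually used:
lucreate -n solenv2 -m /:/dev/dsk/c6t60A9800057396D6468344D7A4F356151d0s0:ufs
```

Once ludelete succeeds and /etc/lu/ICF.2 is gone, reusing the name solenv2 in lucreate should be safe, since Live Upgrade only objects to names that still appear in lustatus.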