Solaris not booting with new BE after performing Live Upgrade.


 
# 1  
Old 03-11-2016

After creating the new BE and activating it with the luactivate command, the OS still boots into the old BE.

The steps I followed are below.
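(For reference, the lucreate step itself is not captured in the session below. A typical command for creating a ZFS BE in a separate pool would look roughly like this sketch; the options shown are illustrative, not my exact invocation.)

Code:
# Illustrative sketch only -- not the exact command that was run.
# -n names the new BE, -p names the ZFS pool that will hold it.
lucreate -n New_zfs -p rpool2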


Code:
bash-3.2#
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldZFS                     yes      yes    yes       no     -
New_zfs                    yes      no     no        yes    -
bash-3.2#
bash-3.2# luactivate New_zfs
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <oldZFS>

Generating boot-sign for ABE <New_zfs>
Generating partition and slice information for ABE <New_zfs>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/s10x_u11wos_24a
     zfs set mountpoint=<mountpointName> rpool/ROOT/s10x_u11wos_24a
     zfs mount rpool/ROOT/s10x_u11wos_24a

3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.
5. umount /mnt
6. zfs set mountpoint=/ rpool/ROOT/s10x_u11wos_24a
7. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <New_zfs> successful.
bash-3.2#
bash-3.2# init 6
propagating updated GRUB menu
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <New_zfs> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
File </etc/lu/GRUB_backup_menu> propagation successful
File </etc/lu/menu.cksum> propagation successful
File </sbin/bootadm> propagation successful


######### After Reboot ###########


bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldZFS                     yes      yes    yes       no     -
New_zfs                    yes      no     no        yes    -



Please suggest.

# 2  
Old 03-12-2016
Did you activate the new BE?
What's the output of lustatus before the reboot?
The new BE should be activated; from what you posted, it is not marked active on reboot.
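A quick check right before rebooting would show whether the activation actually stuck. A minimal sketch (bootadm list-menu assumes an x86 findroot GRUB setup like yours):

Code:
# Run just before the reboot (sketch only):
lustatus              # New_zfs should show "yes" under "Active On Reboot"
bootadm list-menu     # shows which menu.lst GRUB is using and the default entry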
# 3  
Old 03-12-2016
You did not give a boot environment argument, i.e. New_zfs, to luactivate.
# 4  
Old 03-12-2016
@br1an : I forgot to include the lustatus output from before the reboot. It does show New_zfs tagged "yes" for Active On Reboot.

@fpmurphy : I did activate the new BE with the luactivate New_zfs command. Is there anything I missed?


I tried the same steps again and got the same result.


Code:
bash-3.2#
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldZFS                     yes      yes    yes       no     -
New_zfs                    yes      no     no        yes    -
bash-3.2#
bash-3.2# luactivate New_zfs
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <oldZFS>

Generating boot-sign for ABE <New_zfs>
Generating partition and slice information for ABE <New_zfs>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from the Solaris failsafe or boot in Single User mode from Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following commands in sequence to mount the BE:

     zpool import rpool
     zfs inherit -r mountpoint rpool/ROOT/s10x_u11wos_24a
     zfs set mountpoint=<mountpointName> rpool/ROOT/s10x_u11wos_24a
     zfs mount rpool/ROOT/s10x_u11wos_24a

3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     <mountpointName>/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.
5. umount /mnt
6. zfs set mountpoint=/ rpool/ROOT/s10x_u11wos_24a
7. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <New_zfs> successful.
bash-3.2#
bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
oldZFS                     yes      yes    no        no     -
New_zfs                    yes      no     yes       no     -
bash-3.2#
bash-3.2#
bash-3.2# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       7.34G  12.2G    43K  /rpool
rpool/ROOT                  5.28G  12.2G    31K  legacy
rpool/ROOT/s10x_u11wos_24a  5.28G  12.2G  5.28G  /
rpool/dump                  1.00G  12.2G  1.00G  -
rpool/export                  63K  12.2G    32K  /export
rpool/export/home             31K  12.2G    31K  /export/home
rpool/swap                  1.06G  12.3G  1.00G  -
rpool2                      7.35G  2.43G  41.5K  /rpool2
rpool2/ROOT                 5.29G  2.43G    31K  legacy
rpool2/ROOT/New_zfs         5.29G  2.43G  5.29G  /
rpool2/dump                 1.03G  3.46G    16K  -
rpool2/swap                 1.03G  3.46G    16K  -


FYI: my alternate BE (the new one, New_zfs) is in a different zpool (rpool2).
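Since the two BEs sit in different pools, a rough sketch of the kind of checks that seem relevant here, i.e. which pool's GRUB menu the machine actually boots from and whether that menu knows about rpool2 (the disk device c1t1d0s0 below is only a placeholder, not my real device):

Code:
# Sketch only -- device names are placeholders.
bootadm list-menu                      # which menu.lst is active and its default entry
cat /rpool/boot/grub/menu.lst          # GRUB menu on the old pool
cat /rpool2/boot/grub/menu.lst         # GRUB menu on the new pool, if present
# If the BIOS still boots from the old pool's disk, its menu needs a findroot
# entry for rpool2; alternatively the boot blocks on the new pool's disk can
# be refreshed with installgrub:
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0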


Thanks
