TL;DR: Patched a separate boot environment to 11.4, activated it, and rebooted the system, only to have it boot-loop and report 11.3 on the patched boot environment.
Boot-loop behavior: the system went down, came up showing 11.3, then spewed a bunch of errors and reset itself. See below for the final message before the automatic reboot.
It is a normal process for us to create a new boot environment, patch it, activate it, and then boot into it. The update looked clean when we ran the history command
against the mounted boot environment. Once we were sure all users were off the system, we rebooted. The system printed a series of messages as below:
The portion below was the final message before the system automatically reset/rebooted:
What we had to do was shut the system down from the console, bring it back up, and get to the OpenBoot "ok" prompt to manually switch back to the original
boot environment. We did attempt to boot the 11.4 BE again but ran into the same issues.
What was strange is that when the 11.4 BE was booted, it showed 11.3 instead of 11.4. Yet when we booted into the original BE and checked
the version of the failed BE, it showed 11.4.
There were more messages than that, but I don't want to give out any confidential information. We currently have support looking into this, but it was strange. Thinking maybe we should have updated to an intermediate 11.4 release before going for the latest.
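For anyone following along, the workflow described above is roughly the following. This is only a sketch: the BE name "s11u4" is a placeholder, and the exact options can differ between SRUs, so check beadm(1M) and pkg(1) on your release.

```
pfexec beadm create s11u4                    # new boot environment
pfexec beadm mount s11u4 /mnt
pfexec pkg -R /mnt update                    # patch the mounted BE
pkg -R /mnt info entire | grep -i version    # confirm it really holds 11.4
pfexec beadm unmount s11u4
pfexec beadm activate s11u4                  # boot it on the next reboot
```

Checking `pkg -R /mnt info entire` before activating is a cheap sanity test for exactly the 11.3-vs-11.4 mismatch described above.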
Last edited by samthewildone; 07-16-2019 at 08:03 PM. Reason: cleanup
The original boot environment was not clean itself; when a new boot environment was created,
it inherited all the existing issues from the original. We've got a lot of work to do.
FYI, I already have an SR opened with Oracle.
We are looking to upgrade from S11.3 to S11.4 with the latest SRU.
Create new BE: success.
Mount new BE: success.
pkg -R /mnt update: updating the global zone went fine until it touched the local zone.
pkg: update failed (linked image... (2 Replies)
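A command sketch of the failing sequence above (BE name and mountpoint are placeholders). With non-global zones present, it can be simpler to let pkg create and manage the new BE itself, since it then updates the zones as linked images in one pass; whether that avoids this particular failure is something to confirm with support for your SRU.

```
# The sequence that failed at the local zone:
pfexec beadm create s11-sru
pfexec beadm mount s11-sru /mnt
pfexec pkg -R /mnt update              # fails: linked image (local zone)

# Alternative: have pkg create the BE and update zones together
pfexec pkg update --be-name s11-sru
```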
My OS: Solaris 5.10 Generic_147148-2 i86
Error: please review new boot environments using options
1. Solution - show me the commands
The partition is full; try to remove some unneeded files,
then try to compress some other unneeded files.
The man command creates a temp file under... (0 Replies)
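To find candidates for removal or compression on a full partition, a small helper like the one below can help. This is a generic sketch, not from the thread: `largest_files` is a name I made up, and `/var/tmp` is just an example starting point.

```shell
# largest_files DIR [N]: list the N largest files under DIR, sizes in KB.
largest_files() {
    # du -k prints per-file sizes in kilobytes; sort numerically, biggest first
    find "$1" -type f -exec du -k {} + 2>/dev/null | sort -rn | head -"${2:-10}"
}

# Example: the ten biggest files under /var/tmp, a common culprit
largest_files /var/tmp
```

Once the big files are known, the small ones worth keeping can be compressed with `compress` or `gzip` instead of deleted.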
I'm trying to automate the patching process using scripts and cron jobs in Solaris 11.
One of the things I'd like to do is clean up the old boot environments.
Unfortunately,
beadm destroy
requires a response:
:~$ pfexec beadm destroy -f solaris-13
Are you sure you want to destroy... (3 Replies)
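One workaround for cron jobs is to feed the answer on stdin, e.g. `yes | pfexec beadm destroy solaris-13` (newer releases may also have a no-confirmation flag; check beadm(1M) before relying on one). The sketch below demonstrates the stdin trick with `destroy_be`, a made-up stand-in for the real command, which reads its confirmation from stdin the same way.

```shell
# destroy_be is a stand-in for "pfexec beadm destroy <name>"; like the real
# command, it reads its "Are you sure?" confirmation from standard input.
destroy_be() {
    printf 'Are you sure you want to destroy %s? [y/N] ' "$1" >&2
    read -r answer
    [ "$answer" = y ] && echo "Destroyed $1"
}

# Piping "y" (or using yes(1)) answers the prompt non-interactively:
echo y | destroy_be solaris-13    # prints "Destroyed solaris-13"
```

Be careful with blanket `yes |` in cron: it answers every prompt affirmatively, so make sure the BE list you iterate over excludes the active one.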
Hi all,
I'm fairly new to Solaris and am just getting to grips with using LU (Live Upgrade) for OS patching purposes.
worcester#uname -a
SunOS worcester 5.10 Generic_144488-12 sun4v sparc SUNW,SPARC-Enterprise-T5220
I have successfully created and patched a new BE (boot environment) using the... (5 Replies)
Hi,
Is it possible to perform an luupgrade on the active boot environment in Solaris?
I want to perform this on BEAlpha - the disk that has BEOmega will be unavailable whilst performing the upgrade but I still want to install the patches using luupgrade.
Boot Environment Is... (4 Replies)
I'm having a weird situation: my system has 8 zones, running fine on Solaris x86_u4. I installed the Live Upgrade bundle patch and did a live upgrade. The new BE was created, but it missed one of the zones, and now if I mount the new BE I do not see that zone in the environment, so my question is how... (3 Replies)
OK, a couple of weeks ago I was fixing a cron report about perl not being happy with 'locale' info (LANG and LC not set). As a result, I was experimenting with setting the correct 'locale' in several areas (like /etc/sysconfig/i18n and who knows where). Somehow after a reboot, as soon as the OS starts... (3 Replies)
During the boot-up process, *nix runs the scripts linked into the runlevel directories rc#.d. What I'm wondering is: how do I control the environment those scripts see? I need to set a couple of environment variables, and I cannot do it from within the scripts because that poses a maintenance nightmare... (1 Reply)
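On Solaris specifically, one documented place for this is /etc/default/init: variable assignments there are placed by init into the environment of every process it spawns, including the rc#.d scripts, so nothing in the scripts themselves needs editing. A config sketch (the values are examples only; other *nix flavors have their own mechanisms, so check init(1M) on your system):

```
# /etc/default/init -- read by init; assignments here are exported to
# all init-spawned processes, including the rc#.d scripts.
TZ=US/Eastern
LANG=en_US.UTF-8
LC_COLLATE=en_US.UTF-8
```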