Posted by samthewildone in Operating Systems > Solaris on Saturday, 13 July 2019, 08:09 AM
[SOLVED] T5-4 Bootloop with 11.4 Boot Environment

TL;DR: Patched a separate boot environment to 11.4, activated it, and rebooted the system, only to have it boot loop and show 11.3 on the patched boot environment.

Bootloop process: The system went down, came back up showing 11.3, then spewed a bunch of errors and reset. See below for the final message before the automatic reboot.

BE = boot environment

Our normal process is to create a new boot environment, patch it, activate it, and then boot into it (a sketch of the commands is included after the console output below). The update looked clean when we ran the history command
against the mounted boot environment. Once we were sure all users were off the system, we rebooted. The system printed a bunch of messages like the following:
Code:
WARNING: mod_load: cannot load module 'dev'

Code:
Warning - stack not written to the dumpbuf

The portion below was the final message before the system automatically reset/rebooted:
Code:
Deferred dump not available.
dump subsystem not initialised
rebooting...
Resetting...
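
For reference, this is roughly the sequence we follow when patching. The BE name and mount point here are placeholders rather than our actual values, so treat it as a sketch of the process, not our exact commands:
Code:
# create a new boot environment and mount it (names are examples)
beadm create s11.4-be
beadm mount s11.4-be /mnt

# update the mounted BE and review what the update actually did
pkg -R /mnt update --accept
pkg -R /mnt history

# unmount, activate the patched BE, then reboot into it
beadm unmount s11.4-be
beadm activate s11.4-be
init 6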

So what we had to do was shut the system down from the console, bring it back up, and stop at the OK prompt to manually switch back to the original
boot environment. We did attempt to boot the 11.4 BE again, but ran into the same issues.
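
In case it helps anyone else, selecting a BE from the OK prompt looks roughly like this: boot -L lists the boot environments in the root pool, and boot -Z boots the dataset you pick (the pool and BE names below are placeholders):
Code:
{0} ok boot -L
{0} ok boot -Z rpool/ROOT/<original-be-name>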

What was strange is that when the 11.4 BE was booted, it showed 11.3 instead of 11.4. When we were able to boot into the original BE and check
the version of the failed BE, it showed 11.4.
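
For anyone curious, this is roughly how we checked it from the original BE (the BE name and mount point are placeholders):
Code:
# list the boot environments and see which one is active
beadm list

# mount the failed BE and check which release it claims to be
beadm mount s11.4-be /mnt
cat /mnt/etc/release
pkg -R /mnt info entire
beadm unmount s11.4-be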

There were more messages than that, but I don't want to give out any confidential information. We currently have support looking into this, but it was strange. Thinking maybe we should have updated to an intermediate release of 11.4 first instead of going straight for the latest.
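
If we do end up trying the stepped approach, something like the following should pin the update of the mounted BE to a specific intermediate version of the entire incorporation instead of the newest available; the version FMRI here is just a placeholder:
Code:
# update only as far as a chosen release of the "entire" incorporation
pkg -R /mnt update --accept entire@<intermediate-version-fmri>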

Last edited by samthewildone; 07-16-2019 at 08:03 PM. Reason: cleanup
 
