Solaris: T5-4 Bootloop with 11.4 Boot Environment
Post 303036902 by samthewildone, 07-16-2019 07:05 PM
The problem was figured out.

The original boot environment was not clean itself. Since a new boot environment is created
as a clone of the current one, it inherited all of the issues from the original. We've got a lot of work to do.
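For anyone who lands here later: on Solaris 11, beadm creates the new BE as a ZFS clone of the active one, so whatever is broken in the source carries straight over into the clone. A minimal sketch of how to see that; the BE name below is just an example:

    # Show existing BEs; "N" marks active now, "R" active on reboot
    beadm list

    # beadm clones the active BE, so the new BE starts with the same contents
    pfexec beadm create s11.4-clean

    # Inspect each dataset's origin snapshot to confirm what it was cloned from
    zfs list -r -o name,origin rpool/ROOT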
 

8 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Adding to boot-up environment

During the boot-up process *nix runs scripts linked into the runlevel directories rc#.d. What I'm wondering is, how do I control the environment that those scripts see? I need to set a couple environment variables, and I can NOT do it from within the scripts because it poses a maintenance nightmare... (1 Reply)
Discussion started by: DreamWarrior
1 Reply

2. UNIX for Dummies Questions & Answers

Messed up my boot environment or root profile

Ok, a couple weeks ago I was fixing a cron report about perl not happy with 'locale' info (LANG and LC not set). As a result, I was experimenting with setting the correct 'locale' in several areas (like /etc/sysconfig/i18n and who knows where). Somehow after a reboot, as soon as the OS starts... (3 Replies)
Discussion started by: Garball
3 Replies

3. Solaris

Update single zone in alternate boot environment.

I'm having a weird situation: my system has 8 zones, running fine on Solaris x86 u4. I installed the Live Upgrade bundle patch and did a live upgrade. The new BE was created but it missed one of the zones, and now if I mount the new BE I do not see that zone in the environment, so my question is how... (3 Replies)
Discussion started by: fugitive
3 Replies

4. Solaris

Solaris live upgrade on Active boot environment

Hi, Is it possible to perform an luupgrade on the active boot environment in Solaris? I want to perform this on BEAlpha - the disk that has BEOmega will be unavailable whilst performing the upgrade but I still want to install the patches using luupgrade. Boot Environment Is... (4 Replies)
Discussion started by: Mr_Webster
4 Replies
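As context for that thread: the usual Live Upgrade pattern applies patches to an inactive BE with luupgrade -t and activates it afterwards. A hedged sketch; the BE name, patch directory, and patch IDs are illustrative only:

    # Apply patches from a local directory to an inactive BE (names illustrative)
    luupgrade -t -n BEInactive -s /var/tmp/patches 119254-92 121430-97

    # Make it the BE for next boot, then switch with init (not reboot)
    luactivate BEInactive
    init 6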

5. Solaris

Restoring to previous Boot Environment

Hi all, I'm fairly new to Solaris and am just getting to grips with using LU (Live Upgrade) for OS patching purposes.
worcester# uname -a
SunOS worcester 5.10 Generic_144488-12 sun4v sparc SUNW,SPARC-Enterprise-T5220
I have successfully created and patched a new BE (boot environment) using the... (5 Replies)
Discussion started by: polo_mint4
5 Replies
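Falling back to the old BE is the standard escape hatch in that situation; a minimal sketch, assuming the previous BE was kept (names illustrative):

    # Check which BE is active now and which is active on reboot
    lustatus

    # Point the next boot at the previous BE; use init or shutdown, not
    # reboot(1M), so the Live Upgrade switch scripts run
    luactivate previousBE
    init 6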

6. Solaris

Automating old Boot Environment Cleanup Solaris 11

I'm trying to automate the patching process using scripts and cron jobs in Solaris 11. One of the things I'd like to do is clean up the old boot environments. Unfortunately, beadm destroy requires a response:
:~$ pfexec beadm destroy -f solaris-13
Are you sure you want to destroy... (3 Replies)
Discussion started by: os2mac
3 Replies
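On Solaris 11, beadm destroy also takes a capital -F that skips the confirmation prompt, which is what a cron job needs; a hedged sketch, with the BE name as a placeholder:

    # -F destroys without prompting; -f additionally forces an unmount if needed
    pfexec beadm destroy -fF solaris-13

    # Fallback if your beadm build lacks -F: feed the prompt a "y"
    yes | pfexec beadm destroy solaris-13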

7. UNIX for Beginners Questions & Answers

Lucreate Fails to Create Boot Environment

My OS: Solaris 5.10 Generic_147148-2 i86. Error: please review new boot environments using option 1. Solution - show me the commands. The partition is full; try to remove some unneeded files, then try to compress some other unneeded files. The man command creates a temp file under... (0 Replies)
Discussion started by: zbest1966
0 Replies

8. Solaris

How to remove pkg from zone in newly created boot environment?

FYI, I already have an SR opened with Oracle. We are looking to upgrade from S11.3 to S11.4 with the latest SRU.
Create new BE: success
Mount new BE: success
pkg -R /mnt update
The update of the global zone went fine until it touched a local zone.
pkg: update failed (linked image... (2 Replies)
Discussion started by: samthewildone
2 Replies
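The workflow in that post is the usual BE-based update; a hedged sketch of the same steps, with the BE name and mountpoint as illustrative placeholders:

    # Create and mount a new BE, then update the image rooted at the mountpoint
    pfexec beadm create s11.4
    pfexec beadm mount s11.4 /mnt
    pfexec pkg -R /mnt update --accept

    # On success, activate the new BE and reboot into it
    pfexec beadm activate s11.4
    init 6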
WIPE(1)                             LAM TOOLS                            WIPE(1)

NAME
       wipe - Shutdown LAM.

SYNTAX
       wipe [-bdhv] [-n <#>] [<bhost>]

OPTIONS
       -b      Assume local and remote shell are the same. This means that
               only one remote shell invocation is used to each node. If -b
               is not used, two remote shell invocations are used to each
               node.

       -d      Turn on debugging mode. This implies -v.

       -h      Print the command help menu.

       -v      Be verbose.

       -n <#>  Wipe only the first <#> nodes.

DESCRIPTION
       This command has been deprecated in favor of the lamhalt command.
       wipe should only be necessary if lamhalt fails and is unable to clean
       up the LAM run-time environment properly.

       The wipe tool terminates the LAM software on each of the machines
       specified in the boot schema, <bhost>. wipe is the topology tool that
       terminates LAM on the UNIX(tm) nodes of a multicomputer system. It
       invokes tkill(1) on each machine. See tkill(1) for a description of
       how LAM is terminated on each node.

       The <bhost> file is a LAM boot schema written in the host file
       syntax. CPU counts in the boot schema are ignored by wipe. See
       bhost(5). Instead of the command line, a boot schema can be specified
       in the LAMBHOST environment variable. Otherwise a default file,
       bhost.def, is used. LAM searches for <bhost> first in the local
       directory and then in the installation directory under etc/.

       wipe does not quit if a particular remote node cannot be reached or
       if tkill(1) fails on any node. A message is printed if either of
       these failures occurs, in which case the user should investigate the
       cause of failure and, if necessary, terminate LAM by manually
       executing tkill(1) on the problem node(s). In extreme cases, the user
       may have to terminate individual LAM processes with kill(1).

       wipe will terminate after a limited number of nodes if the -n option
       is given. This is mainly intended for use by lamboot(1), which
       invokes wipe when a boot does not successfully complete.

EXAMPLES
       wipe -v mynodes
               Shutdown LAM on the machines described in the boot schema,
               mynodes. Report about important steps as they are done.

FILES
       $LAMHOME/etc/lam-bhost.def
               default boot schema file

SEE ALSO
       recon(1), lamboot(1), tkill(1), bhost(5), lam-helpfile(5)

LAM 6.5.8                          November, 2002                        WIPE(1)