Recommended Patch Cluster Using ZFS Snapshots
Posted by christr in Operating Systems > Solaris on Tuesday, 28 August 2012, 12:27 AM

I have a question regarding installing recommended patch clusters via ZFS snapshots. Someone wrote a pretty good blog about it here:

Initial Program Load: Live Upgrade to install the recommended patch cluster on a ZFS snapshot

That article is similar to what I've done in the past. I've actually used this approach a few times before, but in this new situation I need to install the Recommended patch cluster to an alternate LU boot environment on a system that also contains non-global zones, and those zones live in separate ZFS pools (the zones are on MPxIO ZFS LUNs).

The lucreate -p option only seems to let me specify a single ZFS pool for the alternate boot environment, so it's not clear how to handle the zone pools.
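For reference, the basic flow I've used on systems without zones looks roughly like this; patchBE and rpool are just placeholder names for the new boot environment and the root pool:

    # Clone the current boot environment into a new BE in the root pool
    lucreate -n patchBE -p rpool

    # Verify the new BE is listed and marked complete
    lustatus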

I've also looked at the following from Oracle, but I still can't seem to find a clear answer for this:


Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning
Creating and Upgrading a Boot Environment When Non-Global Zones Are Installed (Tasks) - Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning
Synopsis - man pages section 1M: System Administration Commands


Has anyone ever applied a patch cluster to an alternate boot environment that contains zones in different ZFS pools? When I run the patch cluster against the alternate BE, I want it to patch the non-global zones along with the global zone, just as it would if I were running it from single-user mode. A sketch of the sequence I have in mind follows below.
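Here is the overall sequence I'm hoping will work, sketched with placeholder names (patchBE for the new BE, and the cluster unpacked under /var/tmp/10_Recommended); the exact -s path depends on how the cluster you downloaded lays out its patches and patch_order file:

    # Create the alternate BE -- the open question is how the zone roots
    # sitting in other ZFS pools get handled at this step
    lucreate -n patchBE

    # Apply the Recommended patch cluster to the alternate BE; luupgrade -t
    # is supposed to patch the non-global zones along with the global zone
    cd /var/tmp/10_Recommended
    luupgrade -t -n patchBE -s `pwd` `cat patch_order`

    # Activate the patched BE, then reboot with init (not reboot/halt) so
    # the Live Upgrade activation completes properly
    luactivate patchBE
    init 6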
 

6 More Discussions You Might Find Interesting

1. Solaris

Issue while installing: Solaris 10 SPARC Recommended Patch Cluster (2009.10.23)

Hello, As explained, I've encountered an issue while installing Solaris 10 SPARC Recommended Patch Cluster (2009.10.23). Actually, patch no 120011-14 stops with the following error: ERROR: attribute verification of </var/run/.patchSafeMode/root/usr/bin/passwd> failed file type <f>... (6 Replies)
Discussion started by: a.mauger
6 Replies

2. Solaris

Applying Recommended Patch Cluster to Whole Root Zone

Hi there, Apologies if this question has been asked and answered already, but I've not been able to find the thread. Question: Is it possible to apply the Solaris 10 Recommended Patch Cluster to a whole root (non-global) zone locally? i.e., apply the patch cluster from the non-global in... (3 Replies)
Discussion started by: nm146332
3 Replies

3. Solaris

Jumpstart and Applying Recommended Patch Cluster

I'm trying to setup our jumpstart server to automatically apply the latest patch cluster during installs, but I'm running into an issue. Every time Jumpstart runs it has this error. Obviously it's processing the patch_order file, so I'm not sure what I'm missing. ... (0 Replies)
Discussion started by: christr
0 Replies

4. Solaris

Oracle stopped updating Solaris 10 recommended patch cluster ?

Dear All, Has Oracle stopped updating the Solaris 10 Recommended patch cluster? From support.oracle.com I could see that the last patch bundle was released on 11th July, and there have been no updates after that. Does anyone know about any official announcement from Oracle on this? Thanks ... (1 Reply)
Discussion started by: abhi_8029
1 Replies

5. Solaris

Permissions to run ZFS Snapshots

Hi, I work as an Oracle Technical consultant (mainly DBA related), and I have used ZFS snapshots on previous projects which has helped me a great deal. I often take snapshots before doing some dev work, and then I can roll it back if I want to start again, or if it goes pear shaped!! I have... (4 Replies)
Discussion started by: AndyG
4 Replies

6. Solaris

Understanding ZFS Snapshots - why will it utilize space ?

Hi all, I am moving to Solaris 11 and am trying to understand how ZFS snapshots work. I chanced upon this Oracle blog and can't wrap my head around it. https://blogs.oracle.com/solaris/understanding-the-space-used-by-zfs-v2 Hope the gurus here can shed some light. ======= ... (4 Replies)
Discussion started by: javanoob
4 Replies
lustatus(1M)						  System Administration Commands					      lustatus(1M)

NAME
     lustatus - display status of boot environments

SYNOPSIS
     /usr/sbin/lustatus [-l error_log] [-o outfile] [BE_name] [-X]

DESCRIPTION
     The lustatus command is part of a suite of commands that make up the
     Live Upgrade feature of the Solaris operating environment. See
     live_upgrade(5) for a description of the Live Upgrade feature.

     The lustatus command displays the status information of the boot
     environment (BE) BE_name. If no BE is specified, the status information
     for all BEs on the system is displayed.

     The headings in the lustatus information display are described as
     follows:

     Boot Environment Name
         Name of the BE.

     Is Complete
         Indicates whether a BE is able to be booted. Any current activity
         or failure in an lucreate(1M) or luupgrade(1M) operation causes a
         BE to be incomplete. For example, if there is a copy operation
         proceeding on or scheduled for a BE, that BE is considered
         incomplete.

     Active Now
         Indicates whether the BE is currently active. The "active" BE is
         the one currently booted.

     Active On Reboot
         Indicates whether the BE becomes active upon next reboot of the
         system.

     Can Delete
         Indicates that no copy, compare, or upgrade operations are being
         performed on a BE. Also, none of that BE's file systems are
         currently mounted. With all of these conditions in place, the BE
         can be deleted.

     Copy Status
         Indicates whether the creation or repopulation of a BE is scheduled
         or active (that is, in progress). A status of ACTIVE, COMPARING
         (from lucompare(1M)), UPGRADING, or SCHEDULED prevents you
         performing Live Upgrade copy, rename, or upgrade operations.

     The following is an example lustatus display:

     Boot Environment      Is        Active  Active     Can     Copy
     Name                  Complete  Now     On Reboot  Delete  Status
     --------------------  --------  ------  ---------  ------  ----------
     disk_a_S7             yes       yes     yes        no      -
     disk_b_S7db           yes       no      no         no      UPGRADING
     disk_b_S8              no        no      no         no      -
     S9testbed             yes       no      no         yes     -

     Note that you could not perform copy, rename, or upgrade operations on
     disk_b_S8, because it is not complete, nor on disk_b_S7db, because a
     Live Upgrade operation is pending.

     The lustatus command requires root privileges.

OPTIONS
     The lustatus command has the following options:

     -l error_log
         Error and status messages are sent to error_log, in addition to
         where they are sent in your current environment.

     -o outfile
         All command output is sent to outfile, in addition to where it is
         sent in your current environment.

     -X
         Enable XML output. Characteristics of XML are defined in DTD, in
         /usr/share/lib/xml/dtd/lu_cli.dtd.<num>, where <num> is the version
         number of the DTD file.
OPERANDS
     BE_name
         Name of the BE for which to obtain status. If BE_name is omitted,
         lustatus displays status for all BEs in the system.
EXIT STATUS
     The following exit values are returned:

     0     Successful completion.

     >0    An error occurred.
FILES
     /etc/lutab
         list of BEs on the system

     /usr/share/lib/xml/dtd/lu_cli.dtd.<num>
         Live Upgrade DTD (see -X option)
ATTRIBUTES
     See attributes(5) for descriptions of the following attributes:

     +-----------------------------+-----------------------------+
     |       ATTRIBUTE TYPE        |       ATTRIBUTE VALUE       |
     +-----------------------------+-----------------------------+
     |Availability                 |SUNWluu                      |
     +-----------------------------+-----------------------------+
SEE ALSO
     lu(1M), luactivate(1M), lucancel(1M), lucompare(1M), lucreate(1M),
     lucurr(1M), ludesc(1M), ludelete(1M), lufslist(1M), lumake(1M),
     lumount(1M), lurename(1M), luupgrade(1M), lutab(4), attributes(5),
     live_upgrade(5)

SunOS 5.10                      23 Apr 2003                      lustatus(1M)
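For the scenario discussed in the thread above, a quick status check with lustatus might look like this; patchBE is the same hypothetical BE name used earlier:

    # Status for just the new BE; "Is Complete" should read "yes" before
    # attempting luactivate
    lustatus patchBE

    # Capture the status of all BEs to a file for the change record
    lustatus -o /var/tmp/lustatus.before.txt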