Operating Systems / Solaris. Posted by NewSolarisAdmin on 25 June 2010.
Need assistance using live upgrade to patch a zfs server.

I am new to using ZFS. I have a new Solaris 10 server and I would like to start using Live Upgrade so that I have a route to "get back to good" if things go badly when patching the server. In my searching so far I have found the following pages and learned a lot:

How to make and mount a clone of the BE to apply patches to:
Patching a live Solaris 10 system with LU, ZFS, and PCA | Probably

Some basics of live upgrade:
11.Maintaining Solaris Live Upgrade Boot Environments (Tasks) (Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning) - Sun Microsystems

For the purposes of this discussion, let's call my live BE Production and the cloned BE I will use for patching Patching. So if I follow the instructions in the first link above, it explains how to get started: make a cloned boot environment called Patching, mount it, and patch it; then set the box to boot from the Patching BE, reboot, and see if all is well (a rough command sketch of this forward path follows the quoted fallback procedure below). If it is not, I would use this procedure to fall back to the old, unpatched Production BE:

In case of a failure while booting to the target BE, the following process needs to be followed to fall back to the currently working boot environment:
1. Boot from the Solaris failsafe, or boot in single-user mode from the Solaris install CD or the network.
2. Mount the parent boot environment root slice on some directory (such as /mnt). You can use the following command to mount:
mount -Fzfs /dev/dsk/c0d0s0 /mnt
3. Run the luactivate utility without any arguments from the parent boot environment root slice, as shown below:
/mnt/sbin/luactivate
4. luactivate activates the previous working boot environment and indicates the result.
5. Exit single-user mode and reboot the machine.
(This info is from the first link.)
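For completeness, here is roughly the forward path I have pieced together from that first link, written out as commands. The mount point and the patch ID are just placeholders, and I am assuming a ZFS root pool so lucreate does not need any -m options:

# Clone the running Production BE into a new BE called Patching
lucreate -n Patching

# Mount the clone and patch it there, leaving the live system untouched
lumount Patching /.alt.Patching          # mount point is just an example
patchadd -R /.alt.Patching 123456-78     # placeholder patch ID
luumount Patching

# Make Patching the BE that boots next, then reboot
luactivate Patching
init 6    # use init or shutdown rather than reboot so the LU boot scripts run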

But if it boots fine from the patched Patching BE and all is well, then what? At the end of the instructions from the first link I have a server that is booted into the cloned environment rather than the regular Production one. So what now?

Here is what I am thinking.

In short: copy the changes from the Patching BE back to the Production BE, set the box to boot from the Production BE again, and reboot once more...

To do that I would (rough commands below):
1. Tell the server to boot back the old way, to the unpatched Production boot environment.
2. Then use:
lumake -n Production -s Patching
to copy over the changes I have made from the Patching BE to the Production BE.
3. Unmount the Patching BE and blow it and its clones away.
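Spelled out as commands, this is what I have in mind (BE names as above; I have not yet checked whether lumake is willing to re-populate the BE I have just booted back into):

# 1. Make Production the BE that boots next, and reboot into it
luactivate Production
init 6

# 2. Re-sync Production from the patched clone
lumake -n Production -s Patching

# 3. Get rid of the Patching BE and its underlying clones/snapshots
ludelete Patching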

Question: is that all there is to it? Do I need to do this in single-user mode from the console? Am I missing anything here? It seems too simple to me.
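For what it is worth, I was planning to sanity-check which BE is current, and which will be active on the next reboot, at each step with:

lustatus    # lists each BE with its "Active Now" / "Active On Reboot" flags
lucurr      # prints only the name of the BE the system is currently running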
 

lucurr(1M)

NAME
    lucurr - display the name of the active boot environment

SYNOPSIS
    /usr/sbin/lucurr [-l error_log] [-m mount_point] [-o outfile] [-X]

DESCRIPTION
    The lucurr command is part of a suite of commands that make up the Live Upgrade feature of the Solaris operating environment. See live_upgrade(5) for a description of the Live Upgrade feature.

    The lucurr command displays the name of the currently running boot environment (BE). If no BEs are configured on the system, lucurr displays the message "No Boot Environments are defined". Note that lucurr reports only the name of the current BE, not the BE that will be active upon the next reboot. Use lustatus(1M) or luactivate(1M) for this information.

    The lucurr command requires root privileges.

OPTIONS
    The lucurr command has the following options:

    -l error_log     Error and status messages are sent to error_log, in addition to where they are sent in your current environment.

    -m mount_point   Returns the name of the BE that owns mount_point, where mount_point is the mount point of a BE's root file system. This can be a mount point of the current BE or the mount point of a BE other than the current BE. If the latter, the file system of the BE must have been mounted with lumount(1M) or mount(1M) before entering this option.

    -o outfile       All command output is sent to outfile, in addition to where it is sent in your current environment.

    -X               Enable XML output. Characteristics of XML are defined in the DTD, in /usr/share/lib/xml/dtd/lu_cli.dtd.<num>, where <num> is the version number of the DTD file.

EXIT STATUS
    The following exit values are returned:

    0     Successful completion.
    >0    An error occurred.

FILES
    /etc/lutab                                 list of BEs on the system
    /usr/share/lib/xml/dtd/lu_cli.dtd.<num>    Live Upgrade DTD (see -X option)

ATTRIBUTES
    See attributes(5) for descriptions of the following attributes:

    ATTRIBUTE TYPE    ATTRIBUTE VALUE
    Availability      SUNWluu

SEE ALSO
    lu(1M), luactivate(1M), lucancel(1M), lucompare(1M), lucreate(1M), ludelete(1M), ludesc(1M), lufslist(1M), lumake(1M), lumount(1M), lurename(1M), lustatus(1M), luupgrade(1M), lutab(4), attributes(5), live_upgrade(5)

SunOS 5.10                  24 Jan 2002                  lucurr(1M)
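A minimal usage sketch of the command above (the /mnt path is just an example of a mount point where a BE has been mounted with lumount):

lucurr            # name of the BE this system is currently running
lucurr -m /mnt    # name of the BE whose root file system is mounted at /mnt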