Need assistance using live upgrade to patch a zfs server.


 
# 1  
Old 06-25-2010

I am new to ZFS. I have a new Solaris 10 server, and I would like to start using Live Upgrade so I have a route to "get back to good" if things go badly when patching the server. In my searching so far I have found the following pages and learned a lot...

How to make and mount a clone of the BE to apply patches to:
Patching a live Solaris 10 system with LU, ZFS, and PCA | Probably

Some basics of live upgrade:
11.Maintaining Solaris Live Upgrade Boot Environments (Tasks) (Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning) - Sun Microsystems
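Condensing those two pages, the forward clone-and-patch path seems to come down to a handful of commands. This is only a sketch of what I have pieced together, not something I have run yet; the BE names match the ones I use below, the mount point and patch ID are just examples:

```shell
# Clone the current boot environment (cheap with a ZFS root -- LU uses
# a ZFS snapshot/clone instead of copying slices):
lucreate -n Patching

# Mount the clone and apply patches to the alternate root
# (patch ID below is a hypothetical example):
lumount Patching /.alt.Patching
patchadd -R /.alt.Patching /var/tmp/patches/119254-92
luumount Patching

# Activate the patched BE and reboot into it:
luactivate Patching
init 6
```

One caveat I have seen mentioned: after luactivate you are supposed to reboot with init 6 or shutdown, not reboot or halt, or the BE switch does not complete properly.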

For the purposes of this discussion, let's call my live BE Production and the cloned BE I will use for patching Patching. So if I follow the instructions in the first link above, it explains how to get started: make a cloned boot environment called Patching, mount it, and patch it. Set the box to boot using the Patching BE, reboot, and see if all is well. If it is not, I would use this procedure to go back to the old, unpatched Production BE:

In case of a failure while booting to the target BE, the following process needs to be followed to fall back to the currently working boot environment:
1. Boot from Solaris failsafe, or boot in single-user mode from the Solaris Install CD or network.
2. Mount the parent boot environment root slice to some directory (like /mnt). You can use the following command to mount:
mount -F zfs /dev/dsk/c0d0s0 /mnt
3. Run the luactivate utility without any arguments from the parent boot environment root slice, as shown below:
/mnt/sbin/luactivate
4. luactivate activates the previous working boot environment and indicates the result.
5. Exit single-user mode and reboot the machine.
(This info is from the first link.)
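One thing I noticed: the mount command in that quoted procedure looks like a UFS-era example, since ZFS does not mount /dev/dsk slices. With a ZFS root I believe the fallback mount would use the BE's root dataset name instead, something like this (pool and dataset names are assumptions; check with zfs list first):

```shell
# From failsafe / single-user boot, with a ZFS root:
zfs list -r rpool/ROOT                    # find the old BE's dataset
mkdir -p /mnt
mount -F zfs rpool/ROOT/Production /mnt   # dataset name, not a disk slice
/mnt/sbin/luactivate                      # reactivate the old BE
umount /mnt
init 6
```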

But if it boots OK pointed at the patched Patching BE and all is well, then what? At the end of the instructions from the first link, I have a server booted to the cloned environment rather than the regular "Production" one. So what now?

Here is what I am thinking.

In short, copy the changes from the Patching BE to the Production BE, set it to boot back to the Production BE, and reboot again...

To do that I would:
1. Tell the server to boot back the old way to the unpatched Production boot environment.
2. Then use:
lumake -n Production -s Patching
to copy over the changes I have made from the Patching BE to the Production BE.
3. Unmount the Patching BE and blow it and its clones away...
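On the command line, I think the three steps above would look roughly like this. Again, just a sketch of my plan as written, not something I have tested, so I would appreciate a sanity check on it:

```shell
# 1. Point the boot back at the unpatched Production BE and reboot:
luactivate Production
init 6

# 2. After the reboot, copy the patched contents from Patching over
#    Production (not sure lumake will allow the currently active BE
#    as a target -- part of what I am asking):
lumake -n Production -s Patching

# 3. Delete the Patching BE and its underlying snapshots/clones:
ludelete Patching
```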

Question: is that all there is to it? Do I need to do this in single-user mode from the console? Am I missing anything here? This seems too simple to me.