Solaris patching issue with Live Upgrade


 
# 1  
Old 03-20-2013

I have a Solaris 10 SPARC box with a ZFS root file system, running two non-global zones. I am in the process of applying the Solaris Recommended patch cluster via Live Upgrade.
Although there is enough space in the root file systems of both zones, every time I run installpatchset it fails, complaining about insufficient space (in the alternate BE). It seems its snapshot is taking too much space. I am not sure how to fix this issue.
Code:
root@oraprod_sap21:/# zoneadm list -icv
  ID NAME                     STATUS     PATH                                BRAND    IP
   0 global                   running    /                                   native   shared
   1 oraprod_sap21-zesbr01    running    /zone/oraprod_sap21-zesbr01/root    native   shared
   3 oraprod_sap21-zesbq01    running    /zone/oraprod_sap21-zesbq01/root    native   shared
root@oraprod_sap21:/# df -h | grep -i root
rpool/ROOT/s10s_u9wos_14a        274G    11G   216G     5%    /
rpool/ROOT/s10s_u9wos_14a/var    274G    21G   216G     9%    /var
zesbq01_root_pool                 17G    21K    53M     1%    /zesbq01_root_pool
zesbr01_root_pool                 17G    21K    80M     1%    /zesbr01_root_pool
zesbq01_root_pool/root            17G   6.9G    10G    41%    /zone/oraprod_sap21-zesbq01/root
zesbr01_root_pool/zone            17G   6.2G    11G    37%    /zone/oraprod_sap21-zesbr01/root
root@oraprod_sap21:/# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
old_patch                  yes      yes    yes       no     -
19_march                   yes      no     no        yes    -
root@oraprod_sap21:/# cd /var/tmp/10_Recommended
root@oraprod_sap21:/var/tmp/10_Recommended# ./installpatchset -B 19_march --s10patchset
Setup ..................

Recommended OS Patchset Solaris 10 SPARC (2013.01.29)
Application of patches started 2013.03.19 23:12:28

Application of patches finished 2013.03.19 23:12:28

The following filesystems have available space less than the recommended limit
to safely continue installation of this patch set :
 /.alt.19_march/zone/oraprod_sap21-zesbq01/root-19_march (zesbq01_root_pool/root-19_march) : 54513kb available, 1260901kb recommended
 /.alt.19_march/zone/oraprod_sap21-zesbr01/root-19_march (zesbr01_root_pool/zone-19_march) : 80976kb available, 1252637kb recommended

The recommended limit is an estimated upper bound on the amount of space an
individual patch application operation may require to complete successfully.
Due to the way the recommended limit is estimated, it will always be greater
than the actual amount of space required, sometimes by a significant margin.
Note the recommended limit is neither the exact amount of free space required
to apply a patch, nor the amount of free space to completely install the
bundle; these interpretations are incorrect.
If the operator wishes to continue installation of this patch set at their own
risk, space checking can be overridden by invoking this script with the
'--disable-space-check' option.

Install log files written :
  /.alt.19_march/var/sadm/install_data/s10s_rec_patchset_short_2013.03.19_23.12.28.log
  /.alt.19_march/var/sadm/install_data/s10s_rec_patchset_verbose_2013.03.19_23.12.28.log
root@oraprod_sap21:/var/tmp/10_Recommended# cd /
root@oraprod_sap21:/# zfs list | grep -i 19_march
rpool/ROOT/19_march                      917M   216G  10.6G  /
rpool/ROOT/19_march/var                  483M   216G  21.8G  /var
rpool/ROOT/s10s_u9wos_14a@19_march      26.6M      -  10.7G  -
rpool/ROOT/s10s_u9wos_14a/var@19_march  46.9M      -  21.3G  -
zesbq01_root_pool/root@19_march         34.7M      -  6.87G  -
zesbq01_root_pool/root-19_march          301M  53.5M  6.70G  /zone/oraprod_sap21-zesbq01/root-19_march
zesbr01_root_pool/zone@19_march         33.1M      -  6.22G  -
zesbr01_root_pool/zone-19_march          275M  79.3M  6.29G  /zone/oraprod_sap21-zesbr01/root-19_march
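The installer output mentions a '--disable-space-check' option to override the check. If I understand it correctly, the invocation would be something like the line below, but I would rather find out where the space is going than bypass the check:
Code:
./installpatchset -B 19_march --s10patchset --disable-space-check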
Please suggest how to fix this issue.
# 2  
Old 03-21-2013
What's the output from
Code:
zfs list -t all | egrep 'rpool|root_pool'

There's also a known problem with Live Upgrade when zone names (and therefore ZFS dataset names) are long enough to make the output columns from df (IIRC) run together. I seem to remember that one of the LU scripts parses df output, and when the columns run together LU fails badly.
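As a rough check for that symptom (just my own quick test, not an official LU diagnostic), and assuming Solaris df -k normally prints six whitespace-separated fields per data line, anything where a long dataset name has merged columns should show up with a different field count:
Code:
# Print df -k data lines that do not have the expected six fields
# (long filesystem names can run into the kbytes column)
df -k | awk 'NR > 1 && NF != 6'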
# 3  
Old 03-21-2013
Here is the output:
Code:
root@oraprod_sap21:/# zfs list -t all | egrep 'rpool|root_pool'
rpool                                   57.3G   216G    99K  /rpool
rpool/ROOT                              33.0G   216G    21K  legacy
rpool/ROOT/19_march                      917M   216G  10.6G  /
rpool/ROOT/19_march/var                  483M   216G  21.8G  /var
rpool/ROOT/s10s_u9wos_14a               32.1G   216G  10.7G  /
rpool/ROOT/s10s_u9wos_14a@19_march      53.8M      -  10.7G  -
rpool/ROOT/s10s_u9wos_14a/var           21.4G   216G  21.3G  /var
rpool/ROOT/s10s_u9wos_14a/var@19_march  85.1M      -  21.3G  -
rpool/dump                              11.9G   216G  11.9G  -
rpool/export                              44K   216G    23K  /export
rpool/export/home                         21K   216G    21K  /export/home
rpool/swap                              12.3G   229G    16K  -
zesbq01_root_pool                       17.3G  53.2M    21K  /zesbq01_root_pool
zesbq01_root_pool/root                  6.91G  10.1G  6.86G  /zone/oraprod_sap21-zesbq01/root
zesbq01_root_pool/root@19_march         55.0M      -  6.87G  -
zesbq01_root_pool/root-19_march          301M  53.2M  6.70G  /zone/oraprod_sap21-zesbq01/root-19_march
zesbr01_root_pool                       17.3G  79.3M    22K  /zesbr01_root_pool
zesbr01_root_pool/zone                  6.26G  10.7G  6.21G  /zone/oraprod_sap21-zesbr01/root
zesbr01_root_pool/zone@19_march         51.2M      -  6.22G  -
zesbr01_root_pool/zone-19_march          275M  79.3M  6.29G  /zone/oraprod_sap21-zesbr01/root-19_march
# 4  
Old 03-21-2013
Where did the "...@19_march" snapshots come from? I don't seem to recall snapshots like that being created by "lucreate".

The zesbq01_root_pool seems to be only 17GB, which IMO is awfully small for a root pool. I'd be half tempted to just destroy that pool and start over.
# 5  
Old 03-21-2013
I had created the 19_march BE and after that ran lucreate, which might have created those snapshots. I can remove them if this is causing the issue.
Code:
root@oraprod_sap21:/# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
old_patch                  yes      yes    yes       no     -
19_march                   yes      no     no        yes    -
Most of our zone roots are 8 GB, since their occupancy is low. In this zone it is 17 GB, of which 6.9 GB is used, so about 10 GB is still free. That should not be an issue. Could it be anything to do with a quota? (I have sketched a quota check after the df output below.)
Code:
root@oraprod_sap21:/# df -h | grep -i root
rpool/ROOT/s10s_u9wos_14a        274G    11G   216G     5%    /
rpool/ROOT/s10s_u9wos_14a/var    274G    21G   216G     9%    /var
zesbq01_root_pool                 17G    21K    53M     1%    /zesbq01_root_pool
zesbr01_root_pool                 17G    22K    79M     1%    /zesbr01_root_pool
zesbq01_root_pool/root            17G   6.9G    10G    41%    /zone/oraprod_sap21-zesbq01/root
zesbr01_root_pool/zone            17G   6.2G    11G    37%    /zone/oraprod_sap21-zesbr01/root
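To rule out the quota question, I am planning to check the space-related properties on both zone root pools, something like this (standard zfs properties, pool names taken from the output above):
Code:
# Look for quotas or reservations that could be starving the -19_march clones
zfs get -r quota,refquota,reservation,refreservation zesbq01_root_pool zesbr01_root_pool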
# 6  
Old 03-22-2013
It looks like lucreate might have created another boot environment - or done something with the existing one. I'd clean out the pool and start over. A simple "lucreate -n name -p pool" should be all you need to do.

FWIW, I like putting new boot envs in a separate pool from the active boot env - it's slower and takes a lot more disk space, but you don't wind up with a maze of clones and snapshots.
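Roughly the sequence I have in mind, as a sketch only, using the BE name from this thread (lustatus shows 19_march as deletable; double-check that it really is the stale one, and substitute a separate pool for -p if you go that route):
Code:
# Remove the stale alternate BE, then recreate it from the active BE
ludelete 19_march
lucreate -n 19_march -p rpool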
# 7  
Old 03-22-2013
Do I need to do the following?
Code:
# Destroy the clones first, then their origin snapshots
# (a snapshot cannot be destroyed while a clone still depends on it)
zfs destroy zesbq01_root_pool/root-19_march
zfs destroy zesbq01_root_pool/root@19_march
zfs destroy zesbr01_root_pool/zone-19_march
zfs destroy zesbr01_root_pool/zone@19_march
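Or would it be safer to let Live Upgrade handle it? I was thinking of first confirming which datasets belong to the BE and then, as I understand it, removing the BE and its datasets through LU itself, something like:
Code:
# List the file systems Live Upgrade associates with the 19_march BE,
# then remove that BE through LU
lufslist 19_march
ludelete 19_march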