Sun Solaris 5.9 to 5.10 upgrade


 
# 8  
Old 05-09-2012
Quote:
Originally Posted by hicksd8
The SPARC usually has a network management port on the back which connects to the System Controller (ILOM/ALOM). If this has been given an IP address (different from the main interfaces) and you can connect to that, then you have a "console" connection. No GUI needed; just raw text mode. However, if you're remote and the network management port hasn't already been set up, you've got a problem. (You can't do an O/S upgrade in multi-user mode.)

Notice that you seem to have Oracle on here. The Oracle DB should be stopped before a ufsdump is taken (but you probably know that already).

You've got a volume manager running here too. However, in terms of backup I would say you should ufsdump d0, d2, d3, d4, d5 and d6.

Other forum members please do comment.
Thanks for the quick reply.
There is a management console that has been configured, but unfortunately the cable has been disconnected, so I have requested the onsite people to connect it.
I will definitely be taking backups of all the mount points.
How much downtime do we require to complete this task and get the system back to running as before?

Thank you once again for the quick reply.
# 9  
Old 05-09-2012
So if they connect the cable to the network management port and you know the IP address, then you can connect over your network directly to the console (using something like PuTTY) and game on!
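
For example, something along these lines (a sketch only; which commands apply depends on whether the service processor runs ALOM or ILOM, and the IP address here is made up):

Code:
# ALOM-style system controller
$ ssh admin@192.168.10.50
sc> console -f

# ILOM-style service processor
$ ssh root@192.168.10.50
-> start /SP/console

Either way you end up on the host's serial console, which survives the reboots an upgrade needs.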
# 10  
Old 05-09-2012
Quote:
Originally Posted by hicksd8
So if they connect the cable to the network management port and you know the IP address, then you can connect over your network directly to the console (using something like PuTTY) and game on!
I know this would be too much to ask, but could you let me know the steps that need to be done for this job?

I will connect to the console as admin; when I shut down my server and restart through the console, do I then need to run the command boot cdrom?

Also, any information on how long this upgrade takes would help, just to know the downtime required.

Thanks a lot for your replies; they help a lot.
# 11  
Old 05-09-2012
Once you are connected to the console you can log in as normal. Issue a

Code:
init 0

to take the system down (in an orderly way). This will take you to the "ok" prompt.

At the ok prompt (ensuring the cdrom is in the drive) type:

Code:
boot cdrom
This will take you into the install routine.
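
If you want to double-check where the cdrom alias points before committing, you can do that at the same prompt (a sketch; devalias is a standard OBP command):

Code:
ok devalias cdrom
ok boot cdrom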

My experience is that an upgrade will take circa 1 hour to 90 minutes tops.

Any other opinions out there??
# 12  
Old 05-09-2012
If I were you... since you are upgrading from 9 to 10... I would definitely go with Live Upgrade.

Basically you can split the mirror and Live Upgrade your system from Solaris 9 to Solaris 10 on the inactive leg. Boot to the new environment, and if something goes wrong the back-out plan is really fast and straightforward.
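
For illustration, the back-out is essentially just reactivating the old boot environment (using the sol9/sol10 BE names that come up later in this thread):

Code:
# fall back to the old BE if sol10 misbehaves
luactivate sol9
init 6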

You can read more here: Upgrading from Solaris 9 with a Root SVM Mirror to Solaris 10 with a Root ZFS Mirror with less than 10 Minutes of Downtime

I haven't followed that process but it looks good.

The good thing about LU is that the downtime is minimal, and you still preserve the old OS untouched...

Still you will need a console connection and an installation CD in case you run into problems.

Another good idea would be to replicate the environment on a test box (a local one is even better), document the procedure, and then go for the production box, but I know that is sometimes asking too much...

Good Luck!

Juan
# 13  
Old 05-10-2012
Quote:
Originally Posted by juan.brein
If I were you... since you are upgrading from 9 to 10... I would definitely go with Live Upgrade.

Basically you can split the mirror and Live Upgrade your system from Solaris 9 to Solaris 10 on the inactive leg. Boot to the new environment, and if something goes wrong the back-out plan is really fast and straightforward.

You can read more here: Upgrading from Solaris 9 with a Root SVM Mirror to Solaris 10 with a Root ZFS Mirror with less than 10 Minutes of Downtime

I haven't followed that process but it looks good.

The good thing about LU is that the downtime is minimal, and you still preserve the old OS untouched...

Still you will need a console connection and an installation CD in case you run into problems.

Another good idea would be to replicate the environment on a test box (a local one is even better), document the procedure, and then go for the production box, but I know that is sometimes asking too much...

Good Luck!

Juan

Hi,

I was thinking about Live Upgrade, as the downtime will be minimal. Could you let me know if the steps below are right?

This is my current filesystem layout:

Code:
bash-2.05# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d0         9.8G   5.5G   4.2G    57%    /
/proc                    0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
fd                       0K     0K     0K     0%    /dev/fd
swap                   5.2G    88K   5.2G     1%    /var/run
swap                   6.1G   878M   5.2G    15%    /tmp
/dev/md/dsk/d5          44G    12G    32G    28%    /db01
/dev/md/dsk/d6          90G    14G    76G    16%    /backup01
/dev/md/dsk/d4         2.0G    76M   1.8G     4%    /export/home
/dev/md/dsk/d2         9.8G   5.6G   4.1G    58%    /opt/oracle
/dev/md/dsk/d3          42G    42M    41G     1%    /opt/oracle2



Here are the outputs of metadb and metastat. Now the plan is to split off the submirror (d20 here) and upgrade it as below.

Code:
bash-2.05# metadb 
        flags           first blk       block count
       a m  p  luo        16              8192            /dev/dsk/c1t0d0s7
       a    p  luo        8208            8192            /dev/dsk/c1t0d0s7
       a    p  luo        16400           8192            /dev/dsk/c1t0d0s7
       a    p  luo        16              8192            /dev/dsk/c1t2d0s7
       a    p  luo        8208            8192            /dev/dsk/c1t2d0s7
       a    p  luo        16400           8192            /dev/dsk/c1t2d0s7
       a    p  luo        16              8192            /dev/dsk/c1t1d0s7
       a    p  luo        8208            8192            /dev/dsk/c1t1d0s7
       a    p  luo        16400           8192            /dev/dsk/c1t1d0s7

Code:
bash-2.05# metastat -p
d6 -m d16 d26 1
d16 1 1 c1t2d0s1
d26 1 1 c1t3d0s1
d5 -m d15 d25 1
d15 1 1 c1t2d0s0
d25 1 1 c1t3d0s0
d2 -m d12 d22 1
d12 1 1 c1t0d0s3
d22 1 1 c1t1d0s3
d1 -m d11 d21 1
d11 1 1 c1t0d0s1
d21 1 1 c1t1d0s1
d0 -m d10 d20 1
d10 1 1 c1t0d0s0
d20 1 1 c1t1d0s0
d3 -m d13 d23 1
d13 1 1 c1t0d0s4
d23 1 1 c1t1d0s4
d4 -m d14 d24 1
d14 1 1 c1t0d0s5
d24 1 1 c1t1d0s5



I have transferred the Solaris 10 ISO image to the server and mounted it as below:

Code:
#lofiadm -a /share/solaris-10-update-u10.iso
#mkdir /mnt/Solaris10u10
#mount -F hsfs -o ro /dev/lofi/1 /mnt/Solaris10u10



Now here I am not sure: when detaching, should I use d20 or the full device path like /dev/dsk/c1t1d0s0?

Code:
#pkgrm SUNWlucfg SUNWlur SUNWluu

#/mnt/Solaris10u10/Tools/Installers/liveupgrade20 -noconsole -nodisplay

#lucreate -c sol9 -n sol10 -m /:/dev/md/dsk/d50:ufs,mirror -m /:d20:detach,attach,preserve 

OR

#lucreate -c sol9 -n sol10 -m /:/dev/md/dsk/d50:ufs,mirror -m /:/dev/dsk/c1t1d0s0:detach,attach,preserve



Now upgrade to Solaris 10, activate the new BE, and reboot the system.
Code:
#luupgrade -u -n sol10 -s /mnt/Solaris10u10

#luactivate sol10  (activate sol10)

#init 6   (reboot with init 6)



Now we check which boot environment is active. Once Solaris 10 is the active one, we remove the older version, clear d0 to release its submirror (d10), attach that submirror to the new mirror (d50), and let it resync against the existing submirror under d50 (d20), which now holds Solaris 10.
Code:
#lustatus (check that the new BE is active)

#ludelete sol9

#metaclear d0

#metattach d50 d10

Could you let me know if I am missing any steps after all this is done, and which is advisable for lucreate: the submirror name (d20) or the full device path (/dev/dsk/c1t1d0s0)?


Thanks a lot for your suggestions.

---------- Post updated at 04:00 PM ---------- Previous update was at 03:58 PM ----------

Quote:
Originally Posted by hicksd8
Once you are connected to the console you can log in as normal. Issue a

Code:
init 0

to take the system down (in an orderly way). This will take you to the "ok" prompt.

At the ok prompt (ensuring the cdrom is in the drive) type:

Code:
boot cdrom

This will take you into the install routine.

My experience is that an upgrade will take circa 1 hour to 90 minutes tops.

Any other opinions out there??
Hi, I was thinking about Live Upgrade as it has minimal downtime, as I mentioned in my previous post.

Could you let me know if I am missing anything there? Once again, thanks for your support.
# 14  
Old 05-10-2012
I always run Live Upgrade over physical devices and have never done it over metadevices. It should work as documented, but I think it is better to remove some extra layers in case you have to roll back or troubleshoot boot issues.

In this case you would have to break the mirror like this:

Code:
# metadetach d0 d20
# metaroot /dev/dsk/c1t0d0s0

At this point verify /etc/system and /etc/vfstab; the system has to boot from /dev/dsk/c1t0d0s0, not from /dev/md/dsk/d0.
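
As a quick sanity check, something like this (a sketch; the rootdev line is the standard SVM one, and the device names are the ones from this thread):

Code:
# after metaroot, the md rootdev line (rootdev:/pseudo/md@0:0,0,blk)
# should be gone from /etc/system -- expect no output here:
grep rootdev /etc/system

# and the / entry in /etc/vfstab should reference the physical slice:
grep c1t0d0s0 /etc/vfstab
# expected something along the lines of:
# /dev/dsk/c1t0d0s0  /dev/rdsk/c1t0d0s0  /  ufs  1  no  -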

Reboot the system:

Code:
# init 6

Finish by removing the rest of the metadevices associated with the root filesystem:

Code:
# metaclear d20
# metaclear d0
# metaclear d10

Now you are ready to run the lucreate like this:

Code:
# lucreate -c sol9 -n sol10 -m /:/dev/dsk/c1t1d0s0:ufs
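
Once lucreate finishes, it's worth a quick lustatus before going any further. Roughly what you'd want to see (layout from memory; it varies a bit by release):

Code:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol9                       yes      yes    yes       no     -
sol10                      yes      no     no        yes    -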

These extra steps will require an additional reboot, as you can see... but since it is a major upgrade I think it's worth it.

One more thing: Live Upgrade will take some time, and remember that performance can be impacted while it runs.

Another tip: do not remove the sol9 environment until you are sure everything is working perfectly... I would let the database run for a couple of days and then recreate the mirror.
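
When you do recreate it, it's essentially the standard SVM root-mirroring procedure again, roughly like this (a sketch only: d30/d31/d32 are example names chosen to avoid the metadevices already on this box, and sol9 must be removed with ludelete before its slice is reused):

Code:
# metainit -f d31 1 1 c1t1d0s0   # submirror over the running sol10 root (-f: slice is mounted)
# metainit d30 -m d31            # one-way mirror over it
# metaroot d30                   # repoint /etc/system and /etc/vfstab at the mirror
# init 6                         # reboot onto the metadevice
# metainit d32 1 1 c1t0d0s0      # submirror over the old sol9 slice
# metattach d30 d32              # attach the second leg and let it resync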

Finally: backup, backup, backup, and have a roll-back plan... documented, just in case. It is not a good idea to be learning how to roll back when you realise the database is not coming back.

Talking about the DB, make sure the DB version you use is supported on Solaris 10; most of them are... but you never know.

Any questions, let me know.

Good luck