Full Discussion: Liveupgrade with RAID
Operating Systems > Solaris
Posted by chilinski on Friday, 18 January 2013, 03:05 PM
Liveupgrade with RAID

I am not a Solaris maven.

I've read and read and read the docs, and I just don't get it, so I'm here to ask for clarification.

I have a T2000 running Solaris 10 u8. It has three drives. Drive 0 and Drive 1 are mirrored with Solaris Volume Manager (the meta* commands). Drive 2 has some untouchable data. The box hasn't been looked after in a very long time and now it's on my plate.
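
For context, the root mirror across Drive 0 and Drive 1 looks roughly like the following. The submirror names d10/d20 and Drive 1's slice are my reconstruction, not copied from the box; d0 and c1t0d0s0 are the real names that show up later.

    # illustrative SVM root mirror: d0 mirrors / across Drive 0 and Drive 1
    metainit -f d10 1 1 c1t0d0s0    # submirror on Drive 0
    metainit -f d20 1 1 c1t1d0s0    # submirror on Drive 1 (slice name assumed)
    metainit d0 -m d10              # one-way mirror d0 built from d10
    metaroot d0                     # point /etc/vfstab and /etc/system at d0
    metattach d0 d20                # attach the second submirror after a reboot
    metastat d0                     # both submirrors should report Okay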

It needs to be patched, so my approach was to install a fourth drive (Drive 3) and do a lucreate to copy the system to a backup boot environment on Drive 3. I then patched the backup, luactivate(d) it, and booted from it. Everything was fine. Since it's a single, unmirrored drive, the new BE lives on c1t3d0s0.
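
In case the details matter, the sequence was roughly equivalent to this. The BE names and the patch directory are placeholders; I'm showing luupgrade -t for the patching step, though patchadd -R against a lumount'ed BE would do the same job.

    lucreate -c original -n patched -m /:/dev/dsk/c1t3d0s0:ufs   # copy the running system to Drive 3
    luupgrade -t -n patched -s /var/tmp/patches <patch_ids>      # apply patches to the inactive BE
    luactivate patched                                           # make the patched BE active on next boot
    init 6                                                       # use init/shutdown, not reboot, after luactivate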

After successful testing, I wanted to return control to Drive 0 so I could pull Drive 3 and use it on the next machine that needs updating. lumake didn't like me pointing it at either the metadevice (d0) or c1t0d0s0. In the end I got rid of the original BE that was on Drive 0 (ludelete) and did a lucreate from Drive 3 back to Drive 0, using its /dev/md/dsk/d0 address. When I activated Drive 0 and rebooted, it came up just fine: the mirror was in place and metastat reported all was good.
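
For the record, the return trip looked roughly like this (again, the BE names are placeholders, and the existing d0 mirror is simply reused as the lucreate target):

    ludelete original                               # drop the old BE that lived on the mirror
    lucreate -n restored -m /:/dev/md/dsk/d0:ufs    # copy the patched BE from Drive 3 back onto d0
    luactivate restored                             # make the mirrored BE active again
    init 6                                          # reboot onto Drives 0/1
    metastat d0                                     # confirm the mirror is healthy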

But that all seemed like a pretty clunky way to do it. Does anyone have a better, more educated and experienced approach?
 
