DiskSuite mirroring on x86 ?


 
# 1  
Old 11-08-2006
DiskSuite mirroring on x86 ?

Hi there, I have a 4-disk SunFire X4100 (x86) and my customer has specified that they want their disks in a mirrored configuration. My understanding was that DiskSuite (or whatever it's called now) didn't support x86. Is this the case, and if so, does anybody know of any options (preferably no-cost)?


Thanks
Gary
# 2  
Old 11-08-2006
SVM does support x86..

I will dig up a document I have for you...
Tornado
# 3  
Old 11-08-2006
Here you go..
This is for Solaris 10.

Description:
Sun has recently released x86 and x64 platforms that have built-in SCSI/Serial
Attached SCSI (SAS) RAID controllers.

It is recommended to use the hardware RAID controllers for the root disk mirror,
for performance and reliability reasons.

However, the Solaris[TM] Operating System (OS) also gives you the flexibility to mirror the
root disk using Solaris[TM] Volume Manager (SVM) in Solaris 10 OS.

Document Body:
This document provides a step-by-step procedure to install and maintain
Solaris Volume Manager in a Solaris 10 OS for x86 Platforms environment on
Sun Fire[TM] x64 platforms.

NOTE: This root mirroring procedure for Solaris Volume Manager has been tested with, and is
only recommended for, the Solaris 10 OS for x86 Update 1 (1/06) release or later.

Previous releases of Solaris 10 OS for x86 had some problems
which are fixed in Update 1, so it is recommended to use Solaris 10 U1 if
customers intend to use SVM for root mirroring.

Following is a list of servers on which this procedure can be used:

Sun Fire[TM] V20z / Sun Fire V40z
Sun Fire x4100 / Sun Fire x4200
Sun Fire x2100

Note: In the above list of servers, the Sun Fire x2100 uses an nVidia RAID
controller. Solaris OS does not support the nVidia RAID driver as of now, so Solaris
Volume Manager is the only option for mirroring the root disk.

Step-by-step procedure for SVM root mirroring:
The root and mirror disks are referred to here as the primary and secondary disks respectively.

1. Boot into single-user mode from the Solaris media (CD-ROM) or the network. [Refer to InfoDoc 21310 for booting from CD-ROM in single-user mode, InfoDoc 13267 for fdisk, and InfoDoc 84167 for PXE booting in Solaris 10.] You will need to initialize each disk prior to installation and/or mirroring.

In order to initialize any new x86 disk, you need to run fdisk prior to mirroring and/or installation. It is recommended to delete the current layout and then create the new partitions as follows. I am using c1t0d0s2 in this case, but that will vary according to your system and format output.

1.
fdisk /dev/rdsk/c1t0d0s2
WARNING: Device /dev/rdsk/c1t0d0s2:
The device does not appear to include absolute
sector 0 of the PHYSICAL disk (the normal location for an fdisk table).
Fdisk is normally used with the device that represents the entire fixed disk.
(For example, /dev/rdsk/c0d0p0 on x86 or /dev/rdsk/c0t5d0s2 on sparc).
Are you sure you want to continue? (y/n) y

Total disk size is 8924 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
Partition Status Type Start End Length %
========= ====== ============ ===== === ====== ===
1 Active Solaris2 0 365 366 4
2 Diagnostic 365 366 2 0


SELECT ONE OF THE FOLLOWING:
1. Create a partition
2. Specify the active partition
3. Delete a partition
4. Change between Solaris and Solaris2 Partition IDs
5. Exit (update disk configuration and exit)
6. Cancel (exit without updating disk configuration)
Enter Selection: 3
Specify the partition number to delete (or enter 0 to exit): 1
Are you sure you want to delete partition 1? This will make all files and
programs in this partition inaccessible (type "y" or "n"). y
Partition 1 has been deleted. This was the active partition.1

Total disk size is 8924 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
Partition Status Type Start End Length %
========= ====== ============ ===== === ====== ===
1 Diagnostic 365 366 2 0


SELECT ONE OF THE FOLLOWING:
1. Create a partition
2. Specify the active partition
3. Delete a partition
4. Change between Solaris and Solaris2 Partition IDs
5. Exit (update disk configuration and exit)
6. Cancel (exit without updating disk configuration)
Enter Selection: 3
Specify the partition number to delete (or enter 0 to exit): 1
Are you sure you want to delete partition 1? This will make all files and
programs in this partition inaccessible (type "y" or "n"). y
Partition 1 has been deleted.1
Total disk size is 8924 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders

Partition Status Type Start End Length %
========= ====== ============ ===== === ====== ===

WARNING: no partitions are defined!
SELECT ONE OF THE FOLLOWING:
1. Create a partition
2. Specify the active partition
3. Delete a partition
4. Change between Solaris and Solaris2 Partition IDs
5. Exit (update disk configuration and exit)
6. Cancel (exit without updating disk configuration)
Enter Selection: 1
Select the partition type to create:
1=SOLARIS2 2=UNIX 3=PCIXOS 4=Other
5=DOS12 6=DOS16 7=DOSEXT 8=DOSBIG
9=DOS16LBA A=x86 Boot B=Diagnostic C=FAT32
D=FAT32LBA E=DOSEXTLBA F=EFI 0=Exit? 1
Specify the percentage of disk to use for this partition[1]
(or type "c" to specify the size in cylinders). c
Enter starting cylinder number: 1
Enter partition size in cylinders: 8000
Should this become the active partition? If yes, it will be activated
each time the computer is reset or turned on.
Please type "y" or "n". y
Partition 1 is now the active partition.1
Total disk size is 8924 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
Partition Status Type Start End Length %
========= ====== ============ ===== === ====== ===
1 Active Solaris2 1 8000 8000 90

SELECT ONE OF THE FOLLOWING:
1. Create a partition
2. Specify the active partition
3. Delete a partition
4. Change between Solaris and Solaris2 Partition IDs
5. Exit (update disk configuration and exit)
6. Cancel (exit without updating disk configuration)
Enter Selection: 1
Select the partition type to create:
1=SOLARIS2 2=UNIX 3=PCIXOS 4=Other
5=DOS12 6=DOS16 7=DOSEXT 8=DOSBIG
9=DOS16LBA A=x86 Boot B=Diagnostic C=FAT32
D=FAT32LBA E=DOSEXTLBA F=EFI 0=Exit? B
Specify the percentage of disk to use for this partition
(or type "c" to specify the size in cylinders). c
Enter starting cylinder number: 8001
Enter partition size in cylinders: 924
Total disk size is 8924 cylinders
Cylinder size is 16065 (512 byte) blocks

Should this become the active partition? If yes, it will be activated
each time the computer is reset or turned on.
Please type "y" or "n". n
Total disk size is 8924 cylinders
Cylinder size is 16065 (512 byte) blocks
Cylinders
Partition Status Type Start End Length %
========= ====== ============ ===== === ====== ===
1 Active Solaris2 1 8000 8000 90
2 Diagnostic 8001 8919 919 10

SELECT ONE OF THE FOLLOWING:
1. Create a partition
2. Specify the active partition
3. Delete a partition
4. Change between Solaris and Solaris2 Partition IDs
5. Exit (update disk configuration and exit)
6. Cancel (exit without updating disk configuration)
Enter Selection: 5
#
Please note: if the disk geometry of the x86 disks is different, you will need to match it in the disk labeling under format and in fdisk.
print disk 0
Volume:
Current partition table (original):
Total disk cylinders available: 15940 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 518 - 15581 17.44GB (15064/0/0) 36575392
1 swap wu 1 - 432 512.16MB (432/0/0) 1048896
2 backup wm 0 - 15939 18.45GB (15940/0/0) 38702320
3 unassigned wm 433 - 517 100.77MB (85/0/0) 206380
4 unassigned wu 0 0 (0/0/0) 0
5 unassigned wu 0 0 (0/0/0) 0
6 unassigned wu 0 0 (0/0/0) 0
7 unassigned wu 0 0 (0/0/0) 0
8 boot wu 0 - 0 1.19MB (1/0/0) 2428
9 unassigned wu 0 0 (0/0/0) 0

print disk 1
Current partition table (original):
Total disk cylinders available: 2548 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 129 - 2404 17.44GB (2276/0/0) 36563940
1 swap wu 1 - 128 1004.06MB (128/0/0) 2056320
2 backup wm 0 - 2547 19.52GB (2548/0/0) 40933620
3 unassigned wm 2405 - 2417 101.98MB (13/0/0) 208845
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0
8 boot wu 0 - 0 7.84MB (1/0/0) 16065
9 unassigned wm 0 0 (0/0/0) 0

You will have to lay out your disks in format and label them with the same disk geometry as shown above, varied according to your specifications. It is recommended that the primary disk be the smaller of the two disks, per BugID 6225636.

2. Check for, and install, the latest Solaris 10 x86 SVM patches.
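
For example (the patch ID and path shown are only placeholders, check SunSolve for the current SVM/kernel patch for your release):

   # showrev -p | grep <patch-id>          ---> check whether the patch is already installed
   # patchadd /var/tmp/<patch-id>-<rev>    ---> install the downloaded patch if it is not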

3. Create one slice (preferably slice 3) of 100 MB (minimum recommended) on
the primary disk, for storing the metadb replicas.
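
A rough sketch of creating that slice interactively with format (the prompts and cylinder numbers shown are only illustrative; pick a starting cylinder and size that fit your disk's geometry):

   # format                        ---> select the primary disk (c1t0d0 in these examples)
   format> partition
   partition> 3                    ---> modify slice 3
       Enter partition id tag[unassigned]:
       Enter partition permission flags[wm]:
       Enter new starting cyl[0]: 433
       Enter partition size[0b, 0c, 0.00mb, 0.00gb]: 100mb
   partition> label                ---> write the new VTOC to disk
   partition> quit
   format> quit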

4. After creating the slice for the metadb replicas, copy the VTOC of the primary
disk to the secondary disk, so that the same slice information is copied to the
secondary disk. (In the examples, c1t0d0 is the primary and c1t1d0 is the secondary
disk.)

For example: # prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

5. Create metadb replicas in slice 3 of both primary and secondary disks.

For example: # metadb -afc 3 c1t0d0s3 c1t1d0s3

6. Create metadevices for the /, swap, and /export/home file systems. Create
metadevices for any other file systems that are part of the primary disk.

For example:
# metainit -f d10 1 1 c1t0d0s0 --->
This creates metadevice d10 for root slice of primary disk.
# metainit d20 1 1 c1t1d0s0
This creates metadevice d20 for root slice of secondary disk.
# metainit d0 -m d10
This creates a one-way mirror (d0) using d10.

In the same way, create metadevices for the other file systems, including swap (see the sketch below).
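
For example, for the swap slice (slice 1 here; the d11/d21/d1 names are just one naming convention):

   # metainit -f d11 1 1 c1t0d0s1    ---> submirror for swap on the primary disk
   # metainit d21 1 1 c1t1d0s1       ---> submirror for swap on the secondary disk
   # metainit d1 -m d11              ---> one-way mirror for swap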

7. Update the /etc/vfstab file with the metadevices for the file systems.

Example: # metaroot d0
--> This command will update vfstab, setting /dev/md/dsk/d0 as the / file system.
For other file systems and swap, manually edit the vfstab
entries (see the sketch below).
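
A sketch of what the manually edited vfstab lines might look like (the metadevice names and the /export/home entry are illustrative; match them to the metadevices you actually created):

   /dev/md/dsk/d1   -                 -              swap   -   no   -
   /dev/md/dsk/d3   /dev/md/rdsk/d3   /export/home   ufs    2   yes  -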

8. Reboot the system.

9. Once the system has booted, use either the "df" or "mount" command
to verify that the file systems are using the metadevices.
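
For example, / should now show /dev/md/dsk/d0 rather than /dev/dsk/c1t0d0s0:

   # df -k /
   # mount | grep '/dev/md'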

10. Attach the second submirror to each of the mirrors.

Example: # metattach d0 d20

This will attach metadevice d20 to d0.
After this command, mirror resync will start.

11. Once all the submirrors are attached, check that the status of all metadevices is
"Okay" (see the check below). Then make the secondary disk bootable using the installgrub
command.
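
A quick way to check the state before proceeding (the resyncs must have completed):

   # metastat | grep -i state    ---> every submirror should report "State: Okay"
   # metastat d0                 ---> full detail for the root mirror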

Note: From Solaris 10 x86 U1 onwards, the installboot command is obsolete. So, use the
installgrub command to make the second disk bootable. It is better to
run this command once, to make sure the secondary disk is bootable.

Example: # installgrub -fm /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0

The installgrub utility will update the master boot record of the
secondary disk so that it can be booted from.

12. Now set the secondary disk as the alternate boot path in the /boot/solaris/bootenv.rc file.
(This can be done by executing the eeprom command.)

Example: eeprom altbootpath="<secondary boot disk path>"

It is important to add this altbootpath to the bootenv.rc for successful mirroring.
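
The device path for altbootpath can be read from the /dev/dsk symlink of the secondary root slice; for example (the pci path shown is purely illustrative, yours will differ):

   # ls -l /dev/dsk/c1t1d0s0
   ... /dev/dsk/c1t1d0s0 -> ../../devices/pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@1,0:a
   # eeprom altbootpath="/pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@1,0:a"

Note that the leading /devices is dropped from the path given to eeprom.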

13. The system can now be safely rebooted to check that it boots with the
root disk mirrored.


Recovering from primary boot disk failure:

If the primary root disk fails, the system will continue to run from the
secondary boot disk. There will be SCSI errors on the console about the disk
failure.

"The primary root disk can be replaced online, without bringing down the system."


Note: It is not necessary to remove the failed disk using the cfgadm or devfsadm
commands. If a new disk is installed in place of the failed disk, just
execute the devfsadm command so that the OS can recognize the new disk.
The "format" command will then show the new disk. If the newly installed
disk has no fdisk partition, create the fdisk partition
from the format program so that the new disk can be used for mirroring.
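
For example, after physically swapping in the new disk:

   # devfsadm     ---> rebuild the /dev and /devices entries for the new disk
   # format       ---> the new disk should now appear in the selection list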

Step-by-Step Procedure for replacing a failed disk:

1. Insert the new disk in place of the failed disk. Create a Solaris fdisk partition
spanning the whole disk (see the sketch below).
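
One way to do this non-interactively is fdisk's -B option, which creates a single Solaris partition using the whole disk (the device name is illustrative; use the replaced disk's p0 device):

   # fdisk -B /dev/rdsk/c1t0d0p0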

2. Copy the VTOC of the secondary disk to the primary disk.
Example: prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2

3. Then re-attach the failed mirrors using the metareplace command.
Example: metareplace -e d0 < new disk >

Once the resynchronisation has completed, the primary disk is successfully
mirrored back online.

NOTE: The same procedure can be followed to replace a failed secondary
disk.
Tornado
# 4  
Old 12-26-2006
DiskSuite mirroring on x86 ?

This is a wonderful guide on Solaris 10. By any chance, does anyone know how to do this in Solaris 8 ?

Thanks,
Remi
# 5  
Old 12-26-2006
Quote:
Originally Posted by Remi
This is a wonderful guide on Solaris 10. By any chance, does anyone know how to do this in Solaris 8 ?

Thanks,
Remi
Solaris 10 is the minimum supported OS release for the SunFire X4100

But Solaris 8/9 is supported on some of the older hardware...
Solaris 8 02/02 for Sun LX50 (x86), Solaris 9 12/02 (x86), Sun LX65 Server, Sun LX60 Server, Sun LX50 Server, Sun Fire V65x Server, Sun Fire V60x Server

Have a look at this: http://sunsolve.sun.com/search/docum...ey=1-9-72150-1
Code:
Document Body:

Currently the task of setting up automatic failover on a SPARC(R) machine is
relatively straightforward; however, the same task on an x86 machine is
considerably more convoluted and difficult. For a SPARC machine, only a
couple of simple commands to set the nvalias in the NVRAM are needed to set
up automatic failover to the secondary drive. To set up automatic failover
on an x86 machine, the System Administrator needs to modify two files on
the boot floppy (boot.rc and bootenv.rc) and ALWAYS boot from the floppy.

Setting up automatic failover for a mirrored root:

Currently, for automatic failover to succeed, this solution requires use of
the boot floppy. The files boot.rc and bootenv.rc on the boot floppy need
to be modified, and autoboot must be set. The floppy is used as a primitive
type of boot PROM, allowing the system to fail over automatically to the
alternate boot device, since Solaris does not have control over the varied
types of BIOS that can be found on these machines.

To set up autofailover use the following procedure:

1) Create the boot floppy from Solaris CD2 using dd.exe.

2) Modify the /floppy/floppy0/solaris/boot.rc file:

  The following line in /floppy/floppy0/boot.rc:


    run /boot/solaris/bootconf.exe ${confflags}


  should be changed to:


    run /boot/solaris/bootconf.exe -n ${confflags}


  Add these lines immediately after the previous line:


    cd /options
    if .streq ( ${root-is-mounted}X , falseX )
      echo
      echo 'Failed to mount primary bootpath:'
      echo "  ${bootpath}"
      echo
      echo 'Trying alternate boot disk:'
      echo "  ${altbootpath}"
      echo
      setprop bootpath "${altbootpath}"
      run /boot/solaris/bootconf.exe ${confflags}
    endif


3) Modify the /floppy/floppy0/solaris/bootenv.rc file as follows.

  Make sure auto-boot? is set to true:


    setprop auto-boot? true


  Set auto-boot-timeout to an appropriately low value:


    setprop auto-boot-timeout 5


  Set altbootpath to your alternate submirror:


    setprop altbootpath <backup_root_device_id>


  Make sure the bootpath is set to the location of your primary root  
  partition:


    setprop bootpath <primary_root_device_id>


  e.g.
    setprop auto-boot? true
    setprop auto-boot-timeout 5
    setprop bootpath    
      /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/sd@0,0:a
    setprop altbootpath
      /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/sd@1,0:a


SVM Failed Disk Recovery

When one of the disks fails, reboot the system with the boot floppy inserted.
During the boot you will be told that there are state databases which no longer exist.
You will then be prompted to give the root password for system maintenance.

1) Enter the root password for system maintenance.

2) Execute metadb to see which state databases have failed.
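
The failed replicas are the ones whose flags column shows error letters (for example W for write errors) rather than the normal lowercase status flags; the -i option prints a legend explaining each letter:

    metadb -i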

3) Delete references to the dead state databases:

    metadb -d <failed disk> (e.g. c0t1d0s3)


4) Reboot the system, which should come up normally; however, you will see
some messages reporting that, for example, metadevice d21 is "unavailable
needs maintenance" for each of the slices that were on the failed disk.

5) Execute metastat and make a note of the slices and their disk slice
locations.

6) Shut down the server, replace the failed drive with a replacement, and power up again.

7) Create a copy of the good disk's partition information onto the new disk:

     fdisk -W /var/tmp/fdisk.dat  /dev/rdsk/c0t0d0p0  <good disk>
     fdisk -F /var/tmp/fdisk.dat  /dev/rdsk/c0t1d0p0  <replaced disk>


8) Add a boot block to the new drive as per the AnswerBook:

  http://docs.sun.com/db/doc/806-4073/...isksxadd-45774


e.g.
installboot /usr/platform/`uname -i`/lib/fs/ufs/pboot
/usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t1d0s2 <replaced disk>

9) Copy the disk layout of the good disk onto the new replacement for the failed disk:

 prtvtoc /dev/rdsk/<good disk> | fmthard -s - /dev/rdsk/<replaced disk>


10) Now create new state databases on the new disk:

       metadb -a -c3 <replaced disk>


11) Finally, execute metareplace for each failed slice:

       metareplace -e <SVM device> <replaced disk>


   e.g.        
       metareplace -e d30 c0t1d0s0
       metareplace -e d31 c0t1d0s1
       metareplace -e d32 c0t1d0s4
       metareplace -e d33 c0t1d0s5
       metareplace -e d34 c0t1d0s7          


You can monitor the progress of the re-sync by either using the metastat
command or by firing up the Solaris Management Console (SMC) and double
clicking on "The Computer" -> "Storage" -> "Enhanced Storage" -> "Volumes".

Issues:

There is an issue with booting from the second half of an SVM
mirror on certain x86 systems. For example, with a failed first
half of a mirror on a V60x, the system should boot from the second half of
the mirror, but instead fails with an error like the following:

 unable to mount a Solaris root file system from the device: 
 Disk target:SEGATE ST336607LSUN36G 0307 on Adaptec Ultra/320 
 on Board PCI bus 4, at Dev7, func1 Error message from mount:


 /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/sd@1,0:a


Workaround:

The easiest solution is to pull the failed first disk and put the second
disk in its place. The replacement disk can then be put in the free slot
and synced from there.

Tornado
# 6  
Old 01-17-2008
Thanks for the info but...

Is there something special about using fdisk?

I am currently working with an X4500 and I have already installed the system. The partitioning on the primary drive was done during the installation. I ran the prtvtoc command to move the VTOC over to the secondary drive, created all of the metadevices, ran metaroot, rebooted, and the box went bye-bye. I couldn't even sync the mirrors. I verified my vfstab etc. and it was correct, but it couldn't seem to mount the root ( / ) slice as rw. I tried a couple of variations in vfstab, and even tried to go back to mounting the original non-metadevices, but no luck. I ended up just rebuilding the system since I'm in a time crunch. I do need to get these mirrored, though.

Thanks for any info. Oh, and yes, I did find the document at Sun that indicates you should boot from a boot floppy. I haven't used one of those in at least 8 years, and it is not an acceptable resolution for a system of this caliber. Not to mention that it doesn't have a floppy drive. ;-)

# 7  
Old 01-17-2008
It sounds as if your vfstab may have been correct but your metadevices were not; you really didn't give enough information for anyone to be able to help. I have had no problems getting a Thumper mirrored using the standard procedures.