Full Discussion: Sun T3-1 hardware RAID
Post 302525648 by soliberus on Friday 27th of May 2011, 10:19:37 AM

Hi all

I've just received my T3-1. It has 8 disks, and I would like to configure RAID 1 on them. The Sun documentation states that you can either use the OpenBoot PROM (OBP) utility called Fcode, or configure RAID in software from within the Solaris OS.

The documentation doesn't make clear:

1. Whether the drives are hot-swappable if the RAID is configured using Fcode.
2. How I would know that a drive has failed (or is in the process of failing) if the RAID utility is only accessible from OBP.

Does anyone have any experience configuring RAID on a T3-1 server?

Many thanks
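Regarding point 2: once a hardware mirror exists, Solaris sees a single volume, so a failure tends to surface through raidctl(1M) status output rather than as a missing disk. Below is a minimal monitoring sketch that could be run from cron. The sample output is only an approximation of raidctl's format; the exact columns vary by controller and firmware, so the parsing is an assumption to adapt, not a definitive implementation.

```python
import re
import subprocess

# Hypothetical sample in the general shape of `raidctl -l` output on
# Solaris 10; real columns differ by controller/firmware revision.
SAMPLE_OUTPUT = """\
Volume                  Size    Stripe  Status    Cache  RAID
        Sub                     Size                     Level
                Disk
------------------------------------------------------------------
c0t0d0                  136.6G  N/A     DEGRADED  OFF    RAID1
        0.0.0   136.6G          GOOD
        0.1.0   136.6G          FAILED
"""

def degraded_volumes(raidctl_output: str) -> list:
    """Return volume names whose status line reports DEGRADED or FAILED."""
    bad = []
    for line in raidctl_output.splitlines():
        # Volume lines start with a cNtNdN device name in column one;
        # per-disk sub-lines are indented and are skipped.
        m = re.match(r"^(c\d+t\d+d\d+)\s", line)
        if m and ("DEGRADED" in line or "FAILED" in line):
            bad.append(m.group(1))
    return bad

def check_raid() -> list:
    """On a real Solaris host, run raidctl and report unhealthy volumes."""
    out = subprocess.run(["raidctl", "-l"],
                         capture_output=True, text=True).stdout
    return degraded_volumes(out)

print(degraded_volumes(SAMPLE_OUTPUT))  # prints ['c0t0d0']
```

A script like this, mailed from cron when the list is non-empty, is one way to get failure notification without watching OBP.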
 

9 More Discussions You Might Find Interesting

1. Solaris

Hardware RAID

I don't understand why SPARC platforms don't come with a hardware RAID controller. Sorry for my bad English, but it's crazy to always have to set up software RAID! I want hardware RAID; where can I find a solution? (7 Replies)
Discussion started by: jess_t03

2. Solaris

how to hardware RAID 1 on T5120

Hi, I have a T5120 SPARC with two 146 GB drives in the system. I will be installing Solaris 10 and also want the system mirrored using hardware RAID 1. The system came preinstalled from Sun; I did not do much on it. I booted the system using boot cdrom -s, gave format... (6 Replies)
Discussion started by: upengan78

3. Solaris

Sun T5120 hardware RAID question

Hi everyone, I've just purchased a Sun T5120 server with 2 internal disks. I've configured hardware RAID (mirror), and as a result the device tree in Solaris only contains one hard drive. My question is: how would I know when one of the drives becomes faulty? Thanks (2 Replies)
Discussion started by: soliberus

4. UNIX for Dummies Questions & Answers

RAID software vs hardware RAID

Hi, can someone tell me what the differences are between software and hardware RAID? Thanks for the help. (2 Replies)
Discussion started by: presul

5. Solaris

Hardware Raid - LiveUpgrade

Hi, I have a question: does Live Upgrade support hardware RAID? How should I choose the configuration of the system disk for Solaris 10 SPARC? 1. Hardware RAID-1 and UFS; 2. Hardware RAID-1 and ZFS; 3. SVM, UFS and RAID-1; 4. Software RAID-1 and ZFS. I care about this in the future to take... (1 Reply)
Discussion started by: bieszczaders

6. Hardware

Hardware RAID on Sun T2000 Server

Hi all, I have a Sun T2000 server. A couple of years ago I configured and mirrored the boot drive with another drive using hardware RAID 1 and the raidctl command. Following is the hardware RAID output. root@oracledatabaseserver / $ raidctl RAID Volume RAID RAID Disk... (0 Replies)
Discussion started by: Tirmazi

7. Solaris

Software RAID on top of Hardware RAID

Server model: T5120 with 146 GB x4 disks. OS: Solaris 10, installed on c1t0d0. I plan to use software RAID (Veritas Volume Manager) on the c1t2d0 disk. After formatting and labeling the disk, I'm still not able to detect it using vxdiskadm. Question: should I remove the hardware RAID on c1t2d0 first? My... (4 Replies)
Discussion started by: KhawHL

8. Solaris

Hardware RAID not recognize the new disk [Sun T6320]

We have hardware RAID configured on a Sun-Blade-T6320, and one of the disks failed, so we replaced it. But the hot-swapped disk is not recognized by the RAID. Kindly help with fixing this issue. We have 2 LDOMs configured on this server, and the server is running on a single disk. #... (8 Replies)
Discussion started by: rock123

9. Solaris

Hardware raid patching

Dear all, we have hardware RAID 1 implemented on Solaris disks. We need to patch the servers. Kindly let me know how to patch servers that have hardware RAID implemented. Thanks... Rj (7 Replies)
Discussion started by: jegaraman
did(7)						     Sun Cluster Device and Network Interfaces						    did(7)

NAME
       did - user-configurable disk ID driver

DESCRIPTION
       Note - Beginning with the Sun Cluster 3.2 release, Sun Cluster software includes an object-oriented command set. Although Sun Cluster software still supports the original command set, Sun Cluster procedural documentation uses only the object-oriented command set. For more information about the object-oriented command set, see the Intro(1CL) man page.

       Disk ID (DID) is a user-configurable pseudo device driver that provides access to underlying disk, tape, and CDROM devices. When the device supports unique device IDs, multiple paths to a device are determined according to the device ID of the device. Even if multiple paths are available with the same device ID, only one DID name is given to the actual device.

       In a clustered environment, a particular physical device will have the same DID name regardless of its connectivity to more than one host or controller. This, however, is only true of devices that support a globally unique device identifier, such as physical disks.

       DID maintains parallel directories for each type of device that it manages under /dev/did. The devices in these directories behave the same as their non-DID counterparts. This includes maintaining slices for disk and CDROM devices, as well as names for different tape device behaviors. Both raw and block device access are also supported for disks, by means of /dev/did/dsk and /dev/did/rdsk.

       At any point in time, I/O is only supported down one path to the device. No multipathing support is currently available through DID.

       Before a DID device can be used, it must first be initialized by means of the scdidadm(1M) command.

IOCTLS
       The DID driver maintains an admin node as well as nodes for each DID device minor. No user ioctls are supported by the admin node. The DKIOCINFO ioctl is supported when called against DID device nodes such as /dev/did/rdsk/d0s2. All other ioctls are passed directly to the driver below.

FILES
       /dev/did/dsk/dnsm       block disk or CDROM device, where n is the device number and m is the slice number
       /dev/did/rdsk/dnsm      raw disk or CDROM device, where n is the device number and m is the slice number
       /dev/did/rmt/n          tape device, where n is the device number
       /dev/did/admin          administrative device
       /kernel/drv/did         driver module
       /kernel/drv/did.conf    driver configuration file
       /etc/did.conf           scdidadm configuration file for non-clustered systems
       Cluster Configuration Repository (CCR)
                               scdidadm(1M) maintains configuration in the CCR for clustered systems

SEE ALSO
       devfsadm(1M), Intro(1CL), cldevice(1CL), scdidadm(1M)

NOTES
       DID creates names for devices in groups, in order to decrease the overhead during device hot-plug. For disks, device names are created in /dev/did/dsk and /dev/did/rdsk in groups of 100 disks at a time. For tapes, device names are created in /dev/did/rmt in groups of 10 tapes at a time. If more devices are added to the cluster than are handled by the current names, another group will be created.

Sun Cluster 3.2                     24 April 2001                       did(7)
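The FILES section above gives DID disk and CDROM devices a regular naming scheme, /dev/did/dsk/dNsM and /dev/did/rdsk/dNsM. As an illustration only (this helper is not part of Sun Cluster), such paths can be decomposed mechanically:

```python
import re

# Matches DID disk/CDROM device paths of the form /dev/did/dsk/d<N>s<M>
# or /dev/did/rdsk/d<N>s<M>, per the FILES section of did(7).
_DID_DISK = re.compile(r"^/dev/did/(dsk|rdsk)/d(\d+)s(\d+)$")

def parse_did_disk(path):
    """Return (access, device number, slice number) for a DID disk path,
    or None if the path is not a DID disk/CDROM device node."""
    m = _DID_DISK.match(path)
    if not m:
        return None
    access = "block" if m.group(1) == "dsk" else "raw"
    return (access, int(m.group(2)), int(m.group(3)))

print(parse_did_disk("/dev/did/rdsk/d0s2"))  # prints ('raw', 0, 2)
print(parse_did_disk("/dev/did/rmt/1"))      # prints None (tapes use /dev/did/rmt/n)
```

This mirrors the slice semantics described above: d0s2 is slice 2 of DID device 0, accessed raw via /dev/did/rdsk.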
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.