Full Discussion: Sun T3-1 hardware RAID
Special Forums Hardware Sun T3-1 hardware RAID Post 302526622 by DukeNuke2 on Wednesday 1st of June 2011 04:38:02 AM
Which means it doesn't work?
 

9 More Discussions You Might Find Interesting

1. Solaris

Hardware RAID

I don't understand why SPARC platforms don't come with a RAID controller. Sorry for my bad English, but it's crazy to always have to set up software RAID! I want hardware RAID; where can I find a solution? (7 Replies)
Discussion started by: jess_t03

2. Solaris

how to hardware RAID 1 on T5120

Hi, I have a T5120 SPARC with two 146 GB drives in the system. I will be installing Solaris 10 and also want the system mirrored using hardware RAID 1. The system came preinstalled from Sun and I did not do much on it. I booted the system using boot cdrom -s, gave format... (6 Replies)
Discussion started by: upengan78
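The raidctl(1M) sequence asked about here can be sketched as follows. The disk names c0t0d0 and c0t1d0 are assumptions (check the real names with format first), and the DRY_RUN guard only echoes the commands, since raidctl destroys data on the secondary disk:

```shell
#!/bin/sh
# Sketch of mirroring the internal disks with raidctl(1M) before an install.
# Disk names c0t0d0/c0t1d0 are assumptions: check yours with format(1M).
# DRY_RUN=1 (the default here) only echoes the commands instead of running
# them, since creating the volume destroys data on the secondary disk.
set -eu
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run raidctl -c c0t0d0 c0t1d0   # create a RAID 1 volume from the two disks
run raidctl -l                 # list volumes: the mirror appears as one disk
# Then boot cdrom and install Solaris onto the single RAID volume; relabel
# with format(1M) afterwards, as the volume geometry differs from the disks.
```

With DRY_RUN left at 1 this is safe to run anywhere; set DRY_RUN=0 only on the actual server, after double-checking the disk names.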

3. Solaris

Sun T5120 hardware RAID question

Hi everyone, I've just purchased a Sun T5120 server with 2 internal disks. I've configured hardware RAID (mirror), and as a result the device tree in Solaris only contains 1 hard drive. My question is: how would I know when one of the drives becomes faulty? Thanks (2 Replies)
Discussion started by: soliberus
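Since hardware RAID hides the second disk from Solaris, drive faults only show up in the controller's own status, queried with raidctl(1M). A hedged monitoring sketch; the embedded sample is an assumption modeled on raidctl list output, and the exact field layout varies by controller:

```shell
#!/bin/sh
# Sketch: with hardware RAID only one logical disk is visible to Solaris,
# so drive health has to be read from the controller with raidctl(1M).
# The sample below is an assumption modeled on raidctl list output.
sample='c1t0d0   136.6G  N/A  DEGRADED  OFF  RAID1
        0.1.0    136.6G       GOOD
        0.3.0    136.6G       FAILED'

# On the live server you would use: status=$(raidctl -l c1t0d0)
status=$sample
if echo "$status" | grep -qE 'FAILED|DEGRADED'; then
    echo "WARNING: RAID volume is degraded, check raidctl -l output"
fi
```

Run from cron, a check like this turns the otherwise invisible drive failure into an alert.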

4. UNIX for Dummies Questions & Answers

RAID software vs hardware RAID

Hi, can someone tell me what the differences are between software and hardware RAID? Thanks for the help. (2 Replies)
Discussion started by: presul

5. Solaris

Hardware Raid - LiveUpgrade

Hi, I have a question: does LiveUpgrade support hardware RAID? How should I choose the configuration of the system disk for Solaris 10 SPARC? 1. Hardware RAID-1 and UFS; 2. Hardware RAID-1 and ZFS; 3. SVM - UFS and RAID-1; 4. Software RAID-1 and ZFS. I care about this in the future to take... (1 Reply)
Discussion started by: bieszczaders

6. Hardware

Hardware RAID on Sun T2000 Server

Hi All, I have a Sun T2000 server. A couple of years ago I configured and mirrored the boot drive with another drive using hardware RAID 1 and the raidctl command. The following is the hardware RAID output. root@oracledatabaseserver / $ raidctl RAID Volume RAID RAID Disk... (0 Replies)
Discussion started by: Tirmazi

7. Solaris

Software RAID on top of Hardware RAID

Server model: T5120 with 146 GB x4 disks. OS: Solaris 10, installed on c1t0d0. I plan to use software RAID (Veritas Volume Manager) on the c1t2d0 disk. After formatting and labeling the disk, it is still not detected by vxdiskadm. Question: should I remove the hardware RAID on c1t2d0 first? My... (4 Replies)
Discussion started by: KhawHL
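A sketch of one plausible answer to the question above: delete the hardware RAID volume on c1t2d0 with raidctl(1M) before handing the disk to Veritas Volume Manager, so the OS sees a plain disk again. Deleting the volume destroys its data, so the DRY_RUN guard only echoes the commands:

```shell
#!/bin/sh
# Sketch of one plausible fix: delete the hardware RAID volume so the OS
# (and vxdiskadm) sees a plain disk. Deleting the volume DESTROYS its data.
# DRY_RUN=1 (the default here) only echoes the raidctl(1M) commands.
set -eu
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run raidctl -l          # confirm c1t2d0 really is a hardware RAID volume
run raidctl -d c1t2d0   # delete the volume
run devfsadm -C         # clean up stale device links before retrying vxdiskadm
```

Whether deleting the volume is actually required depends on the controller; treat this as a sketch to verify against the T5120 documentation, not a definitive answer.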

8. Solaris

Hardware RAID not recognize the new disk [Sun T6320]

We have hardware RAID configured on a Sun-Blade-T6320 and one of the disks failed, so we replaced it. But the hot-swapped disk is not recognized by the RAID. Kindly help with fixing this issue. We have 2 LDOMs configured on this server and it is running on a single disk. #... (8 Replies)
Discussion started by: rock123

9. Solaris

Hardware raid patching

Dear All, we have hardware RAID 1 implemented on Solaris disks. We need to patch the servers. Kindly let me know how to patch servers that have hardware RAID implemented. Thanks... Rj (7 Replies)
Discussion started by: jegaraman
BIOCTL(8)                 BSD System Manager's Manual                BIOCTL(8)

NAME
     bioctl -- RAID management interface

SYNOPSIS
     bioctl device command [arg [...]]

DESCRIPTION
     RAID device drivers which support management functionality can register
     their services with the bio(4) driver.  bioctl can then be used to
     manage the RAID controller's properties.

COMMANDS
     The following commands are supported:

     show [disks | volumes]
             Without any argument, bioctl will by default show information
             about all volumes and the logical disks used on them.  If disks
             is specified, only information about physical disks will be
             shown.  If volumes is specified, only information about the
             volumes will be shown.

     alarm [disable | enable | silence | test]
             Control the RAID card's alarm functionality, if supported.  If
             no argument is specified, its current state will be shown.
             Optionally, the disable, enable, silence, or test arguments may
             be specified to disable, enable, silence, or test the RAID
             card's alarm.

     blink start channel:target.lun | stop channel:target.lun
             Instruct the device at channel:target.lun to start or cease
             blinking, if there is ses(4) support in the enclosure.

     hotspare add channel:target.lun | remove channel:target.lun
             Create or remove a hot-spare drive at location
             channel:target.lun.

     passthru add DISKID channel:target.lun | remove channel:target.lun
             Create or remove a pass-through device.  The DISKID argument
             specifies the disk that will be used for the new device, and it
             will be created at the location channel:target.lun.  NOTE:
             Removing a pass-through device that has a mounted filesystem
             will lead to undefined behaviour.

     check start VOLID | stop VOLID
             Start or stop a consistency check on the volume with index
             VOLID.  NOTE: Not many RAID controllers support this feature.

     create volume VOLID DISKIDs [SIZE] STRIPE RAID_LEVEL channel:target.lun
             Create a volume at index VOLID.  The DISKIDs argument specifies
             the first and last disk, i.e. 0-3 will use disks 0, 1, 2, and
             3.  The SIZE argument is optional and may be specified if not
             all available disk space is wanted (also dependent on the
             RAID_LEVEL).  The volume will have the stripe size defined in
             the STRIPE argument and will be located at channel:target.lun.

     remove volume VOLID channel:target.lun
             Remove the volume at index VOLID, located at channel:target.lun.
             NOTE: Removing a RAID volume that has a mounted filesystem will
             lead to undefined behaviour.

EXAMPLES
     The following command, executed from the command line, shows the status
     of the volumes and their logical disks on the RAID controller:

           $ bioctl arcmsr0 show
           Volume Status   Size Device/Label           RAID Level Stripe
           =================================================================
            0     Building 468G sd0 ARC-1210-VOL#00    RAID 6     128KB  0% done
            0:0   Online   234G 0:0.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
            0:1   Online   234G 0:1.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
            0:2   Online   234G 0:2.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
            0:3   Online   234G 0:3.0 noencl <WDC WD2500YS-01SHB1 20.06C06>

     To create a RAID 5 volume at the SCSI 0:15.0 location on disks 0, 1, 2,
     and 3, with a stripe size of 64Kb on the first volume ID, using all
     available free space on the disks:

           $ bioctl arcmsr0 create volume 0 0-3 64 5 0:15.0

     To remove the volume 0 previously created at the SCSI 0:15.0 location:

           $ bioctl arcmsr0 remove volume 0 0:15.0

SEE ALSO
     arcmsr(4), bio(4), cac(4), ciss(4), mfi(4)

HISTORY
     The bioctl command first appeared in OpenBSD 3.8; it was rewritten for
     NetBSD 5.0.

AUTHORS
     The bioctl interface was written by Marco Peereboom <marco@openbsd.org>
     and was rewritten with multiple features by Juan Romero Pardines
     <xtraeme@NetBSD.org>.

BSD                             March 16, 2008                            BSD
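The `bioctl arcmsr0 show` output in the EXAMPLES section is easy to script against. A hedged sketch that flags any volume or disk whose status is not Online; the embedded sample is an abbreviated copy of the output above, and the field positions are assumptions based on that sample:

```shell
#!/bin/sh
# Sketch: flag anything in `bioctl <device> show` output that is not Online.
# The sample below abbreviates the EXAMPLES output; field positions are
# assumptions based on that sample and may differ on other controllers.
show='Volume Status   Size Device/Label RAID Level Stripe
=================================================================
 0     Building 468G sd0 ARC-1210-VOL#00 RAID 6 128KB 0% done
 0:0   Online   234G 0:0.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
 0:1   Online   234G 0:1.0 noencl <WDC WD2500YS-01SHB1 20.06C06>'

# On a live system you would use: show=$(bioctl arcmsr0 show)
echo "$show" | awk 'NR > 2 && $2 != "Online" { print "volume " $1 ": " $2 }'
```

Here the awk filter skips the two header lines and prints the index and status of anything still building, degraded, or offline.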
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.