12-05-2011
Software RAID on top of Hardware RAID
Server Model: T5120 with 146G x4 disks.
OS: Solaris 10 - installed on c1t0d0.
I plan to put the c1t2d0 disk under software RAID (Veritas Volume Manager). After formatting and labeling the disk, vxdiskadm is still not able to detect it.
Question:
Should I remove the hardware raid on c1t2d0 first?
My purpose in using Veritas Volume Manager is to make it easy to grow the disk space when required. With hardware RAID, once a partition is created and in use, it is not easy to expand its size. Please correct me if my understanding is wrong, and advise me on a better approach.
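For comparison, this is roughly what an online grow looks like under VxVM. This is only a sketch: the disk group `datadg` and volume `datavol` are hypothetical names, and it assumes a VxFS filesystem is mounted on the volume.

```shell
# Sketch only -- "datadg" and "datavol" are hypothetical names.
# vxresize grows the volume AND the mounted VxFS filesystem online
# in a single step, provided free space exists in the disk group:
/usr/sbin/vxresize -g datadg datavol +10g

# With a hardware RAID volume, growing typically means backing up,
# deleting and re-creating the volume, relabeling, and restoring.
```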
Thank you.
# raidctl
Controller: 1
Volume:c1t0d0
Volume:c1t2d0
Disk: 0.0.0
Disk: 0.1.0
Disk: 0.2.0
Disk: 0.3.0
# raidctl -l c1t0d0
Volume Size Stripe Status Cache RAID
Sub Size Level
Disk
----------------------------------------------------------------
c1t0d0 136.6G N/A OPTIMAL OFF RAID1
0.0.0 136.6G GOOD
0.1.0 136.6G GOOD
# raidctl -l c1t2d0
Volume Size Stripe Status Cache RAID
Sub Size Level
Disk
----------------------------------------------------------------
c1t2d0 136.6G N/A OPTIMAL OFF RAID1
0.2.0 136.6G GOOD
0.3.0 136.6G GOOD
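If the answer turns out to be yes (remove the hardware volume so VxVM can see the raw disk), the removal path might look roughly like the sketch below. This is an assumption-laden outline, not a tested procedure: deleting a RAID volume destroys any data on it, and the exact device names seen afterward depend on the controller.

```shell
# WARNING: deleting a hardware RAID volume destroys the data on it.
# Sketch only -- verify device names on your own system first.
raidctl -d c1t2d0        # delete the hardware RAID1 volume c1t2d0

# After deletion the member disks (0.2.0 and 0.3.0) should appear
# as separate targets; rebuild device links and relabel:
devfsadm -C
format                   # label the newly exposed disk(s)

# Hand the disk to Veritas Volume Manager:
vxdisksetup -i c1t2d0    # initialize the disk for VxVM
vxdiskadm                # then add it to a disk group interactively
```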