Solaris 10 Installation - Disks missing, and RAID
Post 302663093 by msarro on Wednesday 27th of June 2012 03:41:31 PM
This looks like exactly what we were looking for - we can gladly use RAID 1 instead of RAID 10; these boxes are already overprovisioned compared to what we have in the field. Thank you!

---------- Post updated at 03:41 PM ---------- Previous update was at 02:42 PM ----------

Awesome! It turns out the LSI 1068E controller supports both RAID 1E and hot spares, so I was able to run the following:

raidctl -C "0.0.0 0.1.0 0.2.0 0.3.0 0.4.0 0.5.0 0.6.0" -r 1E 1
which creates a 7-disk RAID 1E volume

Followed by:
raidctl -a set -g 0.7.0 c1t0d0
which adds a hot spare.
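
For anyone replicating this, the new volume and spare can be checked with raidctl's list mode; c1t0d0 below is the volume name already used in the hot-spare command, so adjust it if yours enumerates differently:

# list controllers and volumes, then show detail for the new volume
raidctl -l
raidctl -l c1t0d0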

Fantastic, thank you again!
 

10 More Discussions You Might Find Interesting

1. Solaris

Solaris 10 x86 Installation Will Not Boot From CD Disks

Problem: I am trying to install Solaris 10 x86 on a desktop PC (PC details unspecified) from downloaded ISO images (5 in all) on 5 CD disks. These were downloaded from the Sun website and unzipped. I insert Disk 1 of 5 into the CD drive and then restart the machine, thinking that it will launch... (5 Replies)
Discussion started by: RobSand

2. Red Hat

IBM RAID disks

We have a Red Hat Linux server running on IBM x445 hardware. There are external disks in an IBM EXP300 disk enclosure. The system is running RAID 5. One of the four IBM disks (73.4 GB 10k FRU 06P5760) has become faulty. The system is still up and running OK because of the RAID. In that same EXP300... (3 Replies)
Discussion started by: pdudley

3. Solaris

Move disks to different StorEdge, keeping RAID

Hi. I need to move a 5-disk RAID5 array from a SE3310 box to a different SE3310 array. After installing the disks in the "new" StorEdge device, I "would like" ;) to be able to have access to the data which is on the RAID. Essentially, the question is, how can this be done? :confused: I checked... (5 Replies)
Discussion started by: alexs77

4. Solaris

Solaris not recognizing RAID 5 disks

I've just installed Sol 10 Update 9 on a Sun 4140 server and have a RAID 1 configuration (two 136 GB drives) for the OS and have created a RAID 5 array (six 136 GB drives). When I log into the system I am unable to see the RAID 5 disks at all. I've tried using the devfsadm command but no luck and... (9 Replies)
Discussion started by: goose25

5. Linux

If i don't have raid disks can i shut down dmraid device-mapper?

Hello, my newly installed CentOS system loads dmraid modules on startup. I removed all LVM/RAID settings from the installation menus and after installation too, but dmraid is still there and it says: no raid disks found. I also did modprobe -r dm_raid45 and it does remove it, but only until... (7 Replies)
Discussion started by: tip78

6. AIX

SCSI PCI-X RAID Controller card RAID 5 AIX Disks disappeared

Hello, I have a SCSI PCI-X RAID controller card on which I had created a disk array of 3 disks. When I typed lspv, I used to see 3 physical disks (two local disks and one RAID 5 disk). Suddenly the RAID 5 disk array disappeared, so the hardware engineer thought the problem was with SCSI... (0 Replies)
Discussion started by: filosophizer

7. Solaris

Disks not recognized during Solaris 10 installation

Hi all, I am installing Solaris 10 on a new Sun Blade X6270 server. During the installation I get this error: ''disk not found''. I tried replugging the disks but the same error appears. Disk reference: SAS 146 GB Seagate 10000 RPM. Do I need to install a disk or controller driver, or do another... (2 Replies)
Discussion started by: saki_jumeau

8. Solaris

Disks missing from /devices folder. Not sure why.

Help Please! I picked up a V440 and it has 4 disks. I installed Solaris fine to disk 3, but I cannot see the other disks in Solaris. I have run probe-scsi-all from OBP and I see the other disks and they have names in devalias AFAIK. It's just in Solaris they do not appear. I have run... (6 Replies)
Discussion started by: greg1975

9. Solaris

Hardware RAID using three disks

Dear All, please find the command output below:
# raidctl -l
Controller: 1
        Volume: c1t0d0
        Disk: 0.0.0
        Disk: 0.1.0
        Disk: 0.3.0
# raidctl -l c1t0d0
Volume                  Size    Stripe  Status  Cache   RAID
        Sub                     Size
... (10 Replies)
Discussion started by: jegaraman

10. Solaris

Missing ASM Disks in Solaris 11.3 LDOM

Hi Guys, just a quick question; hopefully someone will have seen this before and will be able to enlighten me. I have been doing some Infrastructure Verification Testing, and one of the tests was booting the primary domain from alternate disks. This all went well - however, on restarting one of... (7 Replies)
Discussion started by: gull04
MPTUTIL(8)						    BSD System Manager's Manual 						MPTUTIL(8)

NAME
mptutil -- Utility for managing LSI Fusion-MPT controllers

SYNOPSIS
mptutil version
mptutil [-u unit] show adapter
mptutil [-u unit] show config
mptutil [-u unit] show drives
mptutil [-u unit] show events
mptutil [-u unit] show volumes
mptutil [-u unit] fail drive
mptutil [-u unit] online drive
mptutil [-u unit] offline drive
mptutil [-u unit] name volume name
mptutil [-u unit] volume status volume
mptutil [-u unit] volume cache volume enable|disable
mptutil [-u unit] clear
mptutil [-u unit] create type [-q] [-v] [-s stripe_size] drive[,drive[,...]]
mptutil [-u unit] delete volume
mptutil [-u unit] add drive [volume]
mptutil [-u unit] remove drive

DESCRIPTION
The mptutil utility can be used to display or modify various parameters on LSI Fusion-MPT controllers.

Each invocation of mptutil consists of zero or more global options followed by a command. Commands may support additional optional or required arguments after the command. Currently one global option is supported:

-u unit
        unit specifies the unit of the controller to work with. If no unit is specified, then unit 0 is used.

Volumes may be specified in two forms. First, a volume may be identified by its location as [xx:]yy where xx is the bus ID and yy is the target ID. If the bus ID is omitted, the volume is assumed to be on bus 0. Second, the volume may be specified by the corresponding daX device, such as da0.

The mpt(4) controller divides drives up into two categories. Configured drives belong to a RAID volume either as a member drive or as a hot spare. Each configured drive is assigned a unique device ID such as 0 or 1 that is shown in show config, and in the first column of show drives. Any drive not associated with a RAID volume as either a member or a hot spare is a standalone drive. Standalone drives are visible to the operating system as SCSI disk devices.

As a result, drives may be specified in three forms. First, a configured drive may be identified by its device ID. Second, any drive may be identified by its location as xx:yy where xx is the bus ID and yy is the target ID for each drive as displayed in show drives. Note that unlike volumes, a drive location always requires the bus ID to avoid confusion with device IDs. Third, a standalone drive that is not part of a volume may be identified by its corresponding daX device as displayed in show drives.

The mptutil utility supports several different groups of commands. The first group of commands provides information about the controller, the volumes it manages, and the drives it controls. The second group of commands is used to manage the physical drives attached to the controller. The third group of commands is used to manage the logical volumes managed by the controller. The fourth group of commands is used to manage the drive configuration for the controller.

The informational commands include:

version
        Displays the version of mptutil.

show adapter
        Displays information about the RAID controller such as the model number.

show config
        Displays the volume and drive configuration for the controller. Each volume is listed along with the physical drives that the volume spans. If any hot spare drives are configured, then they are listed as well.

show drives
        Lists all of the physical drives attached to the controller.

show events
        Displays all the entries from the controller's event log. Due to lack of documentation this command is not very useful currently and just dumps each log entry in hex.

show volumes
        Lists all of the logical volumes managed by the controller.

The physical drive management commands include:

fail drive
        Mark drive as ``failed requested''. Note that this state is different from the ``failed'' state that is used when the firmware fails a drive. Drive must be a configured drive.

online drive
        Mark drive as an online drive. Drive must be a configured drive in either the ``offline'' or ``failed requested'' state.

offline drive
        Mark drive as offline. Drive must be a configured, online drive.

The logical volume management commands include:

name volume name
        Sets the name of volume to name.

volume cache volume enable|disable
        Enables or disables the drive write cache for the member drives of volume.
volume status volume
        Display more detailed status about a single volume, including the current progress of a rebuild operation if one is being performed.

The configuration commands include:

clear
        Delete the entire configuration including all volumes and spares. All drives will become standalone drives.

create type [-q] [-v] [-s stripe_size] drive[,drive[,...]]
        Create a new volume. The type specifies the type of volume to create. Currently supported types include:

        raid0   Creates one RAID0 volume spanning the drives listed in the single drive list.
        raid1   Creates one RAID1 volume spanning the drives listed in the single drive list.
        raid1e  Creates one RAID1E volume spanning the drives listed in the single drive list.

        Note: Not all volume types are supported by all controllers.

        If the -q flag is specified after type, then a ``quick'' initialization of the volume will be done. This is useful when the drives do not contain any existing data that need to be preserved.

        If the -v flag is specified after type, then more verbose output will be enabled. Currently this just provides notification as drives are added to volumes when building the configuration.

        The -s stripe_size parameter allows the stripe size of the array to be set. By default a stripe size of 64K is used. The list of valid values for a given type is listed in the output of show adapter.

delete volume
        Delete the volume volume. Member drives will become standalone drives.

add drive [volume]
        Mark drive as a hot spare. Drive must not be a member of a volume. If volume is specified, then the hot spare will be dedicated to that volume. Otherwise, drive will be used as a global hot spare backing all volumes for this controller. Note that drive must be as large as the smallest drive in all of the volumes it is going to back.

remove drive
        Remove the hot spare drive from service. It will become a standalone drive.

EXAMPLES
Mark the drive at bus 0 target 4 as offline:

        mptutil offline 0:4

Create a RAID1 array from the two standalone drives da1 and da2:

        mptutil create raid1 da1,da2

Mark standalone drive da3 as a global hot spare:

        mptutil add da3
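
As a rough sketch for anyone trying to mirror the layout from the thread above (a seven-drive RAID 1E plus a hot spare) with mptutil: the daX device names here are assumptions and will depend on how the standalone drives enumerate on your controller.

        # create one RAID1E volume spanning seven standalone drives (device names assumed)
        mptutil create raid1e da1,da2,da3,da4,da5,da6,da7
        # mark an eighth standalone drive as a global hot spare backing all volumes
        mptutil add da8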
SEE ALSO
        mpt(4)

HISTORY
The mptutil utility first appeared in FreeBSD 8.0.

BUGS
The handling of spare drives appears to be unreliable. The mpt(4) firmware manages spares via spare drive ``pools''. There are eight pools numbered 0 through 7. Each spare drive can only be assigned to a single pool. Each volume can be backed by any combination of zero or more spare pools. The mptutil utility attempts to use the following algorithm for managing spares. Global spares are always assigned to pool 0, and all volumes are always backed by pool 0. For dedicated spares, mptutil assigns one of the remaining 7 pools to each volume and assigns dedicated drives to that pool. In practice, however, it seems that assigning a drive as a spare does not take effect until the box has been rebooted. Also, the firmware renumbers the spare pool assignments after a reboot, which undoes the effects of the algorithm above. Simple cases such as assigning global spares seem to work ok (albeit requiring a reboot to take effect) but more ``exotic'' configurations may not work reliably.

Drive configuration commands result in an excessive flood of messages on the console.

The mpt version 1 API that is used by mptutil and mpt(4) does not support volumes above two terabytes. This is a limitation of the API. If you are using this adapter with volumes larger than two terabytes, use the adapter in JBOD mode. Utilize geom(8), zfs(8), or another software volume manager to work around this limitation.
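
As an illustration of that last workaround, here is a minimal sketch of handing the drives to ZFS instead, assuming the controller is in JBOD mode and the standalone disks show up as da1 through da8 (pool and device names are assumptions):

        # build a double-parity raidz2 pool from seven drives and keep one as a ZFS hot spare
        zpool create tank raidz2 da1 da2 da3 da4 da5 da6 da7 spare da8
        # verify the resulting layout
        zpool status tank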
BSD                             August 16, 2009                             BSD