Full Discussion: IBM RAID disks
Post 302276848 by pdudley, Wednesday 14 January 2009, 08:17 PM
IBM RAID disks

We have a Red Hat Linux server running on IBM x445 hardware, with external disks in an IBM EXP300 disk enclosure configured as RAID 5. One of the four IBM disks (73.4 GB, 10k, FRU 06P5760) has become faulty; the system is still up and running OK because of the RAID. In that same EXP300 enclosure we have two other disks, with exactly the same specs and FRU number, that are currently unused. They were briefly attached to another system that is no longer in use, were disconnected from it, and have been sitting in the EXP300 enclosure doing nothing since. All of these disks are hot-swappable.

What I would like to know is: can I use one of these spare disks to replace the faulty one? And since they were previously used briefly on another system, do they have to be formatted or initialised in some way before use? The replacement sequence I have in mind is sketched below.
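
For context, this is only a sketch of the swap I am contemplating, assuming the array is Linux software RAID managed by mdadm; /dev/md0, /dev/sdb, and /dev/sdc are hypothetical device names. If the array is instead handled by a hardware ServeRAID controller, the rebuild would go through the controller's own management utility rather than these commands.

    # Confirm the degraded state and identify the failed member
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Mark the faulty disk failed (if mdadm has not already) and
    # remove it from the array
    mdadm /dev/md0 --fail /dev/sdc1
    mdadm /dev/md0 --remove /dev/sdc1

    # Hot-swap in the spare, copy the partition table from a surviving
    # member, and add it back; the resync rewrites the new member in
    # full, so no separate format of its old contents is needed
    sfdisk -d /dev/sdb | sfdisk /dev/sdc
    mdadm /dev/md0 --add /dev/sdc1

    # Watch the rebuild progress
    cat /proc/mdstat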

Thanks
Paul
 

MFI(4)							   BSD Kernel Interfaces Manual 						    MFI(4)

NAME
     mfi -- LSI MegaRAID SAS driver

SYNOPSIS
     To compile this driver into the kernel, place the following lines in
     your kernel configuration file:

           device pci
           device mfi

     Alternatively, to load the driver as a module at boot time, place the
     following line in loader.conf(5):

           mfi_load="YES"

DESCRIPTION
     This driver is for LSI's next generation PCI Express SAS RAID
     controllers.  Access to RAID arrays (logical disks) from this driver is
     provided via /dev/mfid? device nodes.  A simple management interface is
     also provided on a per-controller basis via the /dev/mfi? device node.

     The mfi name is derived from the phrase "MegaRAID Firmware Interface",
     which is substantially different than the old "MegaRAID" interface and
     thus requires a new driver.  Older SCSI and SATA MegaRAID cards are
     supported by amr(4) and will not work with this driver.

     Two sysctls are provided to tune the mfi driver's behavior when a
     request is made to remove a mounted volume.  By default the driver will
     disallow any requests to remove a mounted volume.  If the sysctl
     dev.mfi.%d.delete_busy_volumes is set to 1, then the driver will allow
     mounted volumes to be removed.

     A tunable is provided to adjust the mfi driver's behaviour when
     attaching to a card.  By default the driver will attach to all known
     cards with high probe priority.  If the tunable hw.mfi.mrsas_enable is
     set to 1, then the driver will reduce its probe priority to allow mrsas
     to attach to the card instead of mfi.

HARDWARE
     The mfi driver supports the following hardware:

     o   LSI MegaRAID SAS 1078
     o   LSI MegaRAID SAS 8408E
     o   LSI MegaRAID SAS 8480E
     o   LSI MegaRAID SAS 9240
     o   LSI MegaRAID SAS 9260
     o   Dell PERC5
     o   Dell PERC6
     o   IBM ServeRAID M1015 SAS/SATA
     o   IBM ServeRAID M1115 SAS/SATA
     o   IBM ServeRAID M5015 SAS/SATA
     o   IBM ServeRAID M5110 SAS/SATA
     o   IBM ServeRAID-MR10i
     o   Intel RAID Controller SRCSAS18E
     o   Intel RAID Controller SROMBSAS18E

FILES
     /dev/mfid?   array/logical disk interface
     /dev/mfi?    management interface

DIAGNOSTICS
     mfid%d: Unable to delete busy device
             An attempt was made to remove a mounted volume.

SEE ALSO
     amr(4), pci(4), mfiutil(8)

HISTORY
     The mfi driver first appeared in FreeBSD 6.1.

AUTHORS
     The mfi driver and this manual page were written by Scott Long
     <scottl@FreeBSD.org>.

BUGS
     The driver does not support big-endian architectures at this time.

BSD                              July 15, 2013                             BSD
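
Following on from the DESCRIPTION above, applying those knobs on a FreeBSD host would look roughly like this. This is only a sketch: the controller index 0 in the sysctl name is an example, and the echo lines simply append the documented tunables to loader.conf(5).

    # Allow mounted mfi volumes to be removed (per-controller sysctl)
    sysctl dev.mfi.0.delete_busy_volumes=1

    # Load the driver at boot, and let mrsas claim the card instead of mfi
    echo 'mfi_load="YES"' >> /boot/loader.conf
    echo 'hw.mfi.mrsas_enable=1' >> /boot/loader.conf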