08-05-2010
Cannot see the IBM SAN storage
Hi all,
I recently migrated the server's storage from EMC to an IBM SAN.
After the configuration, the IBM side can see the server's HBA port, and a LUN was successfully assigned to the server.
But when I go to the server and restart it, then check with the "format" command, I don't see any IBM SAN disk.
Is there anything else I need to configure on the server?
Kindly advise =)
Thanks
---------- Post updated at 12:02 PM ---------- Previous update was at 11:21 AM ----------
I think the following step solved my problem:
devfsadm -Cv
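For anyone hitting the same symptom, the usual Solaris-side sequence after a new LUN is mapped is roughly the following (a sketch only; controller names and output will differ per system):

```shell
# Ask the HBA framework to list/attach newly presented devices
cfgadm -al

# Rebuild the /dev and /devices trees:
#   -C removes stale (dangling) device links, -v prints each action
devfsadm -Cv

# The new LUN should now show up in the disk list
echo | format
```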
Quote:
Originally Posted by
kingston
In my case, I have an IBM ESS Shark 800 storage array and multiple OS clients: Solaris 5.8, Solaris 5.10, and RHEL 5. A host is attached to the storage, and it provides a graphical utility to create a LUN after new hard disks are added. Once the LUN is created, I assign it to the host, identified by its WWPN (not WWNN), IP address, and hostname. Then, from Solaris, running "devfsadm -Cv" makes the LUN visible.
This is the procedure I follow at my site. It will vary from vendor to vendor, but I hope it gives an idea of how to assign a LUN to a client.
How do I configure SAN storage for a server?
9 More Discussions You Might Find Interesting
1. AIX
To summarize the problem: the "IBM FAStT Storage Manager Client v8" shows that our disk farm is arranged into 6 logical drives, each in a RAID 5 configuration. The software also shows that 5 of the 6 logical drives (from the disk farm) are in an error state: "Failed Logical Drive - Drive Failure".... (1 Reply)
Discussion started by: aix-olympics
2. AIX
Hello,
Does anyone know how to copy SAN storage logical disks using the IBM TotalStorage software?
I have a 200GB SAN logical disk mounted on my AIX LPAR_1 via fibre channel adapter fcs0.
I would like to make an exact copy of the SAN logical disk from IBM TotalStorage and mount it on AIX LPAR_2
... (4 Replies)
Discussion started by: filosophizer
3. AIX
Hello,
I have AIX 6.1 with TL 4, and it is connected to an IBM SAN storage DS4700.
After assigning some disks from the SAN to AIX, I can see the disks in AIX as:
hdisk2 Available 05-00-02 MPIO Other DS4K Array Disk
hdisk3 Available 05-00-02 MPIO Other DS4K Array Disk
But it should... (0 Replies)
Discussion started by: filosophizer
4. UNIX for Dummies Questions & Answers
Hi guys,
I installed CentOS 5.5 (local disk). I am using 2 HBAs.
Now I have mapped 5 LUNs from a storage array.
I will be using LVM.
Just to test, I assigned a LUN. I've read that I have to use multipathing so CentOS does not see the LUN twice.
I enabled mdmpd and multipathd...
Anything else I should do?
... (8 Replies)
Discussion started by: kopper
5. AIX
Hello,
I have an IBM SAN storage DS4100, and one of the controller's cache batteries is dead. Suddenly performance has degraded, and access to the SAN disks (reading and writing) has become very slow.
My query: replacing the battery will take 6 days, so in the meantime what are the ways... (1 Reply)
Discussion started by: filosophizer
6. AIX
Can anyone recommend a good book on SAN storage basics and how it communicates with an AIX server? (1 Reply)
Discussion started by: NycUnxer
7. AIX
Hello,
I have a DS4000 IBM SAN storage array (aka FAStT storage).
One of my disks failed, and I had a hot-spare disk covering all the arrays. When the disk failed, the hot-spare disk immediately took over for it (see the JPEG in the attachment).
My question: How can I make the hot-spare... (1 Reply)
Discussion started by: filosophizer
8. AIX
Has anyone tried SAN-to-SAN mirroring on IBM DS SAN storage?
The DS5020 mentions Enhanced Remote Mirror for multi-LUN applications.
I wonder if Oracle high availability can be set up using the Remote Mirror option of the SAN? (1 Reply)
Discussion started by: filosophizer
9. AIX
Hi,
This is a follow-up to the post https://www.unix.com/aix/233361-san-disk-appearing-double-aix.html
When I connected the pSeries machine's HBA card (dual port) directly to the SAN storage DS4300, I was able to see the host port adapter WWN numbers, although I was getting this message... (2 Replies)
Discussion started by: filosophizer
LEARN ABOUT NETBSD
bioctl
BIOCTL(8) BSD System Manager's Manual BIOCTL(8)
NAME
bioctl -- RAID management interface
SYNOPSIS
bioctl device command [arg [...]]
DESCRIPTION
RAID device drivers which support management functionality can register their services with the bio(4) driver. bioctl then can be used to
manage the RAID controller's properties.
COMMANDS
The following commands are supported:
show [disks | volumes]
Without any argument by default bioctl will show information about all volumes and the logical disks used on them. If
disks is specified, only information about physical disks will be shown. If volumes is specified, only information about
the volumes will be shown.
alarm [disable | enable | silence | test]
Control the RAID card's alarm functionality, if supported. By default if no argument is specified, its current state
will be shown. Optionally the disable, enable, silence, or test arguments may be specified to enable, disable, silence,
or test the RAID card's alarm.
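For instance, to query and then silence a sounding alarm on a controller (the controller name arcmsr0 follows the examples further down and is illustrative):

```shell
# Show the alarm's current state (no argument)
$ bioctl arcmsr0 alarm

# Silence the current alarm without disabling it for future events
$ bioctl arcmsr0 alarm silence
```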
blink start channel:target.lun | stop channel:target.lun
Instruct the device at channel:target.lun to start or cease blinking, if there's ses(4) support in the enclosure.
hotspare add channel:target.lun | remove channel:target.lun
Create or remove a hot-spare drive at location channel:target.lun.
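For example, to designate the disk at channel 0, target 4, LUN 0 as a hot-spare and later return it to normal use (the location is illustrative):

```shell
# Add a hot-spare at channel 0, target 4, lun 0
$ bioctl arcmsr0 hotspare add 0:4.0

# Remove the same hot-spare designation
$ bioctl arcmsr0 hotspare remove 0:4.0
```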
passthru add DISKID channel:target.lun | remove channel:target.lun
Create or remove a pass-through device. The DISKID argument specifies the disk that will be used for the new device, and
it will be created at the location channel:target.lun. NOTE: Removing a pass-through device that has a mounted
filesystem will lead to undefined behaviour.
check start VOLID | stop VOLID
Start or stop consistency volume check in the volume with index VOLID. NOTE: Not many RAID controllers support this
feature.
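For example, to start a consistency check on the volume with index 0 and later stop it (assuming the controller supports the feature):

```shell
# Begin a consistency check of volume 0
$ bioctl arcmsr0 check start 0

# Abort the running check on volume 0
$ bioctl arcmsr0 check stop 0
```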
create volume VOLID DISKIDs [SIZE] STRIPE RAID_LEVEL channel:target.lun
Create a volume at index VOLID. The DISKIDs argument will specify the first and last disk, i.e.: 0-3 will use the disks
0, 1, 2, and 3. The SIZE argument is optional and may be specified if not all available disk space is wanted (also
dependent of the RAID_LEVEL). The volume will have a stripe size defined in the STRIPE argument and it will be located
at channel:target.lun.
remove volume VOLID channel:target.lun
Remove a volume at index VOLID and located at channel:target.lun. NOTE: Removing a RAID volume that has a mounted
filesystem will lead to undefined behaviour.
EXAMPLES
The following command, executed from the command line, shows the status of the volumes and its logical disks on the RAID controller:
$ bioctl arcmsr0 show
Volume Status Size Device/Label RAID Level Stripe
=================================================================
0 Building 468G sd0 ARC-1210-VOL#00 RAID 6 128KB 0% done
0:0 Online 234G 0:0.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
0:1 Online 234G 0:1.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
0:2 Online 234G 0:2.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
0:3 Online 234G 0:3.0 noencl <WDC WD2500YS-01SHB1 20.06C06>
To create a RAID 5 volume on the SCSI 0:15.0 location on the disks 0, 1, 2, and 3, with stripe size of 64Kb on the first volume ID, using all
available free space on the disks:
$ bioctl arcmsr0 create volume 0 0-3 64 5 0:15.0
To remove the volume 0 previously created at the SCSI 0:15.0 location:
$ bioctl arcmsr0 remove volume 0 0:15.0
SEE ALSO
arcmsr(4), bio(4), cac(4), ciss(4), mfi(4)
HISTORY
The bioctl command first appeared in OpenBSD 3.8; it was rewritten for NetBSD 5.0.
AUTHORS
The bioctl interface was written by Marco Peereboom <marco@openbsd.org> and was rewritten with multiple features by
Juan Romero Pardines <xtraeme@NetBSD.org>.
BSD
March 16, 2008 BSD