12-28-2018
Hi dkmartin,
From the way you describe the layout, I'm going to assume that you're running something like SVC for the management of the LUNs. Are the AIX environments LPARs, standalone machines, or something else entirely?
Managing LUNs across six individual SAN devices is very costly in resources, along with the planning headache when it comes to new system builds - well, unless you have plenty of disk. I currently have two devices - a 1.5 PB VMax and a 250 GB VNX - and even that provides enough confusion.
Wherever possible our approach is to go for a single LUN, but we do have quite a number of systems with large numbers of LUNs (some are in excess of 200) on AIX, Solaris and RHEL. We do have some very old versions on VMware, but they are slowly being decommissioned.
So in reality, if time permits, I would look at load and I/O on a system-by-system basis. Where the results indicate there would be a benefit, replicate the existing model; if the results show a single LUN would be OK, go for the single LUN. If you're using Metro or Global Mirror you may have to delve a bit deeper and look at inter-system bandwidth.
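A rough, hypothetical sketch of that per-system check, assuming you have captured `iostat` disk samples to a file first (the disk names, sample figures, and the tps column position are illustrative, not taken from any real system):

```shell
# Summarise average transfers-per-second per disk from captured iostat
# samples, as one input to the "one big LUN vs many LUNs" decision.
# Assumed line layout (AIX-style): <disk> <%tm_act> <Kbps> <tps> ...
summarise_tps() {
  # $1 = file of captured iostat disk lines
  awk '{ sum[$1] += $4; n[$1]++ }
       END { for (d in sum) printf "%s avg_tps=%.1f\n", d, sum[d]/n[d] }' "$1"
}

# Fabricated sample data for illustration only:
cat > /tmp/iostat.sample <<'EOF'
hdisk0 12.0 512.0 64.0 256 256
hdisk0 14.0 600.0 72.0 300 300
hdisk1 2.0 80.0 10.0 40 40
EOF

summarise_tps /tmp/iostat.sample | sort
# prints:
#   hdisk0 avg_tps=68.0
#   hdisk1 avg_tps=10.0
```

Busy disks (high sustained tps) are the candidates for keeping the existing multi-LUN layout; quiet ones can usually be consolidated.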
Regards
Gull04
hpfc(7D) Devices hpfc(7D)
NAME
hpfc - Agilent fibre channel host bus adapter
SYNOPSIS
PCI pci103c
DESCRIPTION
The hpfc fibre channel host bus adapter is a SCSA compliant nexus driver that supports all Agilent fibre channel host bus adapters, including the HHBA5100x, HHBA5101x, and HHBA5121x models. Agilent host bus adapters support the fibre channel protocol on private fibre channel arbitrated loops and fabrics. The driver supports up to ten host bus adapters, with a maximum of 125 fibre channel devices on each host bus adapter. The hpfc driver supports a maximum of 256 LUNs per target.
The hpfc driver does not support the BIOS Int 13 feature, which enables the booting of an operating system. As a result, you should not
install an operating system on devices attached to the hpfc driver.
CONFIGURATION
The hpfc driver attempts to configure itself using the information in the /kernel/drv/hpfc.conf configuration file.
By default, the driver supports only LUN 0 for each target device. To add multiple LUN support, modify the /kernel/drv/sd.conf file.
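The exact sd.conf entries depend on your targets; one commonly used form (the target and lun values below are illustrative, not a prescription for your configuration) is:

```
# /kernel/drv/sd.conf -- add one line per additional LUN (values illustrative)
name="sd" class="scsi" target=0 lun=1;
name="sd" class="scsi" target=0 lun=2;
```

A reconfiguration is needed before the new entries take effect.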
Before upgrading the hpfc driver, back up the sd.conf file to save customized LUN settings, then use pkgrm(1M) to remove the old version of the driver.
The host bus adapter port is initialized to FL_Port when connected to a fabric switch. To change it to F_Port, add the init_as_nport=1
entry to the hpfc.conf file and reboot the system.
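A hedged sketch of that change, written as a helper that can be pointed at a scratch copy of the file so it can be tried safely (the helper name is hypothetical, and the trailing semicolon follows the usual driver.conf(4) entry syntax, which is an assumption here):

```shell
# Append the init_as_nport entry to an hpfc.conf if it is not already
# present; idempotent, so running it twice adds the line only once.
add_nport_entry() {
  # $1 = path to an hpfc.conf (use a scratch copy when experimenting)
  grep -q 'init_as_nport=1' "$1" 2>/dev/null || echo 'init_as_nport=1;' >> "$1"
}

add_nport_entry /tmp/hpfc.conf.copy

# On the real /kernel/drv/hpfc.conf, the man page then requires a reboot
# for the F_Port setting to take effect:
#   init 6
```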
To conserve system resources, the hpfc driver does not load unless at least one disk drive is attached.
FILES
/kernel/drv/hpfc 32-bit ELF kernel module
/kernel/drv/sparcv9/hpfc 64-bit ELF kernel module
/kernel/drv/hpfc.conf Driver configuration file
/kernel/drv/sd.conf SCSI disk configuration file
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
| ATTRIBUTE TYPE | ATTRIBUTE VALUE |
+-----------------------------+-----------------------------+
|Architecture |x86, SPARC |
+-----------------------------+-----------------------------+
SEE ALSO
luxadm(1M), pkgrm(1M), prtconf(1M), driver.conf(4), attributes(5), ses(7D), ssd(7D)
ANSI X3.272-1996, Fibre Channel Arbitrated Loop (FC-AL),
ANSI X3.269-1996, Fibre Channel Protocol for SCSI (FCP),
ANSI X3.270-1996, SCSI-3 Architecture Model (SAM),
Fibre Channel Private Loop SCSI Direct Attach (FC-PLDA)
SunOS 5.10 10 Oct 2000 hpfc(7D)