10-16-2015
So... generally, for a minimal SAN, as you mentioned, there are three parts: the storage, the switch, and the host side.
In all of those cases, if the "holes" look like empty rectangular slots, then those slots are missing SFP+ modules. In some cases you can purchase Twinax cables instead of going fiber, in which case the SFP+ ends come attached to the cable. Since you're HP through and through, an HP rep can probably sell you such a cable. It will probably come out cheaper than individual SFP+ modules plus fiber cabling.
With all that said, the SFP+ choice is important, as vendors don't generically accept just any SFP+. So make sure you get the right SFP+ modules for your devices. HP should be able to help.
And yes, you need a connection from the HBA (host) to the switch and from the storage to the switch. Often multiple connections for redundancy (multipath). You'll want multipath if this is for enterprise use. If it's for home, then single runs will do for playing around with a SAN.
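To make the multipath idea concrete, here is a minimal sketch of a Linux device-mapper multipath configuration. This is illustrative only, not from the thread: the blacklist pattern is a placeholder you would adapt to your own local disks.

```
# /etc/multipath.conf -- minimal sketch, illustrative values only.
defaults {
    # Use friendly names like mpatha instead of raw WWIDs.
    user_friendly_names yes
    # Spread I/O across all available paths to each LUN.
    path_grouping_policy multibus
}
blacklist {
    # Placeholder: exclude the local boot disk from multipath handling.
    devnode "^sda$"
}
```

With two runs cabled (host-to-switch and storage-to-switch on each fabric), the multipath layer presents one device per LUN and survives a single path failure.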
If going "discount", you probably would have done better with something more generic (like Nexsan) vs. an EVA. That way you'd have fewer problems with SFPs. My favorite combination is QLogic switches, QLogic HBAs, and Nexsan storage. But I know QLogic isn't as popular anymore.
Today, I'd go with Arista 10GBase-T switches and all-copper Cat 6a cabling for iSCSI. More generic, and it costs a lot less overall. But that's if I'm designing something new for the enterprise. If FC, then my choice (in the recent past) would have been QLogic HBAs, a QLogic switch (at least 8 Gbit), and Nexsan storage. I'm a bit frustrated by the FC world right now, mainly because of all the SFP+ lock-in, though I like FC better than iSCSI (better performing, easier to work with).
FCoE? It suffers from too many of the same problems as iSCSI. To me the two are just about equal, though FCoE is theoretically easier to set up than iSCSI.
And yes, you can go straight from the storage to the host HBA. You don't have to have a SAN.
LEARN ABOUT FREEBSD
iscsi
ISCSI(4) BSD Kernel Interfaces Manual ISCSI(4)
NAME
iscsi -- iSCSI initiator
SYNOPSIS
To compile this driver into the kernel, place the following line in the kernel configuration file:
device iscsi
Alternatively, to load the driver as a module at boot time, place the following line in loader.conf(5):
iscsi_load="YES"
DESCRIPTION
The iscsi subsystem provides the kernel component of an iSCSI initiator. The initiator is the iSCSI client, which connects to an iSCSI target, providing local access to a remote block device. The userland component is provided by iscsid(8), and both the kernel and userland are configured using iscsictl(8). The iscsi subsystem is responsible for implementing the "Full Feature Phase" of the iSCSI protocol.
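By way of illustration (this sketch is not part of the manual page text; the portal address and target IQN are placeholders), bringing up an initiator session from the FreeBSD side typically looks like:

```
# Load the initiator driver and start the userland daemon.
kldload iscsi
service iscsid onestart

# Attach to a target: -p gives the portal, -t the target IQN (placeholders).
iscsictl -A -p 10.0.0.1 -t iqn.2012-06.com.example:target0

# List sessions and their state; a connected LUN appears as /dev/da*.
iscsictl -L
```

Once the session reaches Full Feature Phase, the remote LUN can be partitioned and mounted like any local disk.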
SYSCTL VARIABLES
The following variables are available as both sysctl(8) variables and loader(8) tunables:
kern.iscsi.ping_timeout
The number of seconds to wait for the target to respond to a NOP-Out PDU. In the event that there is no response within that time
the session gets forcibly restarted.
kern.iscsi.iscsid_timeout
The number of seconds to wait for iscsid(8) to establish a session. After that time iscsi will abort and retry.
kern.iscsi.login_timeout
The number of seconds to wait for a login attempt to succeed. After that time iscsi will abort and retry.
kern.iscsi.maxtags
The maximum number of outstanding IO requests.
kern.iscsi.fail_on_disconnection
Controls the behavior after an iSCSI connection has been dropped due to network problems. When set to 1, a dropped connection causes
the iSCSI device nodes to be destroyed. After reconnecting, they will be created again. By default, the device nodes are left
intact. While the connection is down all input/output operations are suspended, to be retried after the connection is reestablished.
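For example, the tunables above can be set persistently in the usual configuration files (the values shown are illustrative, not recommendations from the manual):

```
# /etc/sysctl.conf -- illustrative values only
kern.iscsi.ping_timeout=5
kern.iscsi.fail_on_disconnection=1
```

Setting fail_on_disconnection=1 trades device-node stability for faster failure detection, which can suit multipath setups where another path takes over.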
SEE ALSO
iscsi.conf(5), iscsictl(8), iscsid(8)
HISTORY
The iscsi subsystem first appeared in FreeBSD 10.0.
AUTHORS
The iscsi subsystem was developed by Edward Tomasz Napierala <trasz@FreeBSD.org> under sponsorship from the FreeBSD Foundation.
BSD                             September 11, 2014                             BSD