Some question about SAN on HP 4400 EVA
Post 302957951 by cjcox on Friday 16th of October 2015 07:06:06 PM
So... generally, for a minimal SAN, as you mentioned, there are three pieces: the storage, the switch, and the host side.

In all of those cases, if the "holes" look like empty rectangular openings, then those ports are missing SFP+ modules. In some cases you can purchase Twinax (direct-attach copper) cables instead of going fiber, in which case the SFP+ ends come attached to the cable. Since you're HP through and through, an HP rep can probably sell you such a cable. It will probably come out cheaper than individual SFP+s plus fiber cabling.

With all that said, the choice of SFP+ is important, as vendors just don't generically accept any SFP+. So make sure you get the right SFP+ for each of your devices. HP should be able to help.
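
If you want to double-check what a Linux host actually sees in an SFP+ cage on the Ethernet/iSCSI side, ethtool can usually dump the module EEPROM. A minimal sketch, assuming the interface is called eth0 and the NIC driver exposes module info (FC HBAs report this through their own vendor tools instead):

    # Vendor, part number and cable/optic type of the plugged-in module
    ethtool -m eth0

    # Negotiated link speed and state against the switch port
    ethtool eth0 | grep -E 'Speed|Link detected'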


And yes, you need a connection from the HBA (host) to the switch and from the storage to the switch. Often there are multiple connections for redundancy (multipath). You'll want multipath if this is for enterprise use. If it's for home, single runs will do for playing around with a SAN.
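
On the Linux host side this usually means dm-multipath. A bare-bones sketch, assuming the device-mapper-multipath package on a systemd distro; a real EVA deployment would also want HP's recommended device settings, so treat this as illustrative only:

    # /etc/multipath.conf -- minimal starting point (illustrative)
    defaults {
        user_friendly_names yes    # mpatha, mpathb, ... instead of raw WWIDs
        find_multipaths     yes    # only claim devices that really have more than one path
    }

    # Bring up the daemon and confirm each LUN shows multiple paths
    systemctl enable --now multipathd
    multipath -ll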

If going "discount" you probably would have done better with something more generic (like Nexsan) vs. an EVA. That way you have less problems with SFP's. My favorite is Qlogic switches, HBAs and Nexsan. But I know that Qlogic isn't going to popular anymore.

Today, I'd go with Arista 10GBase-T switches and run all-copper Cat-6A for iSCSI. It's more generic and costs a lot less overall. But that's if I'm defining something new for the enterprise. If FC, then my choice (in the recent past) would have been QLogic HBAs, a QLogic switch (at least 8 Gbit), and Nexsan storage. I'm a bit frustrated with the FC world right now, mainly because of all the SFP+ lock-in, though I still like it better than iSCSI (better performing, easier to work with).
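
On the initiator side, iSCSI is pretty painless these days with open-iscsi. A rough sketch, assuming a modern Linux initiator (iscsiadm) rather than the older linux-iscsi driver whose man page happens to be appended at the bottom of this page; the IP and IQN are placeholders:

    # Discover targets offered by the array's portal
    iscsiadm -m discovery -t sendtargets -p 192.168.10.50

    # Log in to one of the discovered targets
    iscsiadm -m node -T iqn.2015-10.com.example:array1 -p 192.168.10.50 --login

    # The LUNs then show up as ordinary SCSI disks
    lsscsi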

FCoE? It suffers from too many of the same problems as iSCSI. To me the two are just about equal, though in theory FCoE is easier to set up than iSCSI.

And yes, you can go straight from the storage to the host HBA. You don't have to have a SAN fabric in between.
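
If you do go FC, direct-attached or switched, the quickest way to confirm the host end is up (on Linux, anyway) is the fc_host entries in sysfs; the host numbers vary per machine:

    grep . /sys/class/fc_host/host*/port_state   # want "Online"
    grep . /sys/class/fc_host/host*/speed        # negotiated link speed
    grep . /sys/class/fc_host/host*/port_name    # the WWPN you zone/present to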
 

ISCSID(8)                      System Manager's Manual                      ISCSID(8)

NAME
       iscsid - establish iSCSI connections

SYNOPSIS
       iscsid [ -b bindingfile ] [ -d ] [ -f configfile ] [ -l basedir ] [ -m mode ] [ -n ]

DESCRIPTION
       iscsid establishes connections with iSCSI targets defined in /etc/iscsi.conf. Once the Linux iSCSI driver is activated, a discovery process for iSCSI storage devices will proceed as follows:

       - The iSCSI daemon requests available iSCSI targets from the iSCSI target, and passes the information discovered to the iSCSI kernel module.
       - The iSCSI kernel module establishes connections to the targets.
       - Linux queries targets for device information.
       - Linux creates a mapping from SCSI device nodes to iSCSI targets.

       iscsid should be started after networking is configured and stopped after all iSCSI devices have been unmounted. Warning: Data corruption can occur if you do not unmount iSCSI devices before disabling network interfaces!

DEVICE NAMES
       Because Linux assigns SCSI device nodes dynamically whenever a SCSI logical unit is detected, the mapping from device nodes (e.g. /dev/sda, /dev/sdb) to iSCSI targets and logical units may vary. Variations in process scheduling and network delay may result in iSCSI targets being mapped to different SCSI device nodes every time the driver is started. Because of this variability, configuring applications or operating system utilities to use the standard SCSI device nodes to access iSCSI devices may result in SCSI commands being sent to the wrong target or logical unit. To provide a more reliable namespace, the iSCSI driver will scan the system to determine the mapping from SCSI device nodes to iSCSI targets, and then create a tree of directories and symbolic links under /dev/iscsi to make it easier to use a particular iSCSI target's logical units.

TARGET BINDINGS
       The iSCSI driver automatically maintains a bindings file, /var/iscsi/bindings. This file contains persistent bindings to ensure that the same iSCSI bus and target id number are used for every iSCSI session to a particular iSCSI TargetName, no matter how many times the driver is restarted. This feature ensures that the SCSI numbers in the device symlinks described above will always map to the same iSCSI target. Note that because of the way Linux dynamically allocates SCSI device nodes as SCSI devices are found, the driver does not and cannot ensure that any particular SCSI device node (e.g. /dev/sda) will always map to the same iSCSI TargetName. The symlinks described in the section on Device Names are intended to provide a persistent device mapping for use by applications and fstab files, and should be used instead of direct references to particular SCSI device nodes. If the bindings file grows too large, lines for targets that no longer exist may be manually removed by editing the file. Manual editing should not normally be needed, since the driver can maintain up to 65535 different bindings.

OPTIONS
       -b bindingfile
              Specify an alternative bindings file instead of /var/iscsi/bindings, which is the default.

       -d     Turns on debug mode. Each occurrence of -d will increment the debug level by one. The default is zero (off).

       -f configfile
              Specify an alternative configuration file instead of /etc/iscsi.conf, which is the default.

       -l basedir
              Specify the base directory under which to build a tree of directories containing symlinks to SCSI device nodes, in a manner similar to the devfs Linux kernel option. Using these symlinks hides variations in the mapping from SCSI device nodes to SCSI device id numbers.

       -m mode
              Specify the directory permission mode (in octal) to use when creating directories.

       -n     Avoid auto-backgrounding.

       -v     Print version and exit.

SIGNALS
       iscsid reacts to a set of signals. You may easily send a signal to iscsid using the following:

              kill -SIGNAL `cat /var/run/iscsid.pid`

       SIGTERM
              The daemon and all of its children will die.

       SIGHUP
              Sent to the main daemon process, this will restart all discovery processes and reprobe LUNs on all targets. iscsid and all of its children will die after shutting down all of the kernel's iSCSI sessions.

       SIGCHLD
              Wait for children.

NOTES
       The iSCSI Driver for Linux provides IP access to a maximum of sixteen remote SCSI targets. Each target will be probed for up to 256 LUNs, until the Linux kernel's limit of SCSI devices has been reached. The iSCSI drivers, README files, and example configuration files are available on the Linux-iSCSI homepage at http://linux-iscsi.sourceforge.com/

FILES
       /etc/iscsi.conf        target address and LUN configuration
       /var/run/iscsi.pid     the process id of the running daemon
       /var/iscsi/bindings    persistent bus and target id bindings for iSCSI TargetNames
       /proc/scsi/iscsi       information about iSCSI devices
       /dev/iscsi             a directory tree containing symlinks to iSCSI device nodes

SEE ALSO
       iscsi.conf(5)

$Revision: 1.8 $               $Date: 2002/09/20 19:27:32 $                 ISCSID(8)
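
The man page above keeps pointing at /etc/iscsi.conf for its target definitions. As a purely illustrative sketch for that old driver (the DiscoveryAddress directive and the address are assumptions on my part; iscsi.conf(5) is the authority), a minimal file might be just:

    # /etc/iscsi.conf -- hypothetical minimal example, verify against iscsi.conf(5)
    # Discover and log in to every target advertised by this portal:
    DiscoveryAddress=192.168.10.50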