Full Discussion: SAN vs. Local disk.
Post 303031279 by bakunin on Monday 25th of February 2019, 09:27 AM
Quote:
Originally Posted by ikx
I am in the market looking to purchase a new E950 server and I am trying to decide between using local SSD drives or an SSD-based SAN. The application that will be running on this server is read-intensive, so I am looking for the optimal configuration to support it. There are no other servers or applications that will use this SAN (if I decide to go that route). The deciding factor for me is performance, regardless of the hardware cost (granted, I don't want to pay for something I end up not using). SAN or local SSD (both running RAID 10)? This is really the question I am trying to answer before I pull the trigger and complete this purchase. Any insights from this community are greatly appreciated.
There are a few more points to consider IMHO:

1) Disks (regardless of technology) will malfunction over time and need to be replaced, and there is some effort involved in such a replacement. SAN systems usually have more or less "effortless" disk replacement built in, because they are built to deal with a lot of disks and the chance that one disk fails rises with the number of disks involved. You might want to do a risk calculation based on how often, on average, you expect a disk to fail (there is usually an "MTBF" - "mean time between failures" - figure available), how long you expect the replacement to take, and how much a downtime of that duration will cost. A rough sketch of such a calculation follows below.
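Purely as an illustration (not part of the original post), here is a minimal sketch of that calculation in Python. The MTBF, disk count, repair time and hourly downtime cost are made-up example figures - substitute your own.

Code:
#!/usr/bin/env python3
# Rough risk estimate for disk failures in an array.
# All input figures below are hypothetical examples - replace them with your own.

HOURS_PER_YEAR = 24 * 365

mtbf_hours = 1_500_000          # vendor MTBF per disk (assumed example value)
disk_count = 8                  # disks in the RAID 10 set
repair_hours = 4                # expected time to replace and rebuild one disk
downtime_cost_per_hour = 500.0  # what one hour of downtime costs you

# Expected number of disk failures per year across the whole array.
failures_per_year = disk_count * HOURS_PER_YEAR / mtbf_hours

# If a failure forces downtime (no hot spare, no hot swap), this is the
# expected yearly cost of those outages.
expected_downtime_cost = failures_per_year * repair_hours * downtime_cost_per_hour

print(f"expected disk failures per year : {failures_per_year:.3f}")
print(f"expected downtime cost per year : {expected_downtime_cost:.2f}")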

2) SANs and local disks differ in the way they are attached to the system. Local disks may use SCSI/SAS or the M.2 interface; note that the number of M.2 slots in a server is usually very limited, so you cannot attach many local (SSD-) disks that way. A SAN, on the other hand, may use an FC connection or even several FC connections in parallel (FC drivers allow this for redundancy as well as load balancing). The aggregate bandwidth of such FC connections may far exceed the speed of local disks. On the other hand you will need not only a SAN but also an FC switch (Brocade switches are the most widespread) and the administration ("zoning") will be more complicated. A back-of-the-envelope bandwidth comparison is sketched below.
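To make the bandwidth argument concrete, here is a small back-of-the-envelope comparison (my own illustration, not from the original post). The link speeds and per-disk throughput figures are assumptions; adjust them to the hardware you are actually quoting, and note that protocol overhead is ignored here.

Code:
#!/usr/bin/env python3
# Back-of-the-envelope throughput comparison: multipathed FC vs. local SSDs.
# All figures are assumed example values, not measurements.

def fc_aggregate_mb_s(links: int, gbit_per_link: float) -> float:
    """Raw aggregate MB/s of parallel FC links, ignoring protocol overhead."""
    return links * gbit_per_link * 1000.0 / 8.0

def local_ssd_mb_s(disks: int, mb_s_per_disk: float) -> float:
    """Aggregate sequential-read MB/s of striped local SSDs (idealised)."""
    return disks * mb_s_per_disk

if __name__ == "__main__":
    san_paths = fc_aggregate_mb_s(links=2, gbit_per_link=16)   # dual 16G FC, multipathed
    local     = local_ssd_mb_s(disks=4, mb_s_per_disk=500)     # four SATA SSDs, striped
    print(f"dual 16G FC, multipathed  : ~{san_paths:,.0f} MB/s (raw)")
    print(f"4 local SATA SSDs, striped: ~{local:,.0f} MB/s")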

3) SANs - if set up with this in mind - may add redundancy and thus high availability to the system. Again, it depends on the system, its purpose, etc. how to properly assess the risk of it being unavailable for some amount of time. Calculate how much it would set you back to have the system offline for 1 hour / 1 day / 1 week; this will give you an idea of how much money spent on preventing these kinds of disasters is worthwhile. The sketch below shows one way to put numbers on that.
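Purely as an illustration of that exercise (the figures are invented), a few lines are enough to tabulate outage cost against outage duration and compare it with an assumed price for the extra redundancy:

Code:
#!/usr/bin/env python3
# Tabulate what an outage of a given length would cost.
# Both the hourly cost and the redundancy price are hypothetical examples.

cost_per_hour = 800.0        # lost revenue plus staff time per offline hour
redundancy_price = 25_000.0  # assumed extra cost of the redundant setup
durations = {"1 hour": 1, "1 day": 24, "1 week": 24 * 7}

for label, hours in durations.items():
    outage_cost = hours * cost_per_hour
    verdict = ("cheaper than the redundancy" if outage_cost < redundancy_price
               else "more expensive than the redundancy")
    print(f"{label:>7}: {outage_cost:>10,.2f}  ({verdict})")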

4) SAN systems in themselves are rather expensive. To put that in perspective: they become cheaper and cheaper (in comparison to local disks) the more you virtualise and the more systems share them. A SAN bought for a single system only might be on the expensive side, but with the expectation of adding other systems to it later the cost may still be reasonable. So you may want to rethink your immediate problem in a more global context; the sketch below shows how the per-system cost falls as more hosts share the SAN.
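A tiny amortisation sketch (again with made-up prices) illustrates the point - the per-system cost of a shared SAN drops as more hosts use it, while local SSDs cost roughly the same per server:

Code:
#!/usr/bin/env python3
# Per-system storage cost: shared SAN vs. local SSDs per server.
# All prices are assumptions for illustration only.

san_base_cost = 60_000.0         # SAN array plus FC switches, one-off
local_cost_per_server = 6_000.0  # local SSD RAID 10 per server

print(f"{'servers':>8} {'SAN cost/server':>16} {'local cost/server':>18}")
for servers in (1, 2, 4, 8, 16):
    san_per_server = san_base_cost / servers
    print(f"{servers:>8} {san_per_server:>16,.2f} {local_cost_per_server:>18,.2f}")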

5) Notice that a system optimised for speed also needs an adequate backup solution. For this you again need a disaster scenario plan to estimate what an outage could cost and hence how much the prevention may cost. From that you know how fast a recovery needs to be and therefore which technologies you need to employ to reach that speed. A SAN can also be used for snapshots (perhaps the quickest way of recovering a "point in time") and as a very fast medium to put an online backup on, migrating it only afterwards to slower media like tape. You might want to take that into consideration as well; the sketch below shows how the restore time depends on the medium you restore from.
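To illustrate the recovery-speed side of the plan (my own example, not from the post): given a dataset size and the sustained throughput of the medium you restore from, the achievable recovery time falls out directly. The throughput figures here are rough assumptions.

Code:
#!/usr/bin/env python3
# Estimate restore times from different backup media.
# Dataset size and throughput figures are rough, assumed values.

dataset_gb = 2_000                      # size of the data to restore

media_mb_s = {                          # assumed sustained restore rates
    "SAN snapshot rollback": 2_000.0,
    "disk-to-disk backup":     600.0,
    "LTO tape":                300.0,
}

for medium, mb_s in media_mb_s.items():
    hours = dataset_gb * 1024 / mb_s / 3600
    print(f"{medium:<22}: ~{hours:5.2f} h to restore {dataset_gb} GB")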

I hope this helps.

bakunin
 
