Full Discussion: SAN vs. Local disk.
Post 303031279 by bakunin in UNIX for Beginners Questions & Answers, Monday 25th of February 2019, 09:27:32 AM
Quote:
Originally Posted by ikx
I am in the market looking to purchase a new E950 server and I am trying to decide between using local SSD drives or an SSD-based SAN. The application that will be running on this server is read-intensive, so I am looking for the optimal configuration to support it. There are no other servers or applications that will use this SAN (if I decide to go that route). The deciding factor for me is performance, regardless of hardware cost (granted, I don't want to pay for something I end up not using). SAN or local SSD (both running RAID 10)? This is really the question I am trying to answer before I pull the trigger and complete this purchase. Any insights from this community are greatly appreciated.
There are a few more points to consider IMHO:

1) Disks (regardless of technology) will malfunction over time and need to be replaced, and there is some effort involved in such a replacement. SAN systems usually have more or less "effortless" disk replacement built in, because they are designed to deal with a lot of disks, and the chance that one disk malfunctions rises with the number of disks involved. You might want to do a risk calculation based on how often, on average, you expect a disk to fail (an "MTBF" - "mean time between failures" - figure is usually available), how long you expect the replacement to take, and how much a downtime of that duration will cost.
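As a rough sketch of such a risk calculation - note that the MTBF, disk count, and cost figures below are made-up assumptions for illustration, not vendor data:

```python
# Rough disk-failure risk estimate from MTBF. All figures are
# illustrative assumptions, not vendor data.

MTBF_HOURS = 1_500_000        # assumed MTBF of a single SSD
NUM_DISKS = 8                 # assumed disks in the RAID-10 set
HOURS_PER_YEAR = 24 * 365

# Expected failures per year across the whole array:
# failure rate of one disk (1/MTBF) times the number of disks.
failures_per_year = NUM_DISKS * HOURS_PER_YEAR / MTBF_HOURS

REPLACEMENT_HOURS = 2         # assumed time to swap and rebuild a disk
DOWNTIME_COST_PER_HOUR = 500  # assumed cost if the swap means downtime

expected_annual_cost = failures_per_year * REPLACEMENT_HOURS * DOWNTIME_COST_PER_HOUR

print(f"expected failures/year: {failures_per_year:.3f}")
print(f"expected annual downtime cost: {expected_annual_cost:.2f}")
```

The point is not the exact numbers but the method: multiply the per-disk failure rate by the disk count, then by the cost of each incident, and compare that figure against what the "effortless" replacement of a SAN would save you.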

2) SANs and local disks differ in the way they are attached to the system. Local disks may use SCSI or the M.2 interface; notice that only a limited number of local (SSD) disks can be attached via M.2. A SAN, on the other hand, may use an FC connection or even several FC connections in parallel (FC drivers allow this for redundancy as well as load balancing). The aggregate capacity of such FC connections may far exceed the speed of local disks. On the other hand, you will need not only the SAN but also an FC switch (Brocade is the most widespread), and the administration ("zoning") will be more complicated.
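To make the bandwidth comparison concrete - the link count is a made-up assumption, and the speeds are nominal line rates rather than achievable throughput:

```python
# Nominal aggregate bandwidth of parallel FC links versus one local
# M.2 slot. Link counts and speeds are illustrative assumptions.

FC_LINK_GBPS = 16    # assumed 16G FC links (nominal line rate)
NUM_FC_LINKS = 4     # assumed paths used in parallel (multipathing)

fc_aggregate_gbps = FC_LINK_GBPS * NUM_FC_LINKS

# A PCIe 3.0 x4 M.2 slot carries roughly 32 Gbit/s nominally.
M2_PCIE3_X4_GBPS = 32

print(f"FC aggregate:         {fc_aggregate_gbps} Gbit/s")
print(f"single M.2 (x4 Gen3): {M2_PCIE3_X4_GBPS} Gbit/s")
```

Real throughput depends on protocol overhead, the array's back end, and queue depths, but the arithmetic shows why several parallel FC paths can outpace a single local interface.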

3) SANs - if set up with this in mind - may add redundancy and thus high availability to the system. Again, it depends on the system, its purpose, etc. to properly calculate the risk of it failing for some amount of time. Calculate how much it would set you back to have the system offline for 1 hour / 1 day / 1 week, and this will give you an idea of how much money spent on preventing these kinds of disasters is worthwhile.
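The outage-cost table suggested above can be sketched in a few lines - the hourly figure is an illustrative assumption, not a real business number:

```python
# Downtime cost for the outage durations mentioned above. The hourly
# figure is an illustrative assumption, not a real business number.

COST_PER_HOUR = 1_000  # assumed cost per hour of the system being offline

durations_h = {"1 hour": 1, "1 day": 24, "1 week": 24 * 7}
costs = {label: COST_PER_HOUR * h for label, h in durations_h.items()}

for label, cost in costs.items():
    print(f"{label:>7}: {cost:>9,}")
```

Whatever figure you plug in, the week-long outage usually dwarfs the price difference between the storage options, which is the argument for paying for redundancy up front.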

4) SAN systems by themselves are rather expensive. To alleviate this, they become cheaper and cheaper (in comparison to local disks) the more you virtualise and the more systems use them. So a plan to buy a SAN for a single system only might be on the expensive side, but with the expectation of adding other systems later, the costs may still be reasonable. You may want to rethink your immediate problem in a more global context.
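The amortisation effect can be illustrated with a small calculation - all prices here are made-up assumptions:

```python
# Amortising a shared SAN over a growing number of attached systems.
# All prices are illustrative assumptions, not quotes.

SAN_BASE_COST = 80_000            # assumed SAN + FC switch + licences
LOCAL_SSD_COST_PER_HOST = 12_000  # assumed local RAID-10 SSD set per host

per_host_san_cost = {hosts: SAN_BASE_COST / hosts for hosts in (1, 4, 8)}

for hosts, cost in per_host_san_cost.items():
    print(f"{hosts} host(s): SAN {cost:,.0f}/host vs local {LOCAL_SSD_COST_PER_HOST:,}/host")
```

With one host the SAN looks absurdly expensive; somewhere between a handful of hosts and a rack of them, the per-host cost crosses below the local-disk price.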

5) Notice that a system optimised for speed also needs an adequate backup solution. For this you likewise need a disaster-scenario plan to estimate what an outage could cost and hence how much the prevention may cost. Then you know how fast a recovery needs to be and therefore which technologies you need to employ to get that speed. A SAN can also be used for snapshots (perhaps the quickest way of recovering a "point in time") and as a very fast medium to put an online backup on, migrating it only later to slower media like tapes and similar solutions. You might want to take that into consideration as well.
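A back-of-the-envelope restore-time estimate makes the recovery-speed argument concrete - the data size and throughput figures are illustrative assumptions:

```python
# Restore-time estimate for different backup media. Data size and
# throughput figures are illustrative assumptions.

DATA_MB = 10 * 1_000_000  # assumed 10 TB of data to restore

media_mb_per_s = {
    "disk-to-disk restore": 800,   # assumed fast online copy from SAN
    "LTO tape restore":     300,   # assumed sustained tape throughput
}

restore_hours = {name: DATA_MB / rate / 3600 for name, rate in media_mb_per_s.items()}

for name, hours in restore_hours.items():
    print(f"{name:>22}: {hours:5.1f} h")
```

A snapshot rollback on the SAN itself would be faster still, since no bulk copy is needed; working backwards from your tolerable recovery time tells you which of these tiers you must have.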

I hope this helps.

bakunin
 

DMC(1)                                                                  DMC(1)

NAME
     dmc - controls the Disk Mount Conditioner

SYNOPSIS
     dmc start mount [profile-name|profile-index [-boot]]
     dmc stop mount
     dmc status mount [-json]
     dmc show profile-name|profile-index
     dmc list
     dmc select mount profile-name|profile-index
     dmc configure mount type access-time read-throughput write-throughput [ioqueue-depth maxreadcnt maxwritecnt segreadcnt segwritecnt]
     dmc help | -h

DESCRIPTION
     dmc(1) configures the Disk Mount Conditioner. The Disk Mount Conditioner is a kernel-provided service that can degrade the disk I/O being issued to specific mount points, providing the illusion that the I/O is executing on a slower device. It can also cause the conditioned mount point to advertise itself as a different device type, e.g. the disk type of an SSD could be set to an HDD. This behavior consequently changes various parameters such as read-ahead settings, disk I/O throttling, etc., which normally behave differently depending on the underlying device type.

COMMANDS
     Common command parameters:
     o mount - the mount point to be used in the command
     o profile-name - the name of a profile as shown in dmc list
     o profile-index - the index of a profile as shown in dmc list

     dmc start mount [profile-name|profile-index [-boot]]
          Start the Disk Mount Conditioner on the given mount point with the current settings (from dmc status) or the given profile, if provided. Optionally configure the profile to remain enabled across reboots, if -boot is supplied.

     dmc stop mount
          Disable the Disk Mount Conditioner on the given mount point. Also disables any settings that persist across reboot via the -boot flag provided to dmc start, if any.

     dmc status mount [-json]
          Display the current settings (including on/off state), optionally as JSON.

     dmc show profile-name|profile-index
          Display the settings of the given profile.

     dmc list
          Display all profile names and indices.

     dmc select mount profile-name|profile-index
          Choose a different profile for the given mount point without enabling or disabling the Disk Mount Conditioner.

     dmc configure mount type access-time read-throughput write-throughput [ioqueue-depth maxreadcnt maxwritecnt segreadcnt segwritecnt]
          Select custom parameters for the given mount point rather than using the settings provided by a default profile. See dmc list for example parameter settings for various disk presets.
          o type - 'SSD' or 'HDD'. The type determines how various system behaviors like disk I/O throttling and read-ahead algorithms affect the issued I/O. Additionally, choosing 'HDD' will attempt to simulate seek times, including drive spin-up from idle.
          o access-time - latency in microseconds for a single I/O. For SSD types this latency is applied exactly as specified to all I/O. For HDD types, the latency scales based on a simulated seek time (thus making the access-time the maximum latency or seek penalty).
          o read-throughput - integer specifying megabytes-per-second maximum throughput for disk reads
          o write-throughput - integer specifying megabytes-per-second maximum throughput for disk writes
          o ioqueue-depth - maximum number of commands that a device can accept
          o maxreadcnt - maximum byte count per read
          o maxwritecnt - maximum byte count per write
          o segreadcnt - maximum physically disjoint segments processed per read
          o segwritecnt - maximum physically disjoint segments processed per write

     dmc help | -h
          Display help text.

EXAMPLES
     dmc start / '5400 HDD'
          Turn on the Disk Mount Conditioner for the boot volume, acting like a 5400 RPM hard drive.

     dmc configure /Volumes/ExtDisk SSD 100 100 50
          Configure an external disk to use custom parameters to degrade performance as if it were a slow SSD with 100 microsecond latencies, 100MB/s read throughput, and 50MB/s write throughput.

IMPORTANT
     The Disk Mount Conditioner is not a 'simulator'. It can only degrade (or 'condition') the I/O such that a faster disk device behaves like a slower device, not vice versa. For example, a 5400 RPM hard drive cannot be conditioned to act like an SSD that is capable of a higher throughput than the theoretical limitations of the hard disk. In addition to running dmc stop, rebooting is also a sufficient way to clear any existing settings and disable the Disk Mount Conditioner on all mount points (unless started with -boot).

SEE ALSO
     nlc(1)

                                 January 2018                           DMC(1)
Unix & Linux Forums Content Copyright 1993-2022. All Rights Reserved.