02-25-2019
It all comes down to the homonyms cash & cache.
How much is your budget for cache? That's the key, really. SSD is slightly slower than cache.
For write operations, you have to balance the time to commit the update to real disk (even if it is SSD) between the two. If you pass the update to a SAN, it will respond very quickly to say that you have written it, but it will actually write the data in its own time. The update is cached for the write and you can continue; cache batteries cover a power loss before the data is really written. For local disk, it depends. Does the RAID controller have a good cache allocation, and would it therefore behave in the same way? If not, you (the operating system) must ensure that the write is complete before you proceed (costing CPU system time, I think), and that can, confusingly, make local I/O slower.
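You can see that buffered-vs-committed difference for yourself. A rough sketch with GNU dd (the /tmp path and sizes are just placeholders; conv=fsync is a GNU extension, so this is a Linux-side illustration rather than a Solaris one):

```shell
# Buffered write: dd returns once the data is in the page cache,
# so this mostly measures memory speed.
dd if=/dev/zero of=/tmp/writetest bs=1M count=64

# Committed write: conv=fsync makes dd call fsync() before exiting,
# so the timing includes the commit to the disk (or to the
# controller's battery-backed cache, if it has one).
dd if=/dev/zero of=/tmp/writetest bs=1M count=64 conv=fsync

rm -f /tmp/writetest
```

On a controller with a good write-back cache the two timings converge; on one without, the fsync run shows the real cost of the commit.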
You have, of course, stated that this is a read-intensive server, so the other thing to consider is cache/RAM in the server. The server will fill up with the data you read normally anyway, but if you wish, you could pre-read the data to give it a head start. Beware that you need lots of memory for this, else you will just drop it again. You can simply do a find for the data files you want and cat them to /dev/null so that they get read. Is 512 GB sufficient for your data? You don't say how much you have.
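That pre-read can be a one-liner. A minimal sketch, assuming your data lives under /data (substitute your real path):

```shell
# Walk the data tree and read every regular file once, discarding
# the bytes; the side effect is that the pages land in the OS
# file-system cache, ready for the real workload.
find /data -type f -exec cat {} + > /dev/null
```

The `-exec ... {} +` form batches many files per cat invocation, which is much faster than `-exec cat {} \;` on a large tree.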
I hope that my thoughts help,
Robin
i2o_bs(7D) Devices i2o_bs(7D)
NAME
i2o_bs - Block Storage OSM for I2O
SYNOPSIS
disk@local target id#:a through u
disk@local target id#:a through u raw
DESCRIPTION
The I2O Block Storage OSM abstraction (BSA, which also is referred to as block storage class) layer is the primary interface that Solaris
operating environments use to access block storage devices. A block storage device provides random access to a permanent storage medium.
The i2o_bs device driver uses I2O Block Storage class messages to control the block device and provides the same functionality (ioctls,
for example) that is present in Solaris disk drivers such as cmdk and dadk on x86. The maximum disk size supported by i2o_bs is
the same as what is available on x86.
The i2o_bs driver currently implements version 1.5 of the Intelligent I/O (I2O) specification.
The block files access the disk using the system's normal buffering mechanism and are read and written without regard to physical disk
records. There is also a "raw" interface that provides for direct transmission between the disk and the user's read or write buffer. A
single read or write call usually results in one I/O operation; raw I/O is therefore considerably more efficient when many bytes are
transmitted. The names of the block files are found in /dev/dsk; the names of the raw files are found in /dev/rdsk.
I2O associates each block storage device with a unique ID called a local target id that is assigned by I2O hardware. This information can
be acquired by the block storage OSM through I2O Block Storage class messages. For the Block Storage OSM, nodes are created in
/devices/pci#/pci#, which include the local target ID as one component of the device name that the node refers to. However, the /dev names and
the names in /dev/dsk and /dev/rdsk do not encode the local target ID in any part of the name.
For example, you might have the following:
/devices/ /dev/dsk name
---------------------------------------------------------------
/devices/pci@0,0/pci101e,0@10,1/disk@10:a /dev/dsk/c1d0s0
I/O requests to the disk must have an offset and transfer length that is a multiple of 512 bytes or the driver returns an EINVAL error.
Slice 0 is normally used for the root file system on a disk, slice 1 is used as a paging area (for example, swap), and slice 2 for backing
up the entire fdisk partition for Solaris software. Other slices may be used for usr file systems or system reserved area.
Fdisk partition 0 is used to access the entire disk and is generally used by the fdisk(1M) program.
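The 512-byte alignment rule above can be illustrated with dd, which lets you choose the transfer size explicitly. A sketch only: c1d0s0 is the device name from the example earlier in this page, so substitute your own, and note that odd block sizes against the raw device fail with EINVAL:

```shell
# Read one 512-byte sector from the raw device. bs is a multiple
# of 512, so the driver accepts the request.
dd if=/dev/rdsk/c1d0s0 of=/dev/null bs=512 count=1

# A bs of, say, 100 would be rejected by i2o_bs with EINVAL,
# because both the offset and the transfer length must be
# multiples of 512 bytes.
```

The block device (/dev/dsk/...) has no such restriction, since the system's buffering layer absorbs unaligned requests.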
FILES
/dev/dsk/cndn[s|p]n block device
/dev/rdsk/cndn[s|p]n raw device
where:
cn controller n
dn instance number
sn UNIX system slice n (0-15)
pn fdisk partition (0)
/kernel/drv/i2o_bs i2o_bs driver
/kernel/drv/i2o_bs.conf Configuration file
ATTRIBUTES
See attributes(5) for descriptions of the following attributes:
+-----------------------------+-----------------------------+
|ATTRIBUTE TYPE               |ATTRIBUTE VALUE              |
+-----------------------------+-----------------------------+
|Architecture                 |x86                          |
+-----------------------------+-----------------------------+
SEE ALSO
fdisk(1M), format(1M), mount(1M), lseek(2), read(2), write(2), readdir(3C), vfstab(4), acct.h(3HEAD), attributes(5), dkio(7I)
SunOS 5.10 21 Jul 1998 i2o_bs(7D)