Quote: Originally Posted by jlliagre
@achenle Yes, the ARC size should not be left unlimited when running an Oracle database. 1GB seems to be quite aggressive with a 32GB server though. That might waste memory and will likely affect overall performance.
Shouldn't impact performance at all. Database IO isn't going to go through the ARC - it'll be either direct IO or synchronous. Log files are generally streamed and forgotten about, so not having cached data from writes there isn't a big deal. And once the cache gets beyond a few tens of MB, the effective filesystem cache hit rate isn't going to change much anyway given the usage patterns on a pure DB server.
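If you want to sanity-check that claim on your own box, the ARC counters are exposed via `kstat` on Solaris (`kstat -p zfs:0:arcstats`). Here's a sketch that computes ARC size and hit rate from that output; the numbers below are made-up sample data for illustration, not from a real server:

```shell
# Hypothetical sample of `kstat -p zfs:0:arcstats` output (the kstat
# names are real on Solaris; the values here are invented).
sample='zfs:0:arcstats:size	1073741824
zfs:0:arcstats:hits	900000
zfs:0:arcstats:misses	100000'

# On a live system you'd pipe `kstat -p zfs:0:arcstats` in instead.
printf '%s\n' "$sample" | awk -F'\t' '
  $1 ~ /:size$/   { size = $2 }
  $1 ~ /:hits$/   { hits = $2 }
  $1 ~ /:misses$/ { miss = $2 }
  END {
    # Report ARC footprint in MB and the effective cache hit rate.
    printf "arc_size_mb=%d\n", size / 1048576
    printf "hit_rate=%.1f%%\n", 100 * hits / (hits + miss)
  }'
```

Watch the hit rate while you shrink the cap: on a pure DB server it typically barely moves once the ARC is past a few tens of MB, which is the point above.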
In my experience, performance often gets better because there's actually some free RAM on the server, so response to transient demands is a helluva lot faster.
If you want the DB to cache data, give it a larger-than-default SGA, and size the buffer cache and redo log buffer within it as needed.
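As a rough sketch, that means something like the following in the spfile/init.ora. The parameter names are the standard Oracle ones; the sizes are invented examples for a 32GB box and absolutely not a recommendation for any particular workload:

```
# Example only -- tune to your own workload.
sga_max_size  = 20G
sga_target    = 20G    # let ASMM distribute memory within the SGA
db_cache_size = 12G    # floor for the buffer cache
log_buffer    = 128M   # redo log buffer
```

The point is that the database's own cache is where you spend the RAM, not the filesystem cache underneath it.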
To get the max performance out of an Oracle DB running on Solaris, you really do have to get the ZFS ARC out of the way. (And you also have to be really careful about how your DB job processes behave - you do not want to have your DB trying to start or stop several thousand processes all at the same time...)
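"Getting the ARC out of the way" usually means two things: capping its size in /etc/system, and telling ZFS not to double-cache data blocks the SGA is already caching. A sketch, with a made-up 4GB cap and hypothetical dataset names (`oradata/dbf`, `oradata/redo`):

```
# /etc/system -- cap the ARC (value in bytes; 0x100000000 = 4GB).
# Leave enough headroom for the SGA, PGAs, and the OS itself.
set zfs:zfs_arc_max = 0x100000000

# Per-dataset tuning (run once as root; dataset names are examples):
#   zfs set recordsize=8k oradata/dbf          # match db_block_size
#   zfs set primarycache=metadata oradata/dbf  # don't cache data blocks twice
#   zfs set logbias=throughput oradata/dbf     # datafiles: skip the slog
# Redo logs stay at the default logbias=latency -- they're latency-sensitive.
```

The `zfs_arc_max` change needs a reboot to take effect; the `zfs set` properties apply immediately but only to newly written/read blocks. `primarycache=metadata` on datafiles is somewhat debated, so benchmark it against your own workload before committing.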
(I spent a few years consulting for a customer using multiple large Oracle RAC clusters on SPARC servers - one of my main jobs was getting the best possible performance out of the servers. Oracle on Solaris is as good as it gets for performance and reliability - yes, better than Linux for a lot of reasons - but there are some quirks - and the ARC is one of them.)