ZFS Filesystem
# 22  
Old 08-28-2015
Quote:
Originally Posted by jlliagre
@achenle Yes, the ARC size should not be left unlimited when running an Oracle database. 1GB seems to be quite aggressive with a 32GB server though. That might waste memory and will likely affect overall performance.
Shouldn't impact performance at all. Database IO isn't going to go through the ARC - it'll be either direct IO or synchronous. Log files are generally streamed and forgotten about, so not having cached data from writes there isn't a big deal. And once the cache gets beyond a few tens of MB, the effective filesystem cache hit rate isn't going to change much anyway given the usage patterns on a pure DB server.

In my experience, performance often gets better because there's actually some free RAM on the server, so response to transient demands is a helluva lot faster.

If you want the DB to cache data, create a larger-than-default SGA for that, and create larger buffer and redo log pools in it as needed.

To get the max performance out of an Oracle DB running on Solaris, you really do have to get the ZFS ARC out of the way. (And you also have to be really careful about how your DB job processes behave - you do not want to have your DB trying to start or stop several thousand processes all at the same time...)

(I spent a few years consulting for a customer using multiple large Oracle RAC clusters on SPARC servers - one of my main jobs was getting the best possible performance out of the servers. Oracle on Solaris is as good as it gets for performance and reliability - yes, better than Linux for a lot of reasons - but there are some quirks - and the ARC is one of them.)
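For reference, the classic way to cap the ARC on Solaris 10 and early Solaris 11 is a static tunable in /etc/system (a sketch; the 4 GB value is illustrative and a reboot is required for it to take effect):

```
* /etc/system fragment -- cap the ZFS ARC at 4 GB (value is in bytes)
* 4 GB = 4 * 1024 * 1024 * 1024 = 4294967296
set zfs:zfs_arc_max = 4294967296
```

Pick the cap from observed memory usage, not a rule of thumb; too small a value starves ZFS metadata caching (see below).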
# 23  
Old 08-28-2015
Quote:
Originally Posted by achenle
Shouldn't impact performance at all. Database IO isn't going to go through the ARC - it'll be either direct IO or synchronous.
The metadata will still need to be cached though. Note that zfs_arc_max is being deprecated with Solaris 11.2. The new dynamically tunable user_reserve_hint_pct allows much more flexibility.
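For illustration, on Solaris 11.2 and later the reservation can be adjusted on a live system with mdb (a sketch; the 80% figure is an illustrative value, and Oracle's set_user_reserve.sh script from My Oracle Support is the supported way to apply and persist it):

```
# Hint that ~80% of RAM should be reserved for applications (illustrative
# value); this dynamically shrinks what the ARC is allowed to grow to.
echo "user_reserve_hint_pct/W 0t80" | mdb -kw

# Read back the current setting
echo "user_reserve_hint_pct/D" | mdb -k
```

Unlike zfs_arc_max, this takes effect immediately and can be raised or lowered without a reboot.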
# 24  
Old 09-05-2015
Hi achenle/jlliagre,
Thanks for the feedback, I really appreciate it. The vendor is also proposing to limit the zfs_arc_max parameter. Currently I am using the perfstat script to capture logs during the DB backup to find the culprit. I will share the output here once I have it.
Does zfs_arc_max = 0x0 mean unlimited?
# 25  
Old 09-05-2015
0 means there is no user configured limit.

In that case, the OS limits the ARC size to 75% of the RAM if the machine has less than 4 GB of RAM, and to "total RAM minus 1 GB" otherwise.
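That default rule can be sketched as a small shell function (an illustration of the sizing logic above, not code taken from ZFS; values are in megabytes):

```shell
#!/bin/sh
# Default ARC cap when zfs_arc_max is 0 (no user-configured limit):
#   RAM < 4 GB  -> 75% of RAM
#   otherwise   -> RAM minus 1 GB
default_arc_cap() {
    ram_mb=$1
    if [ "$ram_mb" -lt 4096 ]; then
        echo $(( ram_mb * 3 / 4 ))
    else
        echo $(( ram_mb - 1024 ))
    fi
}

default_arc_cap 32768   # 32 GB server -> 31744 MB cap
```

So on the 32 GB server in this thread, an unlimited ARC may grow to roughly 31 GB, which is why an explicit cap matters for the database.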
# 26  
Old 09-07-2015
Hi Jlliagre,achenle,
I have set zfs_arc_max = 4GB, but when the backup runs the DB still seems to hang and I get the error "WARNING: aiowait timed out 1 times". The memstat output during the backup is below:

Code:
Page Summary                 Pages                MB  %Tot
------------      ----------------  ----------------  ----
Kernel                      291354              2276    7%
ZFS File Data               519224              4056   12%
Anon                       1900341             14846   45%
Exec and libs                13193               103    0%
Page cache                     612                 4    0%
Free (cachelist)              8058                62    0%
Free (freelist)            1461522             11418   35%
Total                      4194304             32768

# 27  
Old 09-07-2015
Aggressively reducing the ARC size likely hurt ZFS performance. 11 GB were unused, i.e. wasted, when that statistic was captured. You should monitor memory usage over a long enough period to figure out the system's peak memory needs.
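One low-effort way to do that (a sketch; the interval and log path are illustrative, and mdb requires root) is to sample ::memstat repeatedly across the backup window:

```
# Append a timestamped memory snapshot every 5 minutes during the backup
while true; do
    date
    echo ::memstat | mdb -k
    sleep 300
done >> /var/tmp/memstat.log
```

Comparing the "Free (freelist)" and "ZFS File Data" columns across samples shows whether the 4 GB cap is ever actually reached, or whether memory sits idle.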
# 28  
Old 09-08-2015
4 GB cache is more than enough. I seriously doubt it impacted performance.

How are the backups being run? And what exactly is emitting the "aiowait timed out" errors?