ZFS Filesystem


 
# 29  
Old 09-08-2015
Quote:
Originally Posted by achenle
4 GB cache is more than enough. I seriously doubt it impacted performance.
Well, the 4 GB ARC is now full, but while it was uncapped it used to hold more than three times as much data (12.9 GB).

It isn't possible to know how much this cache-size reduction affects performance without at least the hit and miss statistics, but if misses have increased substantially, the performance impact might be significant.
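On Solaris those counters are exposed through kstat; a quick way to sample them (zfs:0:arcstats is the usual statistics instance, and the hit rate should be computed from the deltas between two samples rather than the raw cumulative counters):
Code:
# current ARC size and target size
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c
# cumulative hits and misses; hit rate = hits / (hits + misses)
kstat -p zfs:0:arcstats:hits zfs:0:arcstats:misses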
# 30  
Old 09-09-2015
Most of the ZFS ARC on Oracle DB servers is log file data that will never be read back. It'll just sit in the ARC until something else needs the RAM - there's no point in caching that data. I bet the cache hit rate is still > 99%.

Brendan's blog » Activity of the ZFS ARC

Anything over 95% doesn't really improve performance.

I'm thinking that however the OP is doing backups, it is causing ZFS to block I/O for a while, so the DB's aio_wait() calls time out. Maybe the backup process takes a snapshot of the ZFS file system while the DB is active and then uses zfs send on the snapshot? (Although that alone really won't produce a viable backup of an Oracle DB....)
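If it is the snapshot-and-send route, the pattern would look roughly like this (the pool/filesystem and host names are made up for illustration, and as said above, a snapshot taken under a live instance is not on its own a usable Oracle backup):
Code:
zfs snapshot dbpool/oradata@backup-20150909
zfs send dbpool/oradata@backup-20150909 | ssh backuphost 'cat > /backup/oradata-20150909.zfs'
zfs destroy dbpool/oradata@backup-20150909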

As I posted earlier, if you want to use the memory to speed up an Oracle DB, use it as the DB's SGA.
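If the ARC cap is kept, the Solaris side of that is a single tunable (the 4 GB value just mirrors the cap already discussed in this thread, and the change needs a reboot):
Code:
* /etc/system -- an asterisk starts a comment; cap the ARC at 4 GB
set zfs:zfs_arc_max=4294967296

The RAM freed that way can then be handed to the instance by raising sga_target (or sga_max_size) in the spfile, e.g. ALTER SYSTEM SET sga_target=12G SCOPE=SPFILE; -- the 12G is purely illustrative.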
# 31  
Old 09-09-2015
Yes, the closer the cache is to the DB, the better.
# 32  
Old 09-17-2015
Hi achenle, jlliagre,
We are using RMAN backup. The DB runs in an LDOM hosted on a CDOM host. Further checking found that there is no I/O queue at the LDOM level, but we do see an I/O queue on the CDOM host server while the backup is running. Is that normal?
# 33  
Old 09-17-2015
Please show the iostat output on the control domain for the disks in question while the backup is running:
Code:
/usr/bin/iostat -xcnzCTd 3 10
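# -x extended stats, -c CPU, -n descriptive names, -z hide idle devices,
# -C per-controller totals, -T d timestamps; 3 = interval (s), 10 = sample count
# watch the wait/actv queue columns and %w/%b; asvc_t = avg active service time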

Adjust the interval (3 seconds) and the sample count (10) to your needs (run it while the problem is occurring).
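To know which control-domain devices to watch, the vdisk-to-backend mapping can be pulled from the logical domains manager (the domain name below is a placeholder):
Code:
ldm list-services primary      # vds instances and the volumes/backends they export
ldm list-bindings <ldom-name>  # which vds volume each guest vdisk is bound to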

Regards
Peasant.