Hi,
Recently we got a new Oracle T5 server and set it up for our database. For the database files we created one ZFS filesystem. When I run iostat -xc, the output is as below. As you can see, the value for vdc4 is quite high.
When I look at the memstat output, ZFS is taking a high percentage of memory.
Is this normal? When we run a full database backup, the DB hangs even though the server load is normal during the backup. Could that be related to the ZFS filesystem settings? Hope someone can enlighten me on these.
It has been a long time since I worked with a ZFS filesystem, but I don't think it is unusual for ZFS to consume memory that is otherwise unused as a cache for ZFS disk data.
Reading 33 MB/s and writing 7.5 MB/s may seem high, but with 0 wait time on the device, it doesn't appear to be a problem.
Are you seeing a high swap rate (or any indication that running processes are running poorly due to a lack of available memory)?
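To check for that, something like the following should tell the story (a sketch using stock Solaris tools; the kstat names are the standard ZFS arcstats):

```shell
# Watch for swapping: non-zero si/so columns indicate pressure
vmstat 5 5

# Summary of swap space in use and available
swap -s

# How big the ARC currently is, and its configured ceiling
kstat -p zfs:0:arcstats:size
kstat -p zfs:0:arcstats:c_max
```

If the ARC is large but si/so stay at zero, the memory use is just cache and is harmless.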
It is not unusual for ZFS to eat almost all available memory.
You don't want that with a database, even if you are running on ZFS filesystems.
I would not recommend running databases on ZFS filesystems, since it requires a lot of tuning to get right. There is also an unresolved issue of fragmentation, and for large implementations I would avoid ZFS for DB storage. ASM is the law.
Are those FC or internal disks?
What patchset are you running (hypervisor & LDOM; I see it is an LDOM)?
Can you please tell us the values of these kernel parameters:
Can you post the output of the following command during the problem?
Take a look at avque; I suspect it is very high during the non-responsive period.
If not, your issue possibly lies with zfs_arc_max (confirm that the machine is not swapping, as Don suggested). Lower it to a sane value so your database doesn't run out of PGA space (otherwise it will start swapping, causing extreme slowness).
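For reference, capping the ARC is done in /etc/system followed by a reboot. The 16 GB value below is only an illustrative assumption; size it so the SGA/PGA and the OS have room:

```shell
# /etc/system -- cap the ZFS ARC (example value: 16 GB = 0x400000000 bytes)
set zfs:zfs_arc_max = 0x400000000
```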
In short, you will need multiple zpools on different spindles, with different setups for the various DB functions (REDO, ARCH, DATA), and keep them under 80% full (this is very important).
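As a rough sketch of that layout (pool and device names are made up; the recordsize and logbias values follow the common Oracle-on-ZFS guidance of matching recordsize to an 8 KB db_block_size for datafiles):

```shell
# Separate pools on separate spindles (device names are hypothetical)
zpool create datapool mirror c0t2d0 c0t3d0
zpool create redopool mirror c0t4d0 c0t5d0
zpool create archpool mirror c0t6d0 c0t7d0

# Datafiles: recordsize matched to the DB block size, throughput-biased
zfs create -o recordsize=8k -o logbias=throughput datapool/oradata

# Redo logs: latency-sensitive, leave recordsize at its default
zfs create -o logbias=latency redopool/redo

# Archive logs: defaults are fine
zfs create archpool/arch
```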
It is not unusual for ZFS to eat almost all available memory.
There is a lot of misunderstanding around this topic. All file systems will eat as much memory as they find useful, not just ZFS, unused memory being wasted memory anyway.
The big differences are:
- ZFS memory, including the ARC, is reported as used/unavailable, while other file systems' memory (the buffer cache and the page cache) is reported as free/available.
- ZFS memory is released asynchronously and gradually by observing RAM demand, while other file systems' memory is released synchronously and (almost) instantaneously. Where that matters is when an application requests a very large amount of non-pageable memory, as the allocation might fail. The arc_max tuning prevents ZFS from using all the RAM, helping these allocations succeed.
Also: snapshots. Minimize them. More snapshots mean more I/O. I had issues similar to this a while back, and it all came down to snapshots and zfs_arc_max.
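Auditing them is quick; a sketch (the dataset and snapshot names below are examples):

```shell
# List snapshots sorted by space consumed, largest last
zfs list -t snapshot -o name,used,creation -s used

# Destroy a snapshot that is no longer needed (name is hypothetical)
zfs destroy datapool/oradata@2015-01-01
```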
I have an ESXi 6.7 server running a Solaris 10 x86 VM (actually a bunch of them). The VMs use ZFS for their pools (of course). I expand the underlying ESXi logical disk, for example from 50 GB to 100 GB, then I set autoexpand=on on the pool that belongs to that logical disk.
What am I missing to... (2 Replies)
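For what it's worth, the step usually missing in this scenario is nudging ZFS to notice the grown device; with autoexpand on, a sketch like this (pool and device names are assumptions):

```shell
zpool set autoexpand=on raidpool

# After growing the underlying ESXi disk, expand the vdev explicitly
zpool online -e raidpool c1t0d0

# SIZE should now reflect the expansion
zpool list raidpool
```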
Hello,
Need to ask a question regarding extending a ZFS file system.
Currently, after running df -kh:
u01-data-pool/data 600G 552 48G 93% /data
/data has only 48 GB remaining and is at 93% of total capacity.
The zpool u01-data-pool has more than 200 GB... (14 Replies)
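That 93% is well past the 80% ceiling mentioned earlier in the thread. A tiny helper like this flags pools over the threshold (fed sample text here; on a live box you would pipe in `zpool list -H -o name,capacity` instead):

```shell
# Hypothetical helper: warn when any pool is above 80% capacity.
check_capacity() {
  awk '{ cap = $2; sub(/%/, "", cap);
         if (cap + 0 > 80) print $1 ": " $2 " used - over the 80% threshold" }'
}

# Sample input standing in for `zpool list -H -o name,capacity`
printf 'rpool 45%%\ndatapool 93%%\n' | check_capacity
# -> datapool: 93% used - over the 80% threshold
```

Separately, if the pool really has 200 GB free while df shows the filesystem nearly full, check whether a quota is the limit: `zfs get quota u01-data-pool/data`, and raise it with `zfs set quota=...` if so.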
Hi guys!
How come ZFS is said to be not just a filesystem but a hybrid filesystem and also a volume manager? Please explain.
I will appreciate your replies. Hope you can help me figure this out.
Thanks in advance! (1 Reply)
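A concrete way to see the "hybrid" point: one zpool command replaces the whole partition/volume/newfs/mount stack, and filesystems are then carved out of the pool with no fixed sizes (device names below are made up):

```shell
# Volume-manager half: aggregate raw disks into a redundant pool
zpool create tank mirror c0t1d0 c0t2d0

# Filesystem half: create filesystems that share the pool's space,
# mounted automatically -- no newfs, no /etc/vfstab editing
zfs create tank/home
zfs create tank/projects
```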
Hi Folks,
Looking for info here more than any actual HowTo: does anyone know if there is a way of converting a Veritas or UFS filesystem to ZFS while leaving the resident data intact?
All that I have been able to find, including the commercial products, seems to require the FS to be backed up from... (1 Reply)
Hey all,
I have a machine with 16 drive slots. Two of the drives hold a ZFS mirror of the operating system; the other 14 contain the storage raidz.
So, after installing OpenSolaris on the OS drives, how can I remount the storage raidz?
TIA (11 Replies)
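The usual answer here is zpool import, which scans attached disks for pools the new install doesn't know about; a sketch (the pool name is an assumption):

```shell
# Show pools visible on the attached disks but not yet imported
zpool import

# Import the storage pool under the name the scan reports
zpool import storagepool
# add -f if the previous install never exported it
```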
I created a zpool and zfs filesystem in OpenSolaris. I made two NFS mount points:
> zpool history
History for 'raidpool':
2009-01-15.17:12:48 zpool create -f raidpool raidz1 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
2009-01-15.17:15:54 zfs create -o mountpoint=/vol01 -o sharenfs=on -o... (0 Replies)
I created a zpool and two ZFS volumes in OpenSolaris. I would like both ZFS volumes to be exportable. However, I don't know how to set that up.
These are the steps I did:
1) Create the zpool using raidz1 across five disks.
I have six disks and created a zpool across 5 of them. c4t0d0... (3 Replies)
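For the record, making ZFS filesystems NFS-exportable only takes the sharenfs property; a sketch with hypothetical dataset names:

```shell
# Share both datasets over NFS; ZFS manages the share itself
zfs set sharenfs=on raidpool/vol01
zfs set sharenfs=on raidpool/vol02

# Or share with options instead of plain "on" (example network)
zfs set sharenfs=rw=@192.168.1.0/24 raidpool/vol01
```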