Full Discussion: ZFS Filesystem
Operating Systems > Solaris > ZFS Filesystem
Post 302952872 by Peasant, Monday 24 August 2015, 01:55:15 AM
It is not unusual for ZFS to consume almost all available memory for its ARC cache.
You don't want that on a database server, even when the database itself lives on ZFS filesystems.

I would not recommend running databases on ZFS filesystems, since it requires a lot of tuning to get right. There is also the unresolved issue of fragmentation, so for large implementations I would avoid ZFS for the DB. ASM is the law :)

Are those FC or internal disks?
What patchset are you running (hypervisor & LDOM, since I see it is an LDOM)?

Can you please post the values of the following kernel parameters:

Code:
ssd:ssd_max_throttle
zfs:zfs_vdev_max_pending
zfs:zfs_arc_max
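If you're not sure where to find them, these values can be read from the live kernel, for example with mdb and kstat (a sketch; requires root, and on some configurations the throttle symbol lives in the sd module rather than ssd):

Code:
```shell
# Current ARC size limit (bytes) and vdev queue depth, from the live kernel
echo "zfs_arc_max/E" | mdb -k
echo "zfs_vdev_max_pending/D" | mdb -k

# Per-LUN command throttle for the ssd driver
echo "ssd_max_throttle/D" | mdb -k

# Actual ARC usage right now, for comparison against the configured ceiling
kstat -p zfs:0:arcstats:size zfs:0:arcstats:c_max
```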

Can you post the output of the following command while the problem is occurring?
Code:
sar -d 2 10

Take a look at avque; I suspect it is very high during the unresponsive period.
If not, your issue possibly lies with arc_max (confirm that the machine is not swapping, as Don suggested). Lower it to a sane value so your database doesn't run out of room for its PGA (otherwise it will start swapping, causing extreme slowness).
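As a sketch of what "lower it to a sane value" looks like: the cap goes in /etc/system and takes effect after a reboot. The 4 GB figure below is purely illustrative; size it so the ARC, SGA and PGA all fit comfortably in RAM.

Code:
```
* Cap the ZFS ARC at 4 GB (illustrative value only -- size for your RAM/SGA/PGA)
set zfs:zfs_arc_max = 4294967296
```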

Quote:
For our database files we set up one ZFS filesystem.
This is wrong; please take a look at the following documentation and read it well:
Tuning ZFS for Database Products - Oracle Solaris 11.1 Tunable Parameters Reference Manual

In short, you will need multiple zpools on different spindles, each tuned for a specific DB function (REDO, ARCH, DATA), and you must keep pool utilisation under 80% (this is very important).
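A minimal sketch of that layout (pool and device names here are hypothetical; the recordsize follows the Oracle guidance linked above, assuming an 8 KB db_block_size):

Code:
```shell
# Separate pools on separate spindles (hypothetical device names)
zpool create data_pool c0t2d0 c0t3d0
zpool create redo_pool c0t4d0
zpool create arch_pool c0t5d0

# Datafiles: match recordsize to the DB block size, favour throughput
zfs create -o recordsize=8k -o logbias=throughput data_pool/data

# Redo logs: keep the default 128k recordsize, favour low latency
zfs create -o logbias=latency redo_pool/redo

# Archive logs: sequential writes, defaults are fine
zfs create arch_pool/arch
```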
 
