Full Discussion: ZFS Filesystem
Operating Systems > Solaris, Post 302952872 by Peasant, 08-24-2015 01:55 AM
It is not unusual for ZFS to eat almost all available memory.
You don't want that on a database server, even if the database is running on ZFS filesystems.
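A quick way to see how much of your RAM the ARC is actually holding (standard Solaris kstat/prtconf, nothing specific to your setup):
Code:
kstat -p zfs:0:arcstats:size     # current ARC size in bytes
kstat -p zfs:0:arcstats:c_max    # current ARC ceiling in bytes
prtconf | grep "Memory size"     # total RAM for comparison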

I would not recommend running databases on ZFS filesystems, since it requires a lot of tuning to get right. There is also the unresolved issue of fragmentation, and for large implementations I would avoid ZFS for the DB altogether. ASM is the law.

Are those FC or internal disks?
What patch level are you running on the hypervisor and the LDOM (since I see it is an LDOM)?
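For reference, one way to collect that (a sketch; virtinfo needs a reasonably recent Solaris 10/11, and ldm is only available on the control domain):
Code:
cat /etc/release      # Solaris release / update level inside the guest
uname -v              # kernel patch level
virtinfo -a           # confirms the domain role (guest LDOM)
# on the control domain:
ldm -V                # Logical Domains manager and hypervisor version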

Can you please tell us the values of the following kernel parameters:

Code:
ssd:ssd_max_throttle
zfs:zfs_vdev_max_pending
zfs:zfs_arc_max
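If you are not sure where to look: boot-time overrides live in /etc/system, and the live values can be read with mdb (standard Solaris tools, run as root):
Code:
egrep 'ssd_max_throttle|zfs_vdev_max_pending|zfs_arc_max' /etc/system
echo "zfs_arc_max/E" | mdb -k    # live value of the tunable, in decimal
echo "::arc" | mdb -k            # the c / c_max lines show the effective ARC limits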

Can you post the output of the following command while the problem is occurring?
Code:
sar -d 2 10

Take a look at avque; I suspect it is very high during the non-responsive periods.
If not, your issue possibly lies with arc_max (and confirm that the machine is not swapping, as Don suggested). Lower it to a sane value so the ARC does not leave your database short of PGA space; once that happens the system starts swapping, causing extreme slowness.
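To check for swapping, and to cap the ARC if needed, something along these lines (the 4 GB figure is only an illustration; size it so the SGA/PGA and the OS still fit comfortably in RAM):
Code:
vmstat 5 5          # a persistently non-zero 'sr' (scan rate) column means memory pressure
swap -l             # watch for shrinking free swap
swap -s
# /etc/system entry to cap the ARC at 4 GB; takes effect after a reboot:
set zfs:zfs_arc_max = 4294967296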

Quote:
For our database files we set one zfs filesystem.
This is wrong. Please take a look at the following documentation and read it carefully:
Tuning ZFS for Database Products - Oracle Solaris 11.1 Tunable Parameters Reference Manual

In short, you will need multiple zpools on different spindles, with different settings for the various DB areas (REDO, ARCH, DATA), and you must keep them below 80% capacity (this is very important).
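A minimal sketch of that layout, following the guidance in the document above; the device names are placeholders for your own disks, and the 8k recordsize assumes the default Oracle db_block_size:
Code:
zpool create datapool mirror c0t2d0 c0t3d0    # datafiles on their own spindles
zpool create redopool mirror c0t4d0 c0t5d0    # redo/archive logs separated
zfs create -o recordsize=8k -o logbias=throughput datapool/oradata
zfs create redopool/oraredo                   # redo keeps the defaults (128k recordsize, logbias=latency)
zpool list                                    # watch the CAP column and keep it below 80%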
 
