ZFS Filesystem


 
# 15  
Old 08-25-2015
Did you read the document DukeNuke2 posted?
# 16  
Old 08-25-2015
Hi jlliagre,
I have read it and found that I need to change some ZFS parameter values. Is it safe to change the recommended parameters? Will that affect the data in the filesystem?
# 17  
Old 08-25-2015
Which ones?
What is the busy file system used for?
# 18  
Old 08-25-2015
Hi jlliagre,

Currently we use the affected filesystem to store database-related files such as tables and indexes.

Below are our current ZFS settings. I have mostly heard that the arc_max, zfs:zfs_vdev_max_pending, and ssd:ssd_max_throttle parameters need fine-tuning. Is that right?


Code:
arc_reduce_dnlc_percent = 0x3
zfs_arc_max = 0x0
zfs_arc_min = 0x0
arc_shrink_shift = 0x7
zfs_mdcomp_disable = 0x0
zfs_prefetch_disable = 0x0
zfetch_max_streams = 0x8
zfetch_min_sec_reap = 0x2
zfetch_block_cap = 0x100
zfetch_array_rd_sz = 0x100000
zfs_default_bs = 0x9
zfs_default_ibs = 0xe
metaslab_aliquot = 0x80000
mdb: variable reference_tracking_enable not found: unknown symbol name
mdb: variable reference_history not found: unknown symbol name
spa_max_replication_override = 0x3
spa_mode_global = 0x3
zfs_flags = 0x0
zfs_txg_synctime_ms = 0x1388
zfs_txg_timeout = 0x1e
zfs_write_limit_min = 0x2000000
zfs_write_limit_max = 0xfb4d0c00
zfs_write_limit_shift = 0x3
zfs_write_limit_override = 0x0
zfs_no_write_throttle = 0x0
zfs_vdev_cache_max = 0x4000
zfs_vdev_cache_size = 0x0
zfs_vdev_cache_bshift = 0x10
vdev_mirror_shift = 0x15
zfs_vdev_max_pending = 0xa
zfs_vdev_min_pending = 0x4
zfs_vdev_future_pending = 0xa
zfs_scrub_limit = 0xa
zfs_no_scrub_io = 0x0
zfs_no_scrub_prefetch = 0x0
zfs_vdev_time_shift = 0x6
zfs_vdev_ramp_rate = 0x2
zfs_vdev_aggregation_limit = 0x20000
fzap_default_block_shift = 0xe
zfs_immediate_write_sz = 0x8000
zfs_read_chunk_size = 0x100000
zfs_nocacheflush = 0x0
zil_replay_disable = 0x0
metaslab_gang_threshold = 0x100001
metaslab_df_alloc_threshold = 0x100000
metaslab_df_free_pct = 0x4
zio_injection_enabled = 0x0
zvol_immediate_write_sz = 0x8000
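
For context, these kernel tunables are normally applied at boot via /etc/system rather than patched live with mdb, so the change survives a reboot. A minimal sketch of what that looks like; the values below are purely illustrative placeholders, not recommendations for this system:

Code:
* /etc/system -- example ZFS tunables (placeholder values only)
* Cap the ARC, e.g. at 4 GB (0x100000000 bytes):
set zfs:zfs_arc_max = 0x100000000
* Per-vdev I/O queue depth, often lowered for SAN LUNs:
set zfs:zfs_vdev_max_pending = 10
* Per-device queue depth for the ssd driver on SAN-attached disks:
set ssd:ssd_max_throttle = 20

A reboot is required for /etc/system changes to take effect.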

One more thing: the documentation says the logbias setting needs to be changed for database filesystems.

My current database filesystem settings are as below:
Code:
NAME           PROPERTY       VALUE         SOURCE
ora2pool/ora2  primarycache   all           default
ora2pool/ora2  recordsize     128K          default
ora2pool/ora2  compressratio  1.00x         -
ora2pool/ora2  compression    off           default
ora2pool/ora2  available      351G          -
ora2pool/ora2  used           484G          -
ora2pool/ora2  quota          none          default
ora2pool/ora2  logbias        latency       default
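
If the white paper's logbias advice applies here, the property can be changed online per dataset. A hedged example using the dataset name from the output above (throughput is typically suggested for datafile filesystems, while latency, the default, suits redo logs):

Code:
zfs set logbias=throughput ora2pool/ora2
zfs get logbias ora2pool/ora2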

# 19  
Old 08-25-2015
You missed the "Number One rule".

As the file system stores tables and indexes, tune the recordsize setting. It should probably be 8k rather than 128k, but it is too late for the parameter to affect the existing files. Look for "Important Note:" in the white paper for a workaround.
Properly tuning the record size is known to dramatically reduce the number of I/Os in some use cases, although not necessarily in yours.
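
A hedged sketch of that workaround, assuming the database can be quiesced first; the file name is illustrative:

Code:
zfs set recordsize=8k ora2pool/ora2
# recordsize only applies to blocks written after the change,
# so existing datafiles must be rewritten to adopt it:
cp /ora2/users01.dbf /ora2/users01.dbf.tmp
mv /ora2/users01.dbf.tmp /ora2/users01.dbf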
# 20  
Old 08-28-2015
Quote:
Originally Posted by jlliagre
...

- ZFS memory is released asynchronously and gradually by observing RAM demand, while other file systems' memory is released synchronously and (almost) instantaneously. Where that matters is when an application requests a very large amount of non-pageable memory, as the allocation might fail. The arc_max tuning prevents ZFS from using all the RAM, helping these allocations succeed.
A little late here, but...

It's much worse than that on a server running Oracle database instance(s). The ZFS ARC does not play nice with Oracle databases. At all:

1. ZFS ARC expands to use all free memory - as 4k pages.
2. Oracle DB has a transient demand for memory - but it requests large pages (4 MB IIRC).
3. Entire server comes to an effective screeching halt while VM management is hung coalescing large pages.
4. Oracle DB releases the large pages, ZFS ARC grabs them and fragments them.
5. Repeat.

If the server is used solely as a database server, limit the ARC to 1 GB or even less. After rebooting, check that the ARC is actually limited to what you specified; if you go too small, your limit will be ignored.
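
One way to verify after the reboot that the cap was accepted, using standard Solaris observability commands:

Code:
# c_max is the effective ARC ceiling, size is the current footprint:
kstat -p zfs:0:arcstats:c_max
kstat -p zfs:0:arcstats:size
# the same summary via the kernel debugger:
echo "::arc" | mdb -k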
# 21  
Old 08-28-2015
@achenle Yes, the ARC size should not be left unlimited when running an Oracle database. 1 GB seems quite aggressive for a 32 GB server, though; that might waste memory and will likely hurt overall performance.