sys_attrs_vm(5) File Formats Manual sys_attrs_vm(5)
NAME
sys_attrs_vm - system attributes for the vm kernel subsystem
DESCRIPTION
This reference page describes system attributes for the Virtual Memory (vm) kernel subsystem.
Do not edit the system configuration file directly to change the value of a system attribute; use the dxkerneltuner application,
the /sbin/sysconfig -r command, or the sysconfigdb command to change the value of the attribute. See dxkerneltuner(8), sysconfig(8), and
sysconfigdb(8) for more information about your options for configuring kernel subsystems. The System Administration and System Configuration
and Tuning books also discuss this topic.
In the following list, an asterisk (*) precedes the names of attributes whose values you can change while the system is running. Changes to
values of attributes whose names are not preceded by an asterisk take effect only when the system is rebooted.
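For example, assuming the command syntax documented in sysconfig(8) and sysconfigdb(8), a session that queries and changes a vm attribute might look like the following sketch (the attribute name, value, and stanza file name are illustrative only):

```shell
# Display the current value of a vm subsystem attribute:
/sbin/sysconfig -q vm ubc_maxpercent

# Change a run-time tunable attribute on the running system
# (possible only for attributes marked with an asterisk):
/sbin/sysconfig -r vm ubc_maxpercent=70

# Make the change persist across reboots by merging a stanza
# file into /etc/sysconfigtab; here stanza_file contains:
#     vm:
#         ubc_maxpercent = 70
sysconfigdb -m -f stanza_file vm
```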
The anon_rss_enforce attribute sets no limit (0), a soft limit (1), or a hard limit (2) on the resident set size of a process.
Default value: 0 (no limit)
By default, applications can set a process-specific limit on the number of pages resident in memory by specifying the RLIMIT_RSS
resource value in a setrlimit() call. However, applications are not required to limit the resident set size of a process and there
is no system-wide default limit. Therefore, the resident set size for a process is limited only by system memory restrictions. If
the demand for memory exceeds the number of free pages, processes with large resident set sizes are likely candidates for swapping.
The anon_rss_enforce attribute enables different levels of control over process set sizes and when the pages that a process is using
in anonymous memory are swapped out (blocking the process) during times of contention for free pages. Setting anon_rss_enforce to
either 1 or 2 allows you to enforce a system-wide limit on resident set size for a process through the vm_rss_max_percent
attribute. Setting anon_rss_enforce to 1 (a soft limit) enables finer control over process blocking and paging of anonymous memory
by allowing you to set the vm_rss_block_target and vm_rss_wakeup_target attributes.
When anon_rss_enforce is set to 2, the resident set size for a process cannot exceed the system-wide limit set by the
vm_rss_max_percent attribute or a process-specific limit, if any, that is set by an application's setrlimit() call. When the
resident set size exceeds either of these limits, the system starts to swap out pages of anonymous memory that the process is
already using to keep the resident set size within the specified limit.
When anon_rss_enforce is set to 1, any system-default and process-specific limits on resident set size still apply and will cause
swapping to occur when exceeded. Otherwise, a process's pages are swapped out when the number of free pages is less than the value
of the vm_rss_block_target attribute. The process remains blocked until the number of free pages reaches the value of the
vm_rss_wakeup_target.
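The process-specific limit mentioned above is normally set from within an application by a setrlimit(RLIMIT_RSS) call; as a minimal shell-level sketch of the same per-process limit (the 65536 KB value is arbitrary, and whether the kernel enforces the limit depends on anon_rss_enforce as described above):

```shell
# Lower this shell's resident-set-size limit; the value is in
# kilobytes, and child processes inherit the limit.
ulimit -m 65536

# Read the limit back.
rss_limit=$(ulimit -m)
echo "RSS limit: ${rss_limit} KB"
```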
This attribute supports diskless systems and enables the pager to be more responsive. It functions only when the following
conditions are met: the diskless driver is loaded and configured, and the server is serving a real-time preemptive kernel.
Diskless system services are part of the Dataless Management Services (DMS), which enables systems to run the operating system
from a server without requiring a local hard disk on each client system.
Default value: 0 (off)
Maximum value: 1 (on)
A value that enables (1) or disables (0) writing pages from the user page table to a crash dump.
Default value: 0 (disabled)
It is recommended that you use the dump_user_pte_pages attribute in the generic subsystem rather than in the vm subsystem.
Attributes in both subsystems change the same address value; however, dump_user_pte_pages will not be visible as a vm subsystem
attribute in a future release of the operating system.
A value that enables (1) or disables (0) a soft guard page on the program stack. This allows an application to enter a signal
handler on stack overflows, which otherwise would cause a core dump.
Default value: 0 (disabled)
The enable_yellow_zone attribute is intended for use by systems programmers who are debugging kernel applications, such as device
drivers.
Number of 4-MB chunks of memory reserved at boot time for shared memory use. This memory cannot be used for any other purpose, nor
can it be returned to the system or reclaimed when not being used. On NUMA-aware systems (GS80, GS160, and GS320), the gh_chunks
attribute affects only the first Resource Affinity Domain (RAD). See the entry for rad_gh_regions for more information.
Default value: 0 (chunks) (The zero value means that use of granularity hints is disabled.)
Minimum value: 0
Maximum value: 9,223,372,036,854,775,807
The attributes associated with "granularity hints" (the gh_* attributes) are sometimes recommended specifically for database servers.
Using segmented shared memory (SSM) is the alternative to using granularity hints and is recommended for most systems. Therefore, if
the gh_chunks attribute is not set to zero, the ssm_threshold attribute of the ipc subsystem should be set to zero. If the gh_chunks
attribute is set to zero, the ssm_threshold attribute should not be set to zero.
See your database product documentation and the System Configuration and Tuning manual for more information about using granularity
hints or SSM.
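As a sketch of the reciprocal settings described above, an /etc/sysconfigtab fragment for a system that uses granularity hints might contain the following (the gh_chunks value of 512, which reserves 512 x 4 MB = 2 GB, is purely illustrative):

```
vm:
    gh_chunks = 512

ipc:
    ssm_threshold = 0
```

On a system that leaves gh_chunks at zero, ssm_threshold would instead keep a nonzero value.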
A value that enables (1) or disables (0) a failure return by the shmget function under certain conditions when granularity hints is
in use. When this attribute is set to 1, the shmget() function returns a failure if the requested segment size is larger than the
value of the gh_min_seg_size attribute and if there is insufficient memory allocated by the gh_chunks attribute to satisfy the
request.
Default value: 1 (enabled)
A value that specifies whether the memory reserved for granularity hints is (1) or is not (0) allocated from low physical memory
addresses. Allocation from low physical memory addresses is useful if you have an odd number of memory boards.
Default value: 1 (allocation from low physical memory addresses)
Specifies whether the memory reserved for granularity hints is (1) or is not (0) sorted.
Default value: 0 (not sorted)
Size, in bytes, of the segment in which shared memory is allocated from the memory reserved for shared memory, according to the
value of the gh_chunks attribute.
Default value: 8,388,608 (bytes, or 8 MB)
Minimum value: 0
Maximum value: 9,223,372,036,854,775,807
Number of pages per thread that are used for stack space in kernel mode.
Default value: 2 (pages per thread)
Minimum value: 2
Maximum value: 3
The sysconfig command may display 0 (zero) when the actual setting is 2. This error will be corrected in a release following Tru64
UNIX Version 5.0.
It is strongly recommended that you not modify kernel_stack_pages unless directed to do so by your support representative. In the
event of a kernel stack not valid halt error that is caused by a kernel stack overflow problem, increasing the value of
kernel_stack_pages may work around the problem. This workaround will not be successful if the error occurred because the stack
pointer became corrupted. In any event, a kernel stack not valid halt error is always an unexpected error that should be reported
to your support representative for further investigation.
Number of freed kernel stack pages that are saved for reuse. Above this limit, freed kernel stack pages are immediately deallocated.
Default value: 5 (pages)
Minimum value: 0
Maximum value: 2,147,483,647
Deallocation of freed kernel stack pages ensures that memory is available for other operations. However, the processor time required
for deallocating freed kernel stack pages has a negative performance impact that might be more noticeable on NUMA-enabled systems
(GS80, GS160, GS320) than on other systems. You can use the kstack_free_target value to make the most appropriate tradeoff between
increased memory consumption and time spent by CPUs in a purge operation.
You can change the value of the kstack_free_target attribute while the system is running.
A value that enables (1) or disables (0) caching of malloc memory on a per CPU basis.
Default value: 1
Do not modify the default setting for this attribute unless instructed to do so by support personnel or by patch kit documentation.
Default value: 1 (on)
Do not modify the default setting for this attribute unless instructed to do so by support personnel or by patch kit documentation.
Percentage of the secondary cache that is reserved for anonymous (nonshared) memory. Increasing the cache for anonymous memory
reduces the cache space available for file-backed memory (shared). This attribute is useful only for benchmarking.
Default value: 0 (percent)
Minimum value: 0
Maximum value: 100
For NUMA-aware systems (GS80, GS160, and GS320), the granularity hints chunk size (in megabytes) for the Resource Affinity Domain
(RAD) identified by n. There are 64 elements in the attribute array, rad_gh_regions[0] to rad_gh_regions[63]. Although all elements
in the array are visible on all systems, the kernel uses only the element values corresponding to RADs that exist on the system.
See the entry for the gh_chunks attribute for general information about granularity hints memory allocation.
Default value: 0 (MB) (Granularity hints is disabled.)
The array of rad_gh_regions[n] attributes replaces the gh_chunks attribute, which affects only the first (or, for non-NUMA systems,
the only) RAD (rad_gh_regions[0]) supported by the system. Although gh_chunks and the set of rad_gh_regions attributes both specify
how much memory is manipulated through granularity hints memory allocation, the unit of measurement for the former is 4-megabyte
units whereas the unit of measurement for the latter is megabytes. Therefore:
rad_gh_regions[0] = gh_chunks * 4
Setting the gh_chunks attribute, not the rad_gh_regions[0] attribute, is recommended if you want to use granularity hints memory
allocation on non-NUMA systems.
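The unit relationship above can be checked with simple arithmetic; for example, a gh_chunks value of 4 (four 4-MB chunks) corresponds to a rad_gh_regions[0] value of 16 MB:

```shell
gh_chunks=4                          # number of 4-MB chunks (illustrative)
rad_gh_regions_0=$((gh_chunks * 4))  # equivalent rad_gh_regions[0] value, in MB
echo "rad_gh_regions[0] = ${rad_gh_regions_0} MB"
```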
A value that controls whether user text can or cannot be replicated on multiple CPUs of a NUMA-enabled system (GS80, GS160,
GS320). When the value is 1, replication of user text is enabled. When the value is 0, replication of user text is disabled. This
attribute is sometimes used by kernel developers when debugging software for NUMA-enabled systems; however, the attribute is not for
general use. (The value must be 0 on non-NUMA systems and changing it to 0 on NUMA systems will degrade performance.)
Default value: 1, on a NUMA-enabled system; otherwise, 0.
Do not change the value of this attribute unless instructed to do so by support personnel or patch kit instructions.
The device partitions reserved for swapping. This is a comma-separated string (for example, /dev/disk/dsk0g,/dev/disk/dsk0d) that
can be up to 256 bytes in length.
Percentage of memory above which the UBC is only borrowing memory from the virtual memory subsystem. Paging does not occur until
the UBC has returned all its borrowed pages.
Default value: 20 (percent)
Minimum value: 0
Maximum value: 100
Increasing this value may increase UBC cache effectiveness and improve throughput; however, the cost is a likely degradation of
system response time during a low memory condition.
Enables (1) or disables (0) the faulting of Unified Buffer Cache (UBC) pages off the free list. When ubc_ffl is enabled, a UBC page
freed by the system is cached and can quickly be reclaimed from the free list before it is allocated for another use.
Default value: 1 (enabled)
Temporarily setting ubc_ffl to 0 while vm_ffl remains at 1 (or the reverse) sometimes provides information that is useful to
operating system developers when debugging system problems. However, do not modify the default setting for ubc_ffl unless
instructed to do so by support personnel.
Number of I/O operations (per second) that the virtual memory subsystem performs when the number of dirty (modified) pages in the
UBC exceeds the value of the vm_ubcdirtypercent attribute.
Default value: 5 (operations per second)
Minimum value: 0
Maximum value: 2,147,483,647
Maximum percentage of physical memory that the UBC can use at one time.
Default value: 100 (percent)
Minimum value: 0
Maximum value: 100
Minimum percentage of physical memory that the UBC can use.
Default value: 10 (percent)
Minimum value: 0
Maximum value: 100
A value that enables (1) or disables (0) the ability of the task swapper to aggressively swap out idle tasks.
Default value: 0 (disabled)
Setting this attribute to 1 helps prevent a low-memory condition from occurring and allows more jobs to be run simultaneously.
However, interactive response times are likely to be longer on a system that is excessively paging and swapping.
The number of asynchronous I/O requests per swap partition that can be outstanding at one time. Asynchronous swap requests are used
for pageout operations and for prewriting modified pages.
Default value: 4 (requests)
Minimum value: 0
Maximum value: 2,147,483,647
Size, in bytes, of the kernel cluster submap, which is used to allocate the scatter/gather map for clustered file and swap I/O.
Default value: 1,048,576 (bytes, or 1 MB)
Minimum value: 0
Maximum value: 9,223,372,036,854,775,807
Maximum size, in bytes, of a single scatter/gather map for a clustered I/O request.
Default value: 65,536 (bytes, or 64 KB)
Minimum value: 0
Maximum value: 9,223,372,036,854,775,807
Number of times that the pages of an anonymous object are copy-on-write faulted after a fork operation but before they are copied as
part of the fork operation.
Default value: 4 (faults)
Minimum value: 0
Maximum value: 2,147,483,647
Size, in bytes, of the kernel copy submap.
Default value: 1,048,576 (bytes, or 1 MB)
Minimum value: 0
Maximum value: 9,223,372,036,854,775,807
Enables (1) or disables (0) the faulting of Virtual Memory (VM) pages off the free list. When vm_ffl is enabled, a VM page freed by
the system is cached and can quickly be reclaimed from the free list before it is allocated for another use.
Default value: 1 (enabled)
Temporarily setting vm_ffl to 0 while ubc_ffl remains at 1 (or the reverse) sometimes provides information that is useful to
operating system developers when debugging kernel problems. Do not modify the default setting for vm_ffl unless instructed to do
so by support personnel.
Minimum amount of time, in seconds, that a task remains in the inswapped state before it is considered a candidate for outswapping.
Default value: 1 (second)
Minimum value: 0
Maximum value: 60
Size, in bytes, of the largest pagein (read) cluster that is passed to the swap device.
Default value: 16,384 (bytes) (16 KB)
Minimum value: 8192
Maximum value: 131,072
Size, in bytes, of the largest pageout (write) cluster that is passed to the swap device.
Default value: 32,768 (bytes) (32 KB)
Minimum value: 8192
Maximum value: 131,072
Base address of the kernel's virtual address space. The value can be either 0xffffffff80000000 or 0xfffffffe00000000, which sets
the size of the kernel's virtual address space to either 2 GB or 8 GB, respectively.
Default value: 18,446,744,073,709,551,615 (2 to the power of 64, minus 1)
You may need to increase the kernel's virtual address space on very large memory (VLM) systems (for example, systems with several
gigabytes of physical memory and several thousand large processes).
The threshold value that stops swapping. When the number of pages on the free list reaches this value, swapping stops.
Default value: Varies, depending on physical memory size; about 16 times the value of vm_page_free_target
Minimum value: 0
Maximum value: 2,147,483,647
The vm_page_free_hardswap value is computed from the vm_page_free_target value, which by default scales with physical memory size.
If you change vm_page_free_target, your change affects vm_page_free_hardswap as well.
The threshold value that starts paging. When the number of pages on the free page list falls below this value, paging starts.
Default value: 20 (pages; twice the value of vm_page_free_reserved)
Minimum value: 0
Maximum value: 2,147,483,647
The threshold value that begins hard swapping. When the number of pages on the free list falls below this value for five seconds,
hard swapping begins.
Default value: Automatically scaled by using this formula:
vm_page_free_min + ((vm_page_free_target - vm_page_free_min) / 2)
Minimum value: 0 (pages)
Maximum value: 2,147,483,647
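As a worked example of the formula above, using the default vm_page_free_min of 20 pages and a vm_page_free_target of 128 pages (the default for systems with less than 512 MB of memory), the computed default is 74 pages:

```shell
vm_page_free_min=20
vm_page_free_target=128
vm_page_free_optimal=$((vm_page_free_min + (vm_page_free_target - vm_page_free_min) / 2))
echo "$vm_page_free_optimal"
```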
The threshold value that determines when memory is limited to privileged tasks. When the number of pages on the free page list
falls below this value, only privileged tasks can get memory.
Default value: 10 (pages)
Minimum value: 1
Maximum value: 2,147,483,647
The threshold value that begins swapping of idle tasks. When the number of pages on the free page list falls below this value, idle
task swapping begins.
Default value: Automatically scaled by using this formula:
vm_page_free_min + ((vm_page_free_target - vm_page_free_min) / 2)
Minimum value: 0
Maximum value: 2,147,483,647
The threshold value that stops paging. When the number of pages on the free page list reaches this value, paging stops.
Default value: Based on the amount of managed memory that is available on the system, as shown in the following table:
---------------------------------------------------
Available Memory (M) vm_page_free_target (pages)
---------------------------------------------------
Less than 512 128
512 to 1023 256
1024 to 2047 512
2048 to 4095 768
4096 and higher 1024
---------------------------------------------------
Minimum value: 0 (pages)
Maximum value: 2,147,483,647
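The table above is a simple step function of available memory; as an illustrative sketch (the function name is hypothetical, and memory is given in megabytes):

```shell
# Return the default vm_page_free_target (in pages) for a given
# amount of managed memory (in MB), per the table above.
page_free_target() {
    mem_mb=$1
    if   [ "$mem_mb" -lt 512 ];  then echo 128
    elif [ "$mem_mb" -lt 1024 ]; then echo 256
    elif [ "$mem_mb" -lt 2048 ]; then echo 512
    elif [ "$mem_mb" -lt 4096 ]; then echo 768
    else                              echo 1024
    fi
}

page_free_target 256     # → 128
page_free_target 2048    # → 768
```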
Maximum number of modified UBC pages that the vm subsystem will prewrite to disk if it anticipates running out of memory. The
prewritten pages are the least recently used (LRU) pages.
Default value: vm_page_free_target * 2
Minimum value: 0
Maximum value: 2,147,483,647
A threshold number of free pages that will start swapping of anonymous memory from the resident set of a process. Paging of
anonymous memory starts when the number of free pages falls below this value. The process is blocked until the number of free
pages reaches the value set by the vm_rss_wakeup_target attribute.
Default value: Same as vm_page_free_optimal
Minimum value: 0
Maximum value: 2,147,483,647
The default value of the vm_rss_block_target attribute is the same as the default value of the vm_page_free_optimal attribute that
controls the threshold value for hard swapping.
You can increase the value of vm_rss_block_target to start paging of anonymous memory earlier than when hard swapping occurs or
decrease the value to delay paging of anonymous memory beyond the point at which hard swapping occurs.
A percentage of the total pages of anonymous memory on the system that is the system-wide limit on the resident set size for any
process. The value of this attribute has an effect only if anon_rss_enforce is set to 1 or 2.
Default value: 100 (percent)
Minimum value: 1
Maximum value: 100
You can decrease this percentage to enforce a system-wide limit on the resident set size for any process. Be aware, however, that
this limit applies to privileged, as well as unprivileged, processes and will override a larger resident set size that may be
specified for a process through the setrlimit() call.
A threshold number of free pages that will unblock a process whose anonymous memory is swapped out. The process is unblocked when
the number of free pages meets this value.
Default value: Same as vm_page_free_optimal
Minimum value: 0
Maximum value: 2,147,483,647
The default value of the vm_rss_wakeup_target attribute is the same as the default value of the vm_page_free_optimal attribute that
controls the threshold value for hard swapping.
You can increase the value of vm_rss_wakeup_target to free more memory before unblocking a process or decrease the value to unblock
the process sooner (with less freed memory).
Number of text segments that can be cached in the segment cache. (Applies only if you enable segmentation.)
Default value: 50 (segments)
The vm subsystem uses the segment cache to cache inactive executables and shared libraries. Because objects in the segment cache
can be accessed by mapping a page table entry, this cache eliminates I/O delays for repeated executions and reloads.
Reducing the number of segments in the segment cache can free memory and help to reduce paging overhead. (The size of each segment
depends on the text size of the executable or the shared library that is being cached.)
A value that enables (1) or disables (0) the ability of shared regions of user address space to also share the page tables that map
to those shared regions.
Default value: 1 (enabled)
In a TruCluster environment, this value must be the same on all cluster members.
Specifies the swap allocation mode, which can be immediate mode (1) or deferred mode (0).
Default value: 1 (immediate mode)
The number of synchronous I/O requests that can be outstanding to the swap partitions at one time. Synchronous swap requests are
used for pagein operations and task swapping.
Default value: 128 (requests)
Minimum value: 1
Maximum value: 2,147,483,647
Maximum percentage of physical memory that can be dynamically wired. The kernel and user processes use this memory for dynamically
allocated data structures and address space, respectively.
Default value: 80 (percent)
Minimum value: 1
Maximum value: 100
Enables, disables, and tunes the trolling rate for the memory troller on systems supported by the memory troller.
When enabled, the memory troller continually reads the system's memory to proactively discover and handle memory errors. The troll
rate is expressed as a percentage of the system's total memory trolled per hour, and you can change it at any time. Valid troll
rate settings are:
Default value: 4 (percent per hour)
This default value applies if you do not specify a value for vm_troll_percent in the /etc/sysconfigtab file. At the default troll
rate, each 8-kilobyte memory page is trolled approximately once every 24 hours.
Disable value: 0 (zero)
Specify a value of 0 (zero) to disable memory trolling.
Range: 1 - 100 (percent)
Specify a value in the range 1 to 100 to set the troll rate to a percentage of memory to troll per hour. For example, a troll rate
of 50 reads half the total memory in one hour. After all memory is read, the troller starts a new pass at the beginning of memory.
Accelerated trolling: 101 (percent)
Specify a value greater than 100 to invoke one-pass accelerated trolling. At this rate, all system memory is trolled at a rate of
approximately 6000 pages per second, where one page equals 8 kilobytes. Trolling is then automatically disabled after a single
pass. This mode is intended for trolling all memory quickly during off-peak hours.
Low troll rates, such as the 4 percent default, have a negligible impact on system performance. Processor usage for memory trolling
increases as the troll rate is increased. Refer to the System Administration guide for additional performance information and memory
troller usage instructions.
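To relate the troll rate to elapsed time: with the rate expressed as a percentage of total memory per hour, one complete pass takes 100/rate hours, so the default rate of 4 percent yields a full pass roughly once a day:

```shell
troll_rate=4                       # percent of total memory trolled per hour
pass_hours=$((100 / troll_rate))   # hours for one complete pass over memory
echo "${pass_hours} hours per full pass"
```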
Specifies the number of I/O operations that can be outstanding while purging dirty (modified) pages from the UBC. The dirty pages
are flushed to disk to reclaim memory. The UBC purge daemon will stop flushing dirty pages when the number of I/Os reaches the
vm_ubcbuffers limit or there are no more dirty pages in the UBC. AdvFS software does not use this attribute; only UFS software uses
it.
Default value: 256 (I/Os)
Minimum value: 0
Maximum value: 2,147,483,647
For systems running at capacity and on which many interactive users are performing write operations to UFS file systems, users might
detect slowed response times if many pages are flushed to disk each time the UBC buffers are purged. Decreasing the value of
vm_ubcbuffers causes shorter but more frequent purge operations, thereby smoothing out system response times. Do not, however,
decrease vm_ubcbuffers to a value that completely disables purging of dirty pages. One I/O for certain file systems might be
associated with many pages because of write clustering of dirty pages.
Note
The sysconfig display indicates that vm_ubcbuffers can be changed while the system is running. This is misleading because changes
to this attribute take effect only when made at boot time.
You can also set the smoothsync_age attribute of the vfs kernel subsystem to address response-time delays that can occur during
periods of intense write activity. The smoothsync_age attribute uses a different metric (age of dirty pages rather than number of
I/Os) to balance the frequency and duration of purge operations and therefore does not support the ability of UFS to flush all
dirty pages for the same write operation at the same time. However, smoothsync_age can be changed while the system is running and
is used by AdvFS as well as UFS software. See sys_attrs_vfs(5) for information about the smoothsync_age attribute.
The percentage of pages that must be dirty (modified) before the UBC starts writing them to disk.
Default value: 10 (percent)
Minimum value: 0
Maximum value: 100
In the context of an application thread, the number of pages that must be dirty (modified) before the UBC update daemon starts
writing them. This value is for internal use only.
The minimum number of pages to be available for file expansion. When the number of available pages falls below this number, the UBC
steals additional pages to anticipate the file's expansion demands.
Default value: 24 (file pages)
Minimum value: 0
Maximum value: 2,147,483,647
The maximum percentage of UBC memory that can be used to cache a single file.
Default value: 10 (percent)
Minimum value: 0
Maximum value: 100
A threshold value that determines when the UBC starts to recognize sequential file access and steal the UBC LRU pages for a file to
satisfy its demand for pages. This value is the size of the UBC in terms of its percentage of physical memory.
Default value: 50 (percent)
Minimum value: 0
Maximum value: 100
SEE ALSO
Commands: dxkerneltuner(8), sysconfig(8), sysconfigdb(8)
Others: sys_attrs(5)
System Configuration and Tuning
System Administration