
PAE(4)							 BSD/i386 Kernel Interfaces Manual						    PAE(4)

NAME
PAE -- Physical Address Extensions

SYNOPSIS
options PAE

DESCRIPTION
The PAE option provides support for the physical address extensions capability of the Intel Pentium Pro and above CPUs, and allows for up to 64 gigabytes of memory to be used in systems capable of supporting it. With the PAE option, memory above 4 gigabytes is simply added to the general page pool. The system makes no distinction between memory above or below 4 gigabytes, and no specific facility is provided for a process or the kernel to access more memory than they would otherwise be able to access, through a sliding window or otherwise.
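As a quick illustration (the paths follow the traditional config(8) build workflow on FreeBSD i386 and may differ on your source tree), a kernel with this option can be built from the stock PAE configuration file:

# Build and install the stock PAE kernel, using the traditional
# config(8) workflow on FreeBSD i386; adjust paths to your tree.
cd /usr/src/sys/i386/conf
config PAE
cd ../compile/PAE
make depend && make && make install

# After rebooting, the full physical memory should be visible:
sysctl hw.physmem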
SEE ALSO
smp(4), tuning(7), config(8), bus_dma(9)

HISTORY
The PAE option first appeared in FreeBSD 4.9 and FreeBSD 5.1.

AUTHORS
Jake Burkholder <jake@FreeBSD.org>

BUGS
Since KLD modules are not compiled with the same options headers that the kernel is compiled with, they must not be loaded into a kernel compiled with the PAE option.

Many devices or their device drivers are not capable of direct memory access to physical addresses above 4 gigabytes. In order to make use of direct memory access IO in a system with more than 4 gigabytes of memory when the PAE option is used, these drivers must use a facility for remapping or substituting physical memory which is not accessible to the device. One such facility is provided by the busdma interface. Device drivers which do not account for such devices will not work reliably in a system with more than 4 gigabytes of memory when the PAE option is used, and may cause data corruption. The PAE kernel configuration file includes the PAE option, and explicitly excludes all device drivers which are known not to work, or have not been tested, in a system with the PAE option and more than 4 gigabytes of memory.

Many parameters which determine how memory is used in the kernel are based on the amount of physical memory. The formulas used to determine the values of these parameters for specific memory configurations may not take into account the fact that there may be more than 4 gigabytes of memory, and may not scale well to these memory configurations. In particular, it may be necessary to increase the amount of virtual address space available to the kernel, or to reduce the amount of a specific resource that is heavily used, in order to avoid running out of virtual address space. The KVA_PAGES option may be used to increase the kernel virtual address space, and the kern.maxvnodes sysctl(8) may be used to decrease the number of vnodes allowed, an example of a resource that the kernel is likely to overallocate in large memory configurations.

For optimal performance and stability it may be necessary to consult the tuning(7) manual page, and make adjustments to the parameters documented there.
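As a hedged sketch of the tuning described above (the values shown are arbitrary illustrations, not recommendations; consult the configuration notes for your release):

# Kernel configuration: enlarge the kernel virtual address space.
# 512 is an example value only, not a recommendation.
options KVA_PAGES=512

# At runtime, reduce the number of vnodes the kernel may allocate;
# again, the value is purely illustrative.
sysctl kern.maxvnodes=100000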
BSD								 April 8, 2003								 BSD


XEN(4)							   BSD Kernel Interfaces Manual 						    XEN(4)

NAME
xen -- Xen Hypervisor Guest (DomU) Support

SYNOPSIS
To compile para-virtualized (PV) Xen guest support into an i386 kernel, place the following lines in your kernel configuration file:

options PAE
options XEN
nooptions NATIVE

To compile hardware-assisted virtualization (HVM) Xen guest support with para-virtualized drivers into an amd64 kernel, place the following lines in your kernel configuration file:

options XENHVM
device xenpci

DESCRIPTION
The Xen Hypervisor allows multiple virtual machines to be run on a single computer system. When first released, Xen required that i386 kernels be compiled "para-virtualized" as the x86 instruction set was not fully virtualizable. Primarily, para-virtualization modifies the virtual memory system to use hypervisor calls (hypercalls) rather than direct hardware instructions to modify the TLB, although para-virtualized device drivers were also required to access resources such as virtual network interfaces and disk devices.

With later instruction set extensions from AMD and Intel to support fully virtualizable instructions, unmodified virtual memory systems can also be supported; this is referred to as hardware-assisted virtualization (HVM). HVM configurations may either rely on transparently emulated hardware peripherals, or para-virtualized drivers, which are aware of virtualization, and hence able to optimize certain behaviors to improve performance or semantics.

FreeBSD supports a fully para-virtualized (PV) kernel on the i386 architecture using options XEN and nooptions NATIVE; currently, this requires use of a PAE kernel, enabled via options PAE.

FreeBSD supports hardware-assisted virtualization (HVM) on both the i386 and amd64 kernels; however, PV device drivers with an HVM kernel are only supported on the amd64 architecture, and require options XENHVM and device xenpci.

Para-virtualized device drivers are required in order to support certain functionality, such as processing management requests, returning idle physical memory pages to the hypervisor, etc.

Xen DomU device drivers
Xen para-virtualized drivers are automatically added to the kernel if a PV kernel is compiled using options XEN; for HVM environments, options XENHVM and device xenpci are required. The following drivers are supported:

balloon    Allow physical memory pages to be returned to the hypervisor as a result of manual tuning or automatic policy.

blkback    Exports local block devices or files to other Xen domains where they can then be imported via blkfront.

blkfront   Import block devices from other Xen domains as local block devices, to be used for file systems, swap, etc.

console    Export the low-level system console via the Xen console service.

control    Process management operations from Domain 0, including power off, reboot, suspend, crash, and halt requests.

evtchn     Expose Xen events via the /dev/xen/evtchn special device.

netback    Export local network interfaces to other Xen domains where they can be imported via netfront.

netfront   Import network interfaces from other Xen domains as local network interfaces, which may be used for IPv4, IPv6, etc.

pcifront   Allow physical PCI devices to be passed through into a PV domain.

xenpci     Represents the Xen PCI device, an emulated PCI device that is exposed to HVM domains. This device allows detection of the Xen hypervisor, and provides interrupt and shared memory services required to interact with the hypervisor.

Performance considerations
In general, PV drivers will perform better than emulated hardware, and are the recommended configuration for HVM installations.

Using a hypervisor introduces a second layer of scheduling that may limit the effectiveness of certain FreeBSD scheduling optimisations. Among these is adaptive locking, which is no longer able to determine whether a thread holding a lock is in execution. It is recommended that adaptive locking be disabled when using Xen:

options NO_ADAPTIVE_MUTEXES
options NO_ADAPTIVE_RWLOCKS
options NO_ADAPTIVE_SX
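Pulling the above together, a sketch of the Xen-related fragment of an i386 PV guest configuration might look like this (not a complete kernel configuration; the usual cpu, ident, and device entries are omitted):

# Para-virtualized i386 guest: PAE is required, native-hardware
# support is removed, and adaptive locking is disabled as
# recommended above.
options PAE
options XEN
nooptions NATIVE
options NO_ADAPTIVE_MUTEXES
options NO_ADAPTIVE_RWLOCKS
options NO_ADAPTIVE_SX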
SEE ALSO
pae(4)

HISTORY
Support for xen first appeared in FreeBSD 8.1.

AUTHORS
FreeBSD support for Xen was first added by Kip Macy <kmacy@FreeBSD.org> and Doug Rabson <dfr@FreeBSD.org>. Further refinements were made by Justin Gibbs <gibbs@FreeBSD.org>, Adrian Chadd <adrian@FreeBSD.org>, and Colin Percival <cperciva@FreeBSD.org>. This manual page was written by Robert Watson <rwatson@FreeBSD.org>.

BUGS
FreeBSD is only able to run as a Xen guest (DomU) and not as a Xen host (Dom0).

A fully para-virtualized (PV) kernel is only supported on i386, and not amd64. Para-virtualized drivers under a hardware-assisted virtualization (HVM) kernel are only supported on amd64, not i386.

As of this release, Xen PV DomU support is not heavily tested; instability has been reported during VM migration of PV kernels. Certain PV driver features, such as the balloon driver, are under-exercised.
BSD							       December 17, 2010						       BSD