9 More Discussions You Might Find Interesting
1. Virtualization and Cloud Computing
Hi,
I wanted to know if there is a way to get a unique UUID for a KVM guest from the guest OS that isn't easily modifiable. I have software that I would like to run inside a KVM guest and want to do some license protection on it using a unique UUID. Does KVM allow multiple VMs on the same... (0 Replies)
Discussion started by: shivanik
2. Red Hat
Background: I need to create an additional 40 G of storage for a VM guest.
1. I have created a new KVM VM guest on a RHEL 5.8 hosting server.
2. The hosting server's disk space is fully allocated to LVs, and there is no space to create a new LV.
3. I tried to achieve this by creating a 40 G file and... (1 Reply)
Discussion started by: Nats
3. UNIX for Dummies Questions & Answers
hi guys
I want to test this environment; I still have some doubts and need to read a little more.
I have two physical servers (where VMware ESXi used to be installed). I was thinking of installing KVM (if it turns out to be difficult to get KVM up and running, I will use ESXi) and deploying the... (1 Reply)
Discussion started by: karlochacon
4. UNIX and Linux Applications
Is KVM supported on SUSE SLES 10? (0 Replies)
Discussion started by: shekhar_ssm
5. Red Hat
Hi everybody,
I have 4 physical servers and I would like to make one machine out of the 4 servers with KVM, like VMware ESX.
Can KVM consolidate physical servers into one machine? If it can, where can I find a manual for doing it?
Thanks.... (3 Replies)
Discussion started by: infjustice
6. Linux
Hi
Is there an application for KVM that can save a guest's registers and system call information?
Thanks a lot in advance!
Setareh (0 Replies)
Discussion started by: setareh92
7. Red Hat
Hi All,
I have a RHEL 5u4 physical system with 2 QLogic FC cards. It hosts 2 KVM virtual machines, which are also running RHEL 5u4. On top of this, I have successfully created a virtual HBA on the base OS (following a reference found on Google), but it is not visible to the guest OS.
My question here is,
... (1 Reply)
Discussion started by: Vichu
8. Virtualization and Cloud Computing
Hi folks,
Host - Ubuntu 9.10 64bit
Virtualizer - KVM
I followed
Virtualization With KVM On Ubuntu 9.10 | HowtoForge - Linux Howtos and Tutorials
to install this virtual machine. The steps worked without problem, but I have the following points... (0 Replies)
Discussion started by: satimis
9. Solaris
We have 3 V880's and we need to purchase a KVM for them. I've never purchased one before. Does anyone have a recommendation for one? (3 Replies)
Discussion started by: dangral
BUF(9) BSD Kernel Developer's Manual BUF(9)
NAME
buf -- kernel buffer I/O scheme used in FreeBSD VM system
DESCRIPTION
The kernel implements a KVM abstraction of the buffer cache which allows it to map potentially disparate vm_page's into contiguous KVM for
use by (mainly file system) devices and device I/O. This abstraction supports block sizes from DEV_BSIZE (usually 512) to upwards of several
pages or more. It also supports a relatively primitive byte-granular valid range and dirty range currently hardcoded for use by NFS. The
code implementing the VM Buffer abstraction is mostly concentrated in /usr/src/sys/kern/vfs_bio.c.
One of the most important things to remember when dealing with buffer pointers (struct buf) is that the underlying pages are mapped
directly from the buffer cache. No data copying occurs in the scheme proper, though some file systems such as UFS do have to copy a
little when dealing with file fragments. The second most important thing to remember is that due to the underlying page mapping, the
b_data base pointer in a buf is always *page* aligned, not *block* aligned. When you have a VM buffer representing some b_offset and
b_size, the actual start of the buffer is (b_data + (b_offset & PAGE_MASK)) and not just b_data. Finally, the VM system's core buffer
cache supports valid and dirty bits (m->valid, m->dirty) for pages in DEV_BSIZE chunks. Thus a platform with a hardware page size of
4096 bytes has 8 valid and 8 dirty bits. These bits are generally set and cleared in groups based on the device block size of the
device backing the page. A complete page's worth is often referred to using the VM_PAGE_BITS_ALL bitmask (i.e., 0xFF if the hardware
page size is 4096).
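As a worked illustration of the arithmetic above, here is a small user-space sketch. PAGE_SIZE, DEV_BSIZE, the struct layout, and the
example values are illustrative stand-ins, not the kernel's own definitions:

    #include <stdio.h>

    /* Illustrative constants; the real definitions live in the kernel headers. */
    #define PAGE_SIZE  4096UL
    #define PAGE_MASK  (PAGE_SIZE - 1)
    #define DEV_BSIZE  512UL

    /* Stripped-down stand-in for the struct buf fields discussed above. */
    struct sketch_buf {
            char          *b_data;    /* page-aligned base of the KVM mapping */
            unsigned long  b_offset;  /* logical byte offset of the buffer */
            unsigned long  b_size;    /* size of the buffer in bytes */
    };

    int
    main(void)
    {
            static char page[PAGE_SIZE];    /* pretend this is the mapped page */
            struct sketch_buf bp = { page, 1536, 2048 };

            /*
             * b_data is *page* aligned, not *block* aligned, so the buffer's
             * first byte is b_data plus b_offset's offset within its page,
             * not b_data itself.
             */
            char *start = bp.b_data + (bp.b_offset & PAGE_MASK);
            printf("buffer data starts %lu bytes past b_data\n",
                (unsigned long)(start - bp.b_data));

            /*
             * With DEV_BSIZE-granular valid/dirty bits, a 4096-byte page
             * carries PAGE_SIZE / DEV_BSIZE = 8 of each, and a fully valid
             * page matches VM_PAGE_BITS_ALL = 0xFF.
             */
            printf("bits per page: %lu (VM_PAGE_BITS_ALL = 0x%02lX)\n",
                PAGE_SIZE / DEV_BSIZE,
                (1UL << (PAGE_SIZE / DEV_BSIZE)) - 1);
            return (0);
    }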
VM buffers also keep track of a byte-granular dirty range and valid range. This feature is normally only used by the NFS subsystem. I am
not sure why it is used at all, actually, since we have DEV_BSIZE valid/dirty granularity within the VM buffer. If a buffer dirty operation
creates a 'hole', the dirty range will extend to cover the hole. If a buffer validation operation creates a 'hole' the byte-granular valid
range is left alone and will not take into account the new extension. Thus the whole byte-granular abstraction is considered a bad hack and
it would be nice if we could get rid of it completely.
A VM buffer is capable of mapping the underlying VM cache pages into KVM in order to allow the kernel to directly manipulate the data
associated with the (vnode,b_offset,b_size). The kernel typically unmaps VM buffers the moment they are no longer needed but often
keeps the 'struct buf' structure instantiated, and even the bp->b_pages array instantiated, despite having unmapped them from KVM. If
a page making up a VM buffer is about to undergo I/O, the system typically unmaps it from KVM and replaces the page in the b_pages[]
array with a place-marker called bogus_page. The place-marker forces any kernel subsystems referencing the associated struct buf to
re-lookup the associated page. I believe the place-marker hack is used to allow sophisticated devices such as file system devices to
remap underlying pages in order to deal with, for example, re-mapping a file fragment into a file block.
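The forced re-lookup can be sketched as follows. Apart from the bogus_page idea itself, every name here (the stub types, lookup_page,
the b_pages walk) is a hypothetical stand-in mirroring the description above, not the actual vfs_bio.c code:

    #define PAGE_SIZE 4096L

    /* Minimal stand-ins for the kernel types involved. */
    struct vm_page   { int unused; };
    struct vm_object { int unused; };

    struct marked_buf {
            struct vm_object *b_object;     /* backing VM object */
            long              b_offset;     /* logical byte offset */
            struct vm_page   *b_pages[16];  /* pages backing the buffer */
    };

    static struct vm_page  bogus_storage;
    static struct vm_page *bogus_page = &bogus_storage; /* shared marker */

    /* Hypothetical stub standing in for a real object/pindex page lookup. */
    static struct vm_page *
    lookup_page(struct vm_object *obj, long pindex)
    {
            static struct vm_page real_page;
            (void)obj; (void)pindex;
            return (&real_page);
    }

    /*
     * A subsystem holding a struct buf must treat bogus_page as "this
     * slot is stale": drop the cached pointer and look the page up
     * again before touching it.
     */
    static struct vm_page *
    marked_buf_page(struct marked_buf *bp, int i)
    {
            struct vm_page *m = bp->b_pages[i];

            if (m == bogus_page) {
                    m = lookup_page(bp->b_object,
                        bp->b_offset / PAGE_SIZE + i);
                    bp->b_pages[i] = m;
            }
            return (m);
    }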
VM buffers are used to track I/O operations within the kernel. Unfortunately, the I/O implementation is also somewhat of a hack because the
kernel wants to clear the dirty bit on the underlying pages the moment it queues the I/O to the VFS device, not when the physical I/O is
actually initiated. This can create confusion within file system devices that use delayed-writes because you wind up with pages marked clean
that are actually still dirty. If not treated carefully, these pages could be thrown away! Indeed, a number of serious bugs related to this
hack were not fixed until the 2.2.8/3.0 release. The kernel uses an instantiated VM buffer (i.e., struct buf) to place-mark pages in this
special state. The buffer is typically flagged B_DELWRI. When a device no longer needs a buffer it typically flags it as B_RELBUF. Due to
the underlying pages being marked clean, the B_DELWRI|B_RELBUF combination must be interpreted to mean that the buffer is still actually
dirty and must be written to its backing store before it can actually be released. In the case where B_DELWRI is not set, the underlying
dirty pages are still properly marked as dirty and the buffer can be completely freed without losing that clean/dirty state information.
(XXX do we have to check other flags in regards to this situation ???)
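The release rule these flags imply can be written out as a sketch. The flag values, the helpers, and the function itself are
hypothetical stand-ins for the real <sys/buf.h> definitions and buffer-release logic:

    /* Illustrative flag values; the real ones are defined in <sys/buf.h>. */
    #define B_DELWRI  0x0001  /* delayed write: pages marked clean at queue time */
    #define B_RELBUF  0x0002  /* device is done with the buffer */

    struct flag_buf { int b_flags; };

    /* Hypothetical stubs for flushing and freeing a buffer. */
    static void flush_to_backing_store(struct flag_buf *bp) { (void)bp; }
    static void free_buf(struct flag_buf *bp)               { (void)bp; }

    /*
     * With B_DELWRI set, the page-level dirty bits were cleared when
     * the write was queued, so the buffer is the only record that the
     * data is dirty: flush it before release. Without B_DELWRI the
     * page dirty bits are still accurate and the buffer can be freed
     * outright.
     */
    static void
    sketch_release(struct flag_buf *bp)
    {
            if ((bp->b_flags & (B_DELWRI | B_RELBUF)) ==
                (B_DELWRI | B_RELBUF)) {
                    flush_to_backing_store(bp);
                    bp->b_flags &= ~B_DELWRI;
            }
            free_buf(bp);
    }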
The kernel reserves a portion of its KVM space to hold VM Buffer's data maps. Even though this is virtual space (since the buffers are
mapped from the buffer cache), we cannot make it arbitrarily large because instantiated VM Buffers (struct buf's) prevent their underlying
pages in the buffer cache from being freed. This can complicate the life of the paging system.
HISTORY
The buf manual page was originally written by Matthew Dillon and first appeared in FreeBSD 3.1, December 1998.
BSD December 22, 1998 BSD