UNMAP_MAPPING_RANGE(9)     Memory Management in Linux     UNMAP_MAPPING_RANGE(9)
NAME
unmap_mapping_range - unmap the portion of all mmaps in the specified address_space corresponding to the specified page range in the
underlying file.
SYNOPSIS
void unmap_mapping_range(struct address_space *mapping, loff_t const holebegin, loff_t const holelen, int even_cows);
ARGUMENTS
mapping
the address space containing mmaps to be unmapped.
holebegin
byte in first page to unmap, relative to the start of the underlying file. This will be rounded down to a PAGE_SIZE boundary. Note that
this is different from truncate_pagecache, which must keep the partial page. In contrast, we must get rid of partial pages.
holelen
size of prospective hole in bytes. This will be rounded up to a PAGE_SIZE boundary. A holelen of zero truncates to the end of the file.
even_cows
1 when truncating a file, to unmap even private COWed pages; 0 when invalidating pagecache, so that private data is not thrown away. Both cases are shown in the EXAMPLE below.
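EXAMPLE
A minimal sketch of both calling conventions, in the shape of a filesystem's truncate and invalidate paths. The helper names (example_shrink, example_invalidate) and the pairing with truncate_inode_pages are illustrative assumptions, not code from any real filesystem:

    #include <linux/fs.h>
    #include <linux/mm.h>

    /* Shrinking the file: stale mappings of the dropped range must go,
     * including private COWed copies, so pass even_cows=1. holelen=0
     * means "unmap to the end of the file". */
    static void example_shrink(struct inode *inode, loff_t new_size)
    {
            unmap_mapping_range(inode->i_mapping, new_size, 0, 1);
            truncate_inode_pages(inode->i_mapping, new_size);
    }

    /* Invalidating pagecache only (the file data may be refetched, but
     * private COWed pages belong to the process): pass even_cows=0. */
    static void example_invalidate(struct address_space *mapping,
                                   loff_t start, loff_t len)
    {
            unmap_mapping_range(mapping, start, len, 0);
    }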
COPYRIGHT
Kernel Hackers Manual 3.10          June 2014          UNMAP_MAPPING_RANGE(9)
GET_USER_PAGES(9)          Memory Management in Linux          GET_USER_PAGES(9)
NAME
get_user_pages - pin user pages in memory
SYNOPSIS
int get_user_pages(struct task_struct *tsk, struct mm_struct *mm, unsigned long start, int nr_pages, int write, int force,
struct page **pages, struct vm_area_struct **vmas);
ARGUMENTS
tsk
task_struct of target task
mm
mm_struct of target mm
start
starting user address
nr_pages
number of pages from start to pin
write
whether pages will be written to by the caller
force
whether to force write access even if user mapping is readonly. This will result in the page being COWed even in MAP_SHARED mappings.
You do not want this.
pages
array that receives pointers to the pages pinned. Should be at least nr_pages long. Or NULL, if caller only intends to ensure the pages
are faulted in.
vmas
array of pointers to vmas corresponding to each page. Or NULL if the caller does not require them.
DESCRIPTION
Returns number of pages pinned. This may be fewer than the number requested. If nr_pages is 0 or negative, returns 0. If no pages were
pinned, returns -errno. Each page returned must be released with a put_page call when it is finished with. vmas will only remain valid
while mmap_sem is held.
Must be called with mmap_sem held for read or write.
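As a sketch of this calling convention (the helper name example_pin is an illustrative assumption; the signature matches the kernel generation documented here, with an explicit task and mm and mmap_sem held across the call):

    #include <linux/mm.h>
    #include <linux/sched.h>

    /* Pin up to nr pages of the current task starting at uaddr,
     * read-only. Returns the number actually pinned, or -errno. */
    static int example_pin(unsigned long uaddr, int nr, struct page **pages)
    {
            int pinned;

            down_read(&current->mm->mmap_sem);
            pinned = get_user_pages(current, current->mm, uaddr, nr,
                                    0 /* write */, 0 /* force */,
                                    pages, NULL);
            up_read(&current->mm->mmap_sem);

            /* May be fewer than nr; each page up to pinned must later
             * be released with put_page. */
            return pinned;
    }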
get_user_pages walks a process's page tables and takes a reference to each struct page that each user address corresponds to at a given
instant. That is, it takes the page that would be accessed if a user thread accesses the given user virtual address at that instant.
This does not guarantee that the page exists in the user mappings when get_user_pages returns, and there may even be a completely different
page there in some cases (e.g. if mmapped pagecache has been invalidated and subsequently re-faulted). However it does guarantee that the
page won't be freed completely. And mostly callers simply care that the page contains data that was valid *at some point in time*.
Typically, an IO or similar operation cannot guarantee anything stronger anyway because locks can't be held over the syscall boundary.
If write=0, the page must not be written to. If the page is written to, set_page_dirty (or set_page_dirty_lock, as appropriate) must be
called after the page is finished with, and before put_page is called.
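The release ordering this implies for pages pinned with write=1 looks like the following sketch (example_release_dirty is an illustrative name):

    #include <linux/mm.h>

    /* Mark each page dirty before dropping the pin reference; doing it
     * the other way around risks the data being reclaimed unwritten. */
    static void example_release_dirty(struct page **pages, int pinned)
    {
            int i;

            for (i = 0; i < pinned; i++) {
                    set_page_dirty_lock(pages[i]);
                    put_page(pages[i]);
            }
    }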
get_user_pages is typically used for fewer-copy IO operations, to get a handle on the memory by some means other than accesses via the user
virtual addresses. The pages may be submitted for DMA to devices or accessed via their kernel linear mapping (via the kmap APIs). Care
should be taken to use the correct cache flushing APIs.
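Reading a pinned page through its kernel mapping might look like the sketch below (example_copy_from_page is an illustrative name; the flush_dcache_page placement assumes user space may have written the page through an aliasing mapping):

    #include <linux/highmem.h>
    #include <linux/string.h>

    static void example_copy_from_page(struct page *page, void *dst,
                                       size_t len)
    {
            void *vaddr;

            /* Make user-space writes visible to the kernel on
             * architectures with aliasing (e.g. VIVT) caches. */
            flush_dcache_page(page);

            vaddr = kmap(page);     /* handles highmem pages too */
            memcpy(dst, vaddr, min_t(size_t, len, PAGE_SIZE));
            kunmap(page);
    }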
See also get_user_pages_fast, for performance-critical applications.
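In the kernel generation documented here, the fast variant takes no task/mm arguments and needs no mmap_sem; a one-line sketch of its use (the four-argument signature is an assumption tied to this era, before the gup_flags rework):

    /* Lockless fast path; same pin-and-put_page semantics as above. */
    pinned = get_user_pages_fast(uaddr, nr, 1 /* write */, pages);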
COPYRIGHT
Kernel Hackers Manual 2.6.          July 2010          GET_USER_PAGES(9)