
mpage_readpages(9) [centos man page]

MPAGE_READPAGES(9)						   The Linux VFS						MPAGE_READPAGES(9)

NAME
       mpage_readpages - populate an address space with some pages & start reads against them

SYNOPSIS
       int mpage_readpages(struct address_space * mapping, struct list_head * pages, unsigned nr_pages, get_block_t get_block);

ARGUMENTS
       mapping
              the address_space

       pages
              The address of a list_head which contains the target pages. These pages have their ->index populated and are
              otherwise uninitialised. The page at pages->prev has the lowest file offset, and reads should be issued in
              pages->prev to pages->next order.

       nr_pages
              The number of pages at *pages

       get_block
              The filesystem's block mapper function.

DESCRIPTION
       This function walks the pages and the blocks within each page, building and emitting large BIOs.

       If anything unusual happens, such as:

       - encountering a page which has buffers
       - encountering a page which has a non-hole after a hole
       - encountering a page with non-contiguous blocks

       then this code just gives up and calls the buffer_head-based read function. It does handle a page which has holes at
       the end - that is a common case: the end-of-file on blocksize < PAGE_CACHE_SIZE setups.

BH_BOUNDARY EXPLANATION
       There is a problem. The mpage read code assembles several pages, gets all their disk mappings, and then submits them
       all. That's fine, but obtaining the disk mappings may require I/O - reads of indirect blocks, for example. So an mpage
       read of the first 16 blocks of an ext2 file will cause I/O to be submitted in the following order:

              12 0 1 2 3 4 5 6 7 8 9 10 11 13 14 15 16

       because the indirect block has to be read to get the mappings of blocks 13, 14, 15 and 16. Obviously, this impacts
       performance.

       So what we do is allow the filesystem's get_block function to set BH_Boundary when it maps block 11. BH_Boundary
       says: mapping of the block after this one will require I/O against a block which is probably close to this one. So
       you should push whatever I/O you have currently accumulated. This all causes the disk requests to be issued in the
       correct order.

COPYRIGHT
Kernel Hackers Manual 3.10 June 2014 MPAGE_READPAGES(9)
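
As an illustration (not part of the man page above), here is a minimal sketch of how a filesystem of this kernel era typically wires mpage_readpages into its address_space_operations, and how its get_block callback can set the boundary flag described in BH_BOUNDARY EXPLANATION. The myfs_* names are hypothetical, and myfs_lookup_block stands in for the filesystem's real block-mapping code (it is only declared here, not defined):

    #include <linux/fs.h>
    #include <linux/mpage.h>
    #include <linux/buffer_head.h>

    /*
     * Hypothetical: the filesystem's own mapping routine.  Resolves logical
     * block 'iblock' of 'inode' to an on-disk block number in *phys and sets
     * *boundary when mapping iblock + 1 would require further metadata I/O
     * (e.g. reading an indirect block).  Not defined in this sketch.
     */
    static int myfs_lookup_block(struct inode *inode, sector_t iblock,
                                 sector_t *phys, bool *boundary);

    /* Block mapper passed to mpage_readpages as the get_block_t argument. */
    static int myfs_get_block(struct inode *inode, sector_t iblock,
                              struct buffer_head *bh_result, int create)
    {
            sector_t phys;
            bool boundary;

            if (myfs_lookup_block(inode, iblock, &phys, &boundary))
                    return -EIO;

            /* Record the mapping in the buffer_head handed to us. */
            map_bh(bh_result, inode->i_sb, phys);

            if (boundary)
                    /* Ask the mpage code to submit what it has accumulated
                     * before the upcoming metadata read, so requests reach
                     * the disk in ascending order. */
                    set_buffer_boundary(bh_result);
            return 0;
    }

    /* ->readpages simply hands the whole batch to mpage_readpages. */
    static int myfs_readpages(struct file *file, struct address_space *mapping,
                              struct list_head *pages, unsigned nr_pages)
    {
            return mpage_readpages(mapping, pages, nr_pages, myfs_get_block);
    }

    static const struct address_space_operations myfs_aops = {
            .readpages = myfs_readpages,
            /* .readpage, .writepage, ... */
    };

ext2, for example, follows this pattern: its ->readpages is a one-line call to mpage_readpages with ext2_get_block, which sets the boundary flag when the next block's mapping would require reading an indirect block.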

Check Out this Related Man Page

GET_USER_PAGES(9)					    Memory Management in Linux						 GET_USER_PAGES(9)

NAME
       get_user_pages - pin user pages in memory

SYNOPSIS
       int get_user_pages(struct task_struct * tsk, struct mm_struct * mm, unsigned long start, int nr_pages, int write, int force, struct page ** pages, struct vm_area_struct ** vmas);

ARGUMENTS
       tsk    task_struct of target task

       mm     mm_struct of target mm

       start  starting user address

       nr_pages
              number of pages from start to pin

       write  whether pages will be written to by the caller

       force  whether to force write access even if user mapping is readonly. This will result in the page being COWed even
              in MAP_SHARED mappings. You do not want this.

       pages  array that receives pointers to the pages pinned. Should be at least nr_pages long. Or NULL, if caller only
              intends to ensure the pages are faulted in.

       vmas   array of pointers to vmas corresponding to each page. Or NULL if the caller does not require them.

DESCRIPTION
       Returns number of pages pinned. This may be fewer than the number requested. If nr_pages is 0 or negative, returns 0.
       If no pages were pinned, returns -errno. Each page returned must be released with a put_page call when it is finished
       with. vmas will only remain valid while mmap_sem is held.

       Must be called with mmap_sem held for read or write.

       get_user_pages walks a process's page tables and takes a reference to each struct page that each user address
       corresponds to at a given instant. That is, it takes the page that would be accessed if a user thread accesses the
       given user virtual address at that instant.

       This does not guarantee that the page exists in the user mappings when get_user_pages returns, and there may even be
       a completely different page there in some cases (e.g. if mmapped pagecache has been invalidated and subsequently
       re-faulted). However it does guarantee that the page won't be freed completely. Mostly, callers simply care that the
       page contains data that was valid *at some point in time*. Typically, an IO or similar operation cannot guarantee
       anything stronger anyway because locks can't be held over the syscall boundary.

       If write=0, the page must not be written to. If the page is written to, set_page_dirty (or set_page_dirty_lock, as
       appropriate) must be called after the page is finished with, and before put_page is called.

       get_user_pages is typically used for fewer-copy IO operations, to get a handle on the memory by some means other than
       accesses via the user virtual addresses. The pages may be submitted for DMA to devices or accessed via their kernel
       linear mapping (via the kmap APIs). Care should be taken to use the correct cache flushing APIs.

       See also get_user_pages_fast, for performance critical applications.

COPYRIGHT
Kernel Hackers Manual 2.6. July 2010 GET_USER_PAGES(9)
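
As an illustration (not part of the man page above), a minimal sketch of the usual call pattern for the 2.6-era signature shown in the SYNOPSIS: pin part of a user buffer for writing, use the pages, then dirty and release them. mydrv_pin_user_buffer is a hypothetical helper name; the locking and release rules follow the DESCRIPTION:

    #include <linux/mm.h>
    #include <linux/sched.h>
    #include <linux/pagemap.h>

    /*
     * Hypothetical helper: pin 'nr_pages' pages of the current task's user
     * buffer starting at 'uaddr' (assumed page-aligned) for write access,
     * then dirty and release them.
     */
    static int mydrv_pin_user_buffer(unsigned long uaddr, int nr_pages,
                                     struct page **pages)
    {
            int i, pinned;

            /* mmap_sem must be held (read is enough) across the call. */
            down_read(&current->mm->mmap_sem);
            pinned = get_user_pages(current, current->mm, uaddr, nr_pages,
                                    1 /* write */, 0 /* force */,
                                    pages, NULL);
            up_read(&current->mm->mmap_sem);

            if (pinned < 0)
                    return pinned;          /* no pages pinned: -errno */

            /* ... submit the pages for DMA or access them via kmap() ... */

            for (i = 0; i < pinned; i++) {
                    /* We asked for write access, so mark each page dirty
                     * before dropping our reference to it. */
                    set_page_dirty_lock(pages[i]);
                    put_page(pages[i]);
            }

            /* May be fewer pages than requested; the caller must cope. */
            return pinned;
    }

Note that the return value can legitimately be smaller than nr_pages, so real callers either retry for the remainder or fall back to a slower path.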