This is what I found in pmap of a bash process. If another bash process is started, will these areas be shared, and how are they shared when the new bash is created? (Does the kernel keep any info to know that these files are loaded at these addresses?)
It's done with memory mapping.
Code:
$ cat owls.c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/mman.h>

int main(void)
{
        int fd=open("filename", O_RDWR|O_CREAT, 0660);
        if(fd < 0)
        {
                perror("Couldn't open");
                return(1);
        }
        // Make sure the file is at least one page long before mapping it.
        // The empty space in the file will be filled with NULs.
        ftruncate(fd, getpagesize());
        // Map the first page of the file into 'mem'.
        // getpagesize() is 4096 or 8192 bytes on most systems.
        void *mem=mmap(NULL, getpagesize(), PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
        if(mem == MAP_FAILED)
        {
                perror("Couldn't map");
                close(fd);
                return(1);
        }
        printf("Old string was: '%s'\n", (char *)mem);
        strcpy(mem, "THE OWLS ARE NOT WHAT THEY SEEM\n");
        munmap(mem, getpagesize());
        close(fd);
        return(0);
}
$ rm -f filename
$ gcc owls.c -o owls
$ ./owls
Old string was: ''
$ ./owls
Old string was: 'THE OWLS ARE NOT WHAT THEY SEEM
'
$ cat filename
THE OWLS ARE NOT WHAT THEY SEEM
$
Any dynamically-linked code is loaded in this fashion, though it'd be mapped read-only, not read-write.
I suppose the kernel would just need to track the device and inode number of the file backing each mapping. If someone maps the same inode on the same device, and the mapped ranges intersect, some or all of the pages can be shared.
The memory savings go deeper than just not loading the same library 23 times. The kernel uses hardware features of the CPU to be notified when a process tries to access a mapped page -- like a segmentation fault, except that instead of killing the process, the kernel pauses it, loads the page, and then lets it continue. This allows the kernel to load only the pages you're actually using, rather than blindly reading in the entire file.
Memory may eventually be paged back out if it falls into disuse, as well. In this manner, mapped segments can operate on files larger than the entire available memory on your system. Many large programs, such as databases, use memory mapping to operate on their files.
Last edited by Corona688; 09-22-2011 at 02:37 PM..
NUMA(7)                    Linux Programmer's Manual                   NUMA(7)

NAME
numa - overview of Non-Uniform Memory Architecture
DESCRIPTION
Non-Uniform Memory Access (NUMA) refers to multiprocessor systems whose memory is divided into multiple memory nodes. The access time of a
memory node depends on the relative locations of the accessing CPU and the accessed node. (This contrasts with a symmetric multiprocessor
system, where the access time for all of the memory is the same for all CPUs.) Normally, each CPU on a NUMA system has a local memory node
whose contents can be accessed faster than the memory in the node local to another CPU or the memory on a bus shared by all CPUs.
NUMA system calls
The Linux kernel implements the following NUMA-related system calls: get_mempolicy(2), mbind(2), migrate_pages(2), move_pages(2), and
set_mempolicy(2). However, applications should normally use the interface provided by libnuma; see "Library Support" below.
/proc/[number]/numa_maps (since Linux 2.6.14)
This file displays information about a process's NUMA memory policy and allocation.
Each line contains information about a memory range used by the process, displaying--among other information--the effective memory policy
for that memory range and on which nodes the pages have been allocated.
numa_maps is a read-only file. When /proc/<pid>/numa_maps is read, the kernel will scan the virtual address space of the process and
report how memory is used. One line is displayed for each unique memory range of the process.
The first field of each line shows the starting address of the memory range. This field allows a correlation with the contents of the
/proc/<pid>/maps file, which contains the end address of the range and other information, such as the access permissions and sharing.
The second field shows the memory policy currently in effect for the memory range. Note that the effective policy is not necessarily the
policy installed by the process for that memory range. Specifically, if the process installed a "default" policy for that range, the
effective policy for that range will be the process policy, which may or may not be "default".
The rest of the line contains information about the pages allocated in the memory range, as follows:
N<node>=<nr_pages>
The number of pages allocated on <node>. <nr_pages> includes only pages currently mapped by the process. Page migration and memory
reclaim may have temporarily unmapped pages associated with this memory range. These pages may show up again only after the process
has attempted to reference them. If the memory range represents a shared memory area or file mapping, other processes may currently
have additional pages mapped in a corresponding memory range.
file=<filename>
The file backing the memory range. If the file is mapped as private, write accesses may have generated COW (Copy-On-Write) pages in
this memory range. These pages are displayed as anonymous pages.
heap Memory range is used for the heap.
stack Memory range is used for the stack.
huge Huge memory range. The page counts shown are huge pages and not regular sized pages.
anon=<pages>
The number of anonymous pages in the range.
dirty=<pages>
Number of dirty pages.
mapped=<pages>
Total number of mapped pages, if different from dirty and anon pages.
mapmax=<count>
Maximum mapcount (number of processes mapping a single page) encountered during the scan. This may be used as an indicator of the
degree of sharing occurring in a given memory range.
swapcache=<count>
Number of pages that have an associated entry on a swap device.
active=<pages>
The number of pages on the active list. This field is shown only if different from the number of pages in this range. This means
that some inactive pages exist in the memory range that may be removed from memory by the swapper soon.
writeback=<pages>
Number of pages that are currently being written out to disk.
CONFORMING TO
No standards govern NUMA interfaces.
NOTES
The Linux NUMA system calls and /proc interface are available only if the kernel was configured and built with the CONFIG_NUMA option.
Library support
Link with -lnuma to get the system call definitions. libnuma and the required <numaif.h> header are available in the numactl package.
However, applications should not use these system calls directly. Instead, the higher level interface provided by the numa(3) functions in
the numactl package is recommended. The numactl package is available at <ftp://oss.sgi.com/www/projects/libnuma/download/>. The package
is also included in some Linux distributions. Some distributions include the development library and header in the separate numactl-devel
package.
SEE ALSO
get_mempolicy(2), mbind(2), move_pages(2), set_mempolicy(2), numa(3), cpuset(7), numactl(8)
COLOPHON
This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the
latest version of this page, can be found at https://www.kernel.org/doc/man-pages/.
Linux 2012-08-05 NUMA(7)