Full Discussion: UNIX memory management
Posted by gfhgfnhhn in UNIX for Advanced & Expert Users, 04-03-2006

I am looking for books or websites that explain UNIX memory management in detail.

Do you know of any useful material?
 

7 More Discussions You Might Find Interesting

1. Programming

Programming for Memory Management

Hi, I am relatively new to programming on the UNIX platform. I was wondering whether there is any system call that lets a process access the system's page table, or swap pages out of main memory by specifying a page number. I am trying to implement various page replacement algorithms such as LRU, OPT, FIFO, etc.... (1 Reply)
Discussion started by: jayesch
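Those replacement policies can be prototyped entirely in user space before worrying about kernel page tables. Below is a minimal C sketch that counts page faults for FIFO and LRU over a made-up reference string; the frame count and the reference string are illustrative assumptions, not anything taken from the thread.

Code:
/* Minimal sketch: FIFO and LRU page replacement over a hypothetical
 * reference string. FRAMES and refs[] are illustrative choices. */
#include <stdio.h>

#define FRAMES 3

static int simulate(const int *refs, int n, int lru)
{
    int frames[FRAMES];
    long stamp[FRAMES];   /* load time (FIFO) or last-use time (LRU) */
    int faults = 0;
    long clock = 0;

    for (int i = 0; i < FRAMES; i++) { frames[i] = -1; stamp[i] = -1; }

    for (int i = 0; i < n; i++) {
        int page = refs[i], hit = -1;
        clock++;
        for (int j = 0; j < FRAMES; j++)
            if (frames[j] == page) { hit = j; break; }
        if (hit >= 0) {               /* page already resident */
            if (lru) stamp[hit] = clock;
            continue;
        }
        faults++;
        int victim = 0;               /* evict the frame with the oldest stamp */
        for (int j = 1; j < FRAMES; j++)
            if (stamp[j] < stamp[victim]) victim = j;
        frames[victim] = page;
        stamp[victim] = clock;
    }
    return faults;
}

int main(void)
{
    const int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = (int)(sizeof refs / sizeof refs[0]);
    printf("FIFO faults: %d\n", simulate(refs, n, 0));
    printf("LRU  faults: %d\n", simulate(refs, n, 1));
    return 0;
}

OPT needs knowledge of future references, so it only makes sense in a simulator like this one rather than in a real kernel.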

2. UNIX for Advanced & Expert Users

Virtual memory management, swapping and paging

Can anybody explain to me the concepts of virtual memory management, swapping and paging? Although I roughly know what they are, I need a more solid distinction between them, and I also want to figure out the relations between them. Do you have any well-defined definitions for these concepts? (2 Replies)
Discussion started by: gfhgfnhhn
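One concrete way to see paging in action is to watch page faults happen: a minor fault is resolved without disk I/O (the kernel simply backs a virtual page with a physical frame), while a major fault has to read from disk or swap. A minimal sketch, assuming Linux-style mmap flags (some systems spell MAP_ANONYMOUS as MAP_ANON) and POSIX getrusage():

Code:
/* Minimal sketch: touch a large anonymous mapping one byte per page
 * and report minor/major page faults via getrusage(). The 64 MiB size
 * is an arbitrary illustrative choice. */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/resource.h>

int main(void)
{
    const size_t len = 64UL * 1024 * 1024;
    struct rusage before, after;

    long page = sysconf(_SC_PAGESIZE);
    if (page <= 0) page = 4096;

    getrusage(RUSAGE_SELF, &before);

    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* Writing one byte per page forces the kernel to actually back
     * each virtual page with a physical frame (demand paging). */
    for (size_t off = 0; off < len; off += (size_t)page)
        buf[off] = 1;

    getrusage(RUSAGE_SELF, &after);
    printf("minor faults: %ld\n", after.ru_minflt - before.ru_minflt);
    printf("major faults: %ld\n", after.ru_majflt - before.ru_majflt);

    munmap(buf, len);
    return 0;
}

Run it on an idle machine and almost all faults are minor; run it while memory is scarce and pages start coming back from swap as major faults, which is the practical difference between paging and swapping activity.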

3. Solaris

Memory management in zones

What is the difference between setting a zone's capped-memory from zonecfg and setting the rctl name zone.max-locked-memory? If I change zone.max-locked-memory with prctl, it does not change in rcapstat, but if I change it with rcapadm it is reflected in the rcapstat output. (0 Replies)
Discussion started by: fugitive

4. UNIX for Advanced & Expert Users

KDE memory management

Hi everyone! I am running KDE 3.5 on Slackware 12.1 with 1.5 GB of RAM and have the following question: running ps at regular intervals of 1 minute, I see that 1.3 GB of RAM is being used, leaving me with 0.2 GB of free memory. I tried locating the most memory-hungry app running, which was Kontact, and... (0 Replies)
Discussion started by: kerb41
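On Linux, much of that "used" 1.3 GB is usually page cache that the kernel gives back the moment an application asks for memory. A minimal sketch, Linux-specific (it parses /proc/meminfo directly), that separates memory actually held by programs from reclaimable buffers and cache:

Code:
/* Minimal sketch, Linux-specific: read /proc/meminfo and show how much
 * of the "used" memory is really just reclaimable page cache. */
#include <stdio.h>
#include <string.h>

static long field(const char *key)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    long kb = -1;

    if (!f) return -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, key, strlen(key)) == 0) {
            sscanf(line + strlen(key), " %ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(void)
{
    long total   = field("MemTotal:");
    long free_   = field("MemFree:");
    long buffers = field("Buffers:");
    long cached  = field("Cached:");

    if (total < 0 || free_ < 0 || buffers < 0 || cached < 0) {
        fprintf(stderr, "could not parse /proc/meminfo\n");
        return 1;
    }
    printf("total:            %8ld kB\n", total);
    printf("free:             %8ld kB\n", free_);
    printf("buffers + cache:  %8ld kB\n", buffers + cached);
    printf("used by programs: %8ld kB\n", total - free_ - buffers - cached);
    return 0;
}

This is essentially what the "-/+ buffers/cache" line of free reports on kernels of that era.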

5. Shell Programming and Scripting

Memory management

Hello all. I have a script that uses two arrays at the beginning. It saves certain values that I am extracting from the df -h command into array1[i] and array2[i], where i runs from 0 to 9. It then goes on and saves the values of the arrays into variables: for i = 0 to 9, tmp_i = array2[i]. // I am not writing the... (4 Replies)
Discussion started by: Junaid Subhani

6. Solaris

Solaris 10 - memory management confusion

Hello, I have a problem. My server is running with the following memory information (from top): Memory: 32G phys mem, 4195M free mem, 10G total swap, 9788M free swap. So I think: no problem, 4 GB free, no swapping. Now our programmer wants to know which process is taking how much memory. I... (5 Replies)
Discussion started by: roorbacj
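On Solaris, prstat and pmap -x answer the per-process question from the command line; programmatically, the same numbers are available in /proc/<pid>/psinfo. A hedged sketch, assuming the psinfo_t layout from <procfs.h> (pr_size and pr_rssize are reported in kilobytes):

Code:
/* Hedged sketch for Solaris: read /proc/<pid>/psinfo and print the
 * process's virtual size and resident set size. The PID is whatever
 * process you want to inspect. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <procfs.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }

    char path[64];
    snprintf(path, sizeof path, "/proc/%s/psinfo", argv[1]);

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror(path); return 1; }

    psinfo_t ps;
    if (read(fd, &ps, sizeof ps) != sizeof ps) {
        perror("read");
        close(fd);
        return 1;
    }
    close(fd);

    printf("%-16s size: %lu KB  rss: %lu KB\n",
           ps.pr_fname,
           (unsigned long)ps.pr_size,
           (unsigned long)ps.pr_rssize);
    return 0;
}

Summing the RSS of every process will overstate total usage, since shared libraries and shared memory segments are counted once per process; pmap -x breaks that sharing down per mapping.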

7. UNIX for Advanced & Expert Users

Efficient UNIX Memory management for Running MapReduce Jobs.

We are trying to set up a single-node Cloudera Hadoop cluster with 16 GB of RAM on a Linux machine. We are setting up version 5.4.2. Now, when we check statistics after the installation and run the top command, we find that only 1-2 GB is available. When we trigger a sample MapReduce job, no... (2 Replies)
Discussion started by: ketankirange
RADRELAY(8)                         FreeRADIUS Daemon                         RADRELAY(8)

NAME
       radrelay - deprecated command

DESCRIPTION
       The functions of radrelay have been added to radiusd. One benefit is that a
       single instance of radiusd can read multiple detail files, among other things.
       The rlm_sql_log module does something similar, but for SQL queries; see its
       man page for details.

REPLICATION FOR BACKUPS
       Many sites run multiple RADIUS servers: at least one primary and one backup
       server. When the primary goes down, most NASes detect that and switch to the
       backup server. That causes your accounting packets to go to the backup server,
       and some NASes do not even switch back to the primary server when it comes
       back up. The result is that accounting records are missed, and/or the
       administrator must jump through hoops to combine the different detail files
       from multiple servers. It also means that the session database ("radutmp",
       used for radwho and simultaneous-use detection) gets out of sync. We solve
       this issue by "relaying" packets from one server to another, so that both have
       the same set of accounting data. See raddb/sites-available/buffered-sql for
       more information.

BUFFERING FOR HIGH-LOAD SERVERS
       If the RADIUS server suddenly receives many accounting packets, there may be
       insufficient CPU power to process them all in a timely manner. This problem is
       especially noticeable when the accounting packets are going to a back-end
       database. Similarly, you may have one database that tracks "live" sessions and
       another that tracks historical accounting data. In that case, accessing the
       first database is fast, as it is small. Accessing the second database may be
       slower, as it may contain multiple gigabytes of data. In addition, writing to
       the first database in a timely manner is important, while data may be written
       to the second database with a delay of a few minutes without any harm being
       done. See raddb/sites-available/copy-to-home-server for more information.

SEE ALSO
       radiusd(8), rlm_sql_log(5)

AUTHOR
       The FreeRADIUS Server Project

                                    23 October 2007                          RADRELAY(8)