Operating Systems > AIX > CPU Load balancing in AIX 5.2, Oracle
Post 302631927 by Scrutinizer on Saturday, 28 April 2012, 12:32 PM
There isn't much to explain. I just added up the CPU busy cycles (columns 2 and 3) and the idle cycles (columns 4 and 5). I thought perhaps that might give a clearer impression of the load on the various CPUs...
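For reference, a minimal sketch of that sum, assuming sar -P ALL style per-CPU lines of the form "cpu %usr %sys %wio %idle". The tool and the column positions are assumptions, so adjust the field numbers to whatever your output actually looks like:

# Sum busy (%usr + %sys) and idle (%wio + %idle) cycles per CPU.
# Assumes the CPU id is in field 1; header and average lines are skipped.
sar -P ALL 5 1 | awk '
$1 ~ /^[0-9]+$/ {
        printf "cpu %-3s busy %6.1f idle %6.1f\n", $1, $2 + $3, $4 + $5
}'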
 

10 More Discussions You Might Find Interesting

1. UNIX for Dummies Questions & Answers

Question about load balancing

If you have two or more servers doing load balancing, are the servers mirroring one another? If images, etc., are uploaded, will they be stored on all the servers so that if one server goes down, the images will be served up by another server? (1 Reply)
Discussion started by: wvmlt

2. AIX

idea on hacmp load balancing

Hi All, Any idea about load balancing on HACMP? Or is load balancing available only at the LPAR level? Any idea or link info will do. Thanks in advance. (2 Replies)
Discussion started by: itik

3. Ubuntu

perlbal and load balancing

Hi guys, I wonder if someone would be able to assist with my problem. I have just set up a load balancer for a company I am working for. HTTP redirection is working fine, however they want to load balance SSH and FTP too. At the moment the perlbal config looks like: CREATE POOL webhttp ... (1 Reply)
Discussion started by: JayC89

4. Solaris

Load balancing with IPMP

Is it possible to do load balancing (incoming and outgoing) with IPMP in Solaris 10, like Sun Trunking? If yes, what are the steps involved? I know how to do failover IPMP, both link-based and probe-based, but I'm looking for possible load balancing. (3 Replies)
Discussion started by: fugitive

5. Web Development

Load Balancing in Apache

Hi All, I have one webserver which has an application for a set of internal users; it can be accessed at http://server1.com. I am planning to load balance this application. For that I have cloned this server and built a new one which can be accessed using http://server2.com. Also i... (2 Replies)
Discussion started by: Tuxidow

6. Linux

HTTP load balancing.

Hi, We have 2 pools of servers. Let's call them A and B; they would contain 2 servers each. Pool A will be hosting www.example.com/app/v1 and pool B will be hosting www.example.com/app/v2. Clients will be requesting the right URL (/v1 or /v2) but will be hitting just one IP. I'd like to: 1)... (3 Replies)
Discussion started by: chrisfb

7. IP Networking

Load Balancing ppp

Hello everybody, How can I load balance two slow ppp (GPRS) connections with iptables? (4 Replies)
Discussion started by: rink
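A common way to balance two slow links like this is equal-cost multipath routing with iproute2, rather than iptables alone. A minimal sketch, assuming the links come up as ppp0 and ppp1 (note this balances per flow, not per packet):

# Install a multipath default route over both assumed ppp links.
# Run as root once both ppp sessions are up.
ip route replace default scope global \
    nexthop dev ppp0 weight 1 \
    nexthop dev ppp1 weight 1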

8. UNIX for Advanced & Expert Users

Help in MQ load balancing

Hi, Currently we have 3 old and 3 new servers catering to live traffic. As my components move from legacy interfaces to MQ ones, we want to have load balancing of the old interfaces available on the MQ interface as well. For this, we want to send only 30% of all MQ traffic to the 3 old live servers, and want... (1 Reply)
Discussion started by: senkerth

9. UNIX for Advanced & Expert Users

Load balancing in Autosys

Hi, I am working on a development project where I have to migrate many jobs from Tidal to Autosys R11. During this project we came across the following requirements. 1. There are 3 real machines. There could be many jobs activated simultaneously, but only one job should execute at a time and... (0 Replies)
Discussion started by: sujeetp

10. Shell Programming and Scripting

Load Balancing in UNIX

Dear All, Can anyone help me with this request? Here is the case: I have 20 files which I need to FTP to 5 servers. I want to know if there is any possibility to make a load balancer which transfers files in a round-robin manner to the 5 servers. As per the theoretical algorithm, what I think, the flow can... (9 Replies)
Discussion started by: Zaib
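For that last request, a minimal round-robin sketch in plain sh; the host names, the ftpuser/ftppass credentials, and the file glob are placeholders, not a definitive implementation:

#!/bin/sh
# Distribute a batch of files across 5 servers in round-robin order.
servers="host1 host2 host3 host4 host5"
set -- $servers
for f in file*.dat; do
    server=$1
    echo "sending $f to $server"
    ftp -n "$server" <<EOF
user ftpuser ftppass
put $f
bye
EOF
    shift                        # rotate: move the host just used
    set -- "$@" "$server"        # to the end of the list
done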
POLLING(4)              BSD Kernel Interfaces Manual              POLLING(4)

NAME
     polling -- device polling support

SYNOPSIS
     options DEVICE_POLLING
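As a hedged sketch of how that option is typically added to a custom FreeBSD kernel (the MYKERNEL config name is an assumption; any custom config will do):

# Build and install a kernel with DEVICE_POLLING enabled.
cd /usr/src/sys/`uname -m`/conf
cp GENERIC MYKERNEL
echo 'options DEVICE_POLLING' >> MYKERNEL
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL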
DESCRIPTION
     Device polling (polling for brevity) refers to a technique that lets
     the operating system periodically poll devices, instead of relying on
     the devices to generate interrupts when they need attention.  This
     might seem inefficient and counterintuitive, but when done properly,
     polling gives more control to the operating system on when and how to
     handle devices, with a number of advantages in terms of system
     responsiveness and performance.  In particular, polling reduces the
     overhead for context switches which is incurred when servicing
     interrupts, and gives more control on the scheduling of the CPU
     between various tasks (user processes, software interrupts, device
     handling) which ultimately reduces the chances of livelock in the
     system.

   Principles of Operation
     In the normal, interrupt-based mode, devices generate an interrupt
     whenever they need attention.  This in turn causes a context switch
     and the execution of an interrupt handler which performs whatever
     processing is needed by the device.  The duration of the interrupt
     handler is potentially unbounded unless the device driver has been
     programmed with real-time concerns in mind (which is generally not
     the case for FreeBSD drivers).  Furthermore, under heavy traffic
     load, the system might be persistently processing interrupts without
     being able to complete other work, either in the kernel or in
     userland.

     Device polling disables interrupts by polling devices at appropriate
     times, i.e., on clock interrupts and within the idle loop.  This way,
     the context switch overhead is removed.  Furthermore, the operating
     system can control accurately how much work to spend in handling
     device events, and thus prevent livelock by reserving some amount of
     CPU to other tasks.  Enabling polling also changes the way software
     network interrupts are scheduled, so there is never the risk of
     livelock because packets are not processed to completion.

   Enabling polling
     Currently only network interface drivers support the polling feature.
     It is turned on and off with the help of the ifconfig(8) command.
     The historic kern.polling.enable sysctl, which enabled polling for
     all interfaces, can be replaced with the following code:

           for i in `ifconfig -l` ; do
                   ifconfig $i polling; # use -polling to disable
           done

   MIB Variables
     The operation of polling is controlled by the following sysctl(8) MIB
     variables:

     kern.polling.user_frac
             When polling is enabled, and provided that there is some work
             to do, up to this percent of the CPU cycles is reserved to
             userland tasks, the remaining fraction being available for
             polling processing.  Default is 50.

     kern.polling.burst
             Maximum number of packets grabbed from each network interface
             in each timer tick.  This number is dynamically adjusted by
             the kernel, according to the programmed user_frac, burst_max,
             CPU speed, and system load.

     kern.polling.each_burst
             The burst above is split into smaller chunks of this number
             of packets, going round-robin among all interfaces registered
             for polling.  This prevents the case that a large burst from
             a single interface can saturate the IP interrupt queue
             (net.inet.ip.intr_queue_maxlen).  Default is 5.

     kern.polling.burst_max
             Upper bound for kern.polling.burst.  Note that when polling
             is enabled, each interface can receive at most (HZ *
             burst_max) packets per second unless there are spare CPU
             cycles available for polling in the idle loop.  This number
             should be tuned to match the expected load (which can be
             quite high with GigE cards).  Default is 150, which is
             adequate for a 100Mbit network and HZ=1000.

     kern.polling.idle_poll
             Controls if polling is enabled in the idle loop.  There are
             no reasons (other than power saving or bugs in the
             scheduler's handling of idle priority kernel threads) to
             disable this.

     kern.polling.reg_frac
             Controls how often (every reg_frac / HZ seconds) the status
             registers of the device are checked for error conditions and
             the like.  Increasing this value reduces the load on the bus,
             but also delays the error detection.  Default is 20.

     kern.polling.handlers
             How many active devices have registered for polling.

     kern.polling.short_ticks
     kern.polling.lost_polls
     kern.polling.pending_polls
     kern.polling.residual_burst
     kern.polling.phase
     kern.polling.suspect
     kern.polling.stalled
             Debugging variables.
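As a hedged illustration of tuning these variables (the values and the em0 interface name are assumptions, not recommendations):

# Leave ~30% of the CPU to userland and raise the per-tick burst
# ceiling for a GigE-class load at HZ=1000.
sysctl kern.polling.user_frac=30
sysctl kern.polling.burst_max=300
ifconfig em0 polling        # 'ifconfig em0 -polling' turns it off again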
SUPPORTED DEVICES
     Device polling requires explicit modifications to the device drivers.
     As of this writing, the bge(4), dc(4), em(4), fwe(4), fwip(4),
     fxp(4), ixgb(4), nfe(4), nge(4), re(4), rl(4), sf(4), sis(4), ste(4),
     stge(4), vge(4), vr(4), and xl(4) devices are supported, with others
     in the works.  The modifications are rather straightforward,
     consisting of extracting the inner part of the interrupt service
     routine and writing a callback function, *_poll(), which is invoked
     to probe the device for events and process them.  (See the
     conditionally compiled sections of the devices mentioned above for
     more details.)

     As in the worst case the devices are only polled on clock interrupts,
     in order to reduce the latency in processing packets, it is not
     advisable to decrease the frequency of the clock below 1000 Hz.
HISTORY
     Device polling first appeared in FreeBSD 4.6 and FreeBSD 5.0.
AUTHORS
     Device polling was written by Luigi Rizzo <luigi@iet.unipi.it>.
BSD                             April 6, 2007                             BSD