Quote: Originally Posted by demigod85
Whenever a user-level thread requests a system call, instead of performing it immediately I take note of that system call and put the user thread to sleep. Only when a considerable number of system calls are waiting do I do a context switch and let the kernel threads execute all of them. That way, I can batch the system calls and reduce the number of context switches.
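For what it's worth, the batching idea in the quote can be sketched in a few lines. This is a hypothetical illustration, not the poster's actual implementation: `BatchedWriter` and `batch_size` are made-up names, and it uses `writev()` as a stand-in for "execute many queued requests with one kernel entry".

```python
import os
import tempfile

# Hypothetical sketch of the quoted batching scheme: queue up write
# requests, then submit them all with a single writev() system call
# instead of one write() call each.
class BatchedWriter:
    def __init__(self, fd, batch_size=8):
        self.fd = fd
        self.batch_size = batch_size  # flush threshold (arbitrary choice)
        self.pending = []             # buffers waiting to be written

    def write(self, data):
        self.pending.append(data)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            # One writev() submits every pending buffer at once.
            os.writev(self.fd, self.pending)
            self.pending = []

fd, path = tempfile.mkstemp()
w = BatchedWriter(fd)
for _ in range(10):
    w.write(b"x")  # 10 logical writes...
w.flush()          # ...but only 2 writev() syscalls in total
os.close(fd)
```

Whether that actually saves anything is exactly the question being argued below.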
Every system call is a context switch -- always. The very first thing that happens when you make a system call is the suspension of the calling thread or process. You can't avoid it. You can't prevent it.
Waiting for a mutex is a context switch, too. If you can grab it without waiting, there usually isn't one.
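You can see the two cases directly. In this sketch (Python's `threading.Lock`, standing in for any mutex), an uncontended non-blocking acquire succeeds on the fast path with no sleeping, while a contended attempt fails immediately; a *blocking* acquire in that second situation is what puts the thread to sleep until the holder releases:

```python
import threading

lock = threading.Lock()

# Uncontended: the fast path takes the lock without blocking,
# so the thread never sleeps.
got_uncontended = lock.acquire(blocking=False)
lock.release()

# Contended: hold the lock on this thread, then try to take it
# from another thread. The non-blocking attempt fails at once;
# a blocking acquire() here would sleep (i.e., context switch)
# until release().
lock.acquire()
results = []
t = threading.Thread(
    target=lambda: results.append(lock.acquire(blocking=False)))
t.start()
t.join()
lock.release()

print(got_uncontended, results[0])  # True False
```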
So threads sleep either way. You're actually putting more threads to sleep, for longer periods of time.
Without knowing which system calls you're actually making, and when and why, I can only make wild guesses about why this works better. But if I had to guess: too many threads running simultaneously. Excessive context switching or cache synchronization eats into their time. Queueing some of them reduces the number of running threads, letting the rest get more work done with fewer switches and less cache synchronization. As long as there are still more running threads than cores at all times, no CPU time is wasted.
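If you want to test that guess instead of arguing about it, the kernel already counts context switches per process. On Unix, `getrusage()` reports voluntary switches (the thread blocked and gave up the CPU) separately from involuntary ones (the scheduler preempted it). A minimal sketch, assuming Linux or another platform where these fields are filled in:

```python
import resource
import time

def switch_counts():
    # ru_nvcsw: voluntary context switches (blocked on I/O, sleep, mutex)
    # ru_nivcsw: involuntary context switches (preempted by the scheduler)
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_nvcsw, ru.ru_nivcsw

v0, i0 = switch_counts()
# Any blocking call gives up the CPU: each sleep below should show up
# as at least one voluntary context switch.
for _ in range(5):
    time.sleep(0.005)
v1, i1 = switch_counts()

print("voluntary:", v1 - v0, "involuntary:", i1 - i0)
```

Run your workload between two such snapshots, batched and unbatched, and compare the deltas rather than guessing.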
If these system calls are disk-related, it could also be system cache effects.