turbostat reports the C-states of all CPU cores, and includes an entry for each hyper-threaded logical core as well. Often enough the two logical cores on a single physical core list different C-state percentages. Does that make any sense?
Is this reporting the C-states of the few duplicated parts that support hyperthreading, as opposed to the actual execution units in the single physical core?
This isn't a turbostat-specific question; that just happens to be the tool I used to display the info. It's more a question about hyperthreading in general.
Edit: the CPU is an Intel 5820K hexacore, if that matters. It's my first hyperthreaded CPU.
Last edited by agentrnge; 10-24-2014 at 10:21 AM..
Virtual cores aren't real cores, but Linux treats them as such to simplify its scheduler, to the point that they appear in /proc/cpuinfo. As such, they sometimes get tallied in ways that don't make perfect sense.
I don't have a hyperthreaded core to compare with, but I suspect that exploring the structure under /sys/ would reveal the true, more complex grouping.
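If it helps, here's a quick sketch of poking at that grouping via the standard Linux sysfs topology files (which logical CPUs pair up as siblings will of course depend on your CPU):

```shell
#!/bin/sh
# Print which logical CPUs share a physical core, using the kernel's
# sysfs topology files. Two entries with the same core_id (and the same
# thread_siblings_list) are hyper-thread siblings on one physical core.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    [ -f "$cpu/topology/core_id" ] || continue
    printf '%s: core_id=%s siblings=%s\n' \
        "${cpu##*/}" \
        "$(cat "$cpu/topology/core_id")" \
        "$(cat "$cpu/topology/thread_siblings_list")"
done
```

On a hexacore with HT you'd expect six distinct core_id values, each shared by a pair of logical CPUs.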
The info in cpuinfo, as well as the output of turbostat, shows how the virtual/logical cores relate to physical cores. I would expect logical cores 0 and 6, on physical core 0, to have exactly the same C-state times/percentages. A lot of the time they do, but sometimes they don't. Puzzled. Curious.
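One thing worth checking: the kernel's cpuidle accounting is kept per logical CPU, and each thread can request an idle state independently (the hardware only drops the whole core into a deep C-state once both siblings are idle), so differing per-thread numbers aren't necessarily nonsense. A rough sketch to compare two siblings' idle residency (cpu0 and cpu6 are my assumption for a sibling pair here, adjust for your topology):

```shell
#!/bin/sh
# Compare per-logical-CPU idle-state residency for two sibling threads.
# The cpuidle counters are tracked per logical CPU, which is one reason
# turbostat can show different percentages for two threads that share
# one physical core.
for cpu in cpu0 cpu6; do
    base=/sys/devices/system/cpu/$cpu/cpuidle
    [ -d "$base" ] || { echo "$cpu: no cpuidle info exposed"; continue; }
    for st in "$base"/state*; do
        printf '%s %s: %s us\n' "$cpu" "$(cat "$st/name")" "$(cat "$st/time")"
    done
done
```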
Speaking of Linux scheduling: I have also noticed that the scheduler will sometimes put two tasks on the same physical core while leaving another physical core idle. I guess that when deciding which core is most available, two virtual cores might look idle while another core is still finishing something up. Not sure how quickly load should be rebalanced (if it is at all). Gotta break out the OS internals books and refresh.
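You can watch this placement happen from userspace. A rough sketch (assumes a procps-style ps that supports the psr column): start two busy loops, then check which logical CPU each one last ran on and compare against the sibling map from sysfs.

```shell
#!/bin/sh
# Start two CPU-bound tasks and report which logical CPU each one last
# ran on ("psr"). Cross-reference the psr values with
# /sys/devices/system/cpu/cpuN/topology/thread_siblings_list to see
# whether both tasks landed on one physical core while another sat idle.
sh -c 'while :; do :; done' & p1=$!
sh -c 'while :; do :; done' & p2=$!
sleep 1
placement=$(ps -o pid=,psr= -p "$p1" -p "$p2")
echo "$placement"
kill "$p1" "$p2" 2>/dev/null
```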
Last edited by agentrnge; 10-24-2014 at 02:08 PM..
Hyperthreading allows two threads to use different parts of one core -- one might be using the integer ALU for math while another does floating-point work, or reads from memory, etc. It's still just one core, but sometimes it can slip in an extra cycle here and there using parts of itself that happen to be free.
I'd just like to chuck in my two cents' worth on this. I've fallen victim to the performance issues that "cache thrashing" can cause, and it took me some time to work out what the issue actually was.
Although the issue in my case was Solaris-based and due to my configuration of the system (down to me, I'm afraid). The system in question, a Sun "T" series, had been domained and I had set up some containers/zones; due to my lack of understanding, I set up a small domain across core boundaries, with the result that the four "VCPUs" (actually hardware threads) spent a high percentage of their time shuttling cache from core to core.
A lesson well learned at the time. Although I think that in later versions of the OS-related software and the firmware the impact of such a mistake is reduced, I tend to shy away from configuring domains or VMs, particularly small ones, across core boundaries.