Hi,
I came across the following point in a CPU best-practices document.
"Having virtual machines configured with VCPUs that are not used still imposes resource requirements on the ESX server. In some guest operating systems, the unused virtual CPU still consumes timer inerrups and executes the idle loop of the guest OS, which translates into real CPU consumption."
Can someone provide some input on the bolded part (the second sentence)?
Why does an unused virtual CPU that takes timer interrupts and executes the guest OS idle loop end up causing real CPU consumption on the host?
What are some example guest operating systems that behave this way?
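To make the question concrete, here is my rough picture of what a legacy, tick-based guest idle loop does (a minimal C sketch based on my own assumption, not taken from any particular OS source; runnable_task_exists, schedule and cpu_halt are hypothetical names):

    /* Sketch of a tick-based idle loop. Even when "idle", every periodic
     * timer interrupt wakes the CPU, the scheduler check runs, and the CPU
     * halts again. Inside a VM, each of those wake-ups costs real host
     * CPU time, which is presumably what the document is referring to. */

    #include <stdbool.h>

    extern bool runnable_task_exists(void);  /* hypothetical scheduler query */
    extern void schedule(void);              /* hypothetical: switch to a runnable task */
    extern void cpu_halt(void);              /* hypothetical wrapper around HLT; returns
                                                on the next interrupt, e.g. the timer tick */

    void idle_loop(void)
    {
        for (;;) {
            if (runnable_task_exists())
                schedule();   /* real work available: hand the CPU over */
            cpu_halt();       /* otherwise halt until the next interrupt;
                                 a periodic tick fires this 100-1000 times/sec */
        }
    }

Is that roughly the mechanism, i.e. the idle vCPU keeps getting scheduled on a physical CPU just to service ticks it doesn't need?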
Also, is it true that, "generally", an application that is not SMP-aware will run slower on multi-CPU hardware?