Hi,
I don't have a solid grounding in infrastructure, but I like to dabble at home with virtualisation. At my work, one of the apps we "own" uses a SQL Server guest that constantly suffers performance issues: it sits at 100% CPU for long periods and batch requests pile up.
Host:
12 x 2.666 GHz cores
24 threads (hyperthreading)
(if you guys want the specifics I can try to dig them up for you)
Please forgive any incorrect terms!
Guests on the host total 19 vCPUs; no custom shares, reservations, or limits are set.
Now, after reading part of the booklet (chapters 1-4), my understanding is that ESXi 5 is aware of hyperthreading on the host, schedules VMs accordingly, and does its best to spread vCPUs across physical cores. It is also my understanding that a vCPU is bound to a logical processor on the host?
If it works as above, could a vCPU from the CPU-intensive VM end up on one thread while another VM's vCPU sits on the other thread of the same physical core, resulting in contention even though overall MHz consumption hasn't been reached (e.g. the SQL VM consumes 20 GHz out of roughly 30 GHz total, so 10 GHz is free, but the CPU-intensive guest and the other guests still contend for the resources of each shared core)?
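To sanity-check my own numbers, here's a rough back-of-the-envelope sketch. The figures are the ones from above (12 cores at 2.666 GHz, 19 vCPUs, SQL guest at ~20 GHz) and are approximate, not measured:

```python
# Back-of-the-envelope: aggregate MHz headroom vs. per-core contention.
# All figures are approximate values from my post, not measurements.

cores = 12
ghz_per_core = 2.666
total_ghz = cores * ghz_per_core        # aggregate host capacity (~32 GHz)

sql_vm_ghz = 20.0                       # rough consumption of the SQL guest
headroom_ghz = total_ghz - sql_vm_ghz   # "free" MHz at the host level

vcpus_total = 19                        # all guests combined
logical_cpus = cores * 2                # 24 logical CPUs with hyperthreading

print(f"aggregate capacity: {total_ghz:.1f} GHz")
print(f"host-level headroom: {headroom_ghz:.1f} GHz")
print(f"vCPU-to-physical-core overcommit: {vcpus_total / cores:.2f}:1")

# The point of the question: positive headroom at the host level does not
# rule out two busy vCPUs landing on the two threads of one physical core
# and contending for that core's execution resources.
```

So even with host-level headroom, the overcommit ratio against physical cores (not logical processors) is what made me wonder about thread-level contention.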
And do vCPUs bind to logical processors at power-on, or are they moved dynamically? E.g. if logical processor 0 is CPU-intensive and logical processor 1 also becomes intensive, would ESXi move the vCPU on thread 1 to a logical processor on a different physical core?
From reading the document, my initial thought is to suggest setting the VM's hyperthreaded core sharing mode to None, or at least to look at setting a high CPU reservation so that if contention does occur, our guest gets priority, since we paid for the blade.
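Since reservations are specified in MHz rather than cores, here's a quick sketch of how I'd size one. The choice of four cores is purely a hypothetical example on my part, not a recommendation from the document:

```python
# Hypothetical sizing of a CPU reservation in MHz.
# Reserving four full cores is just an illustrative number I picked.

mhz_per_core = 2666        # per-core clock from the host spec above
cores_to_reserve = 4       # hypothetical: guarantee four cores' worth

reservation_mhz = mhz_per_core * cores_to_reserve
print(f"reservation to request: {reservation_mhz} MHz")
```

I'd still want to confirm with esxtop whether the guest is actually starved (ready time) before reserving anything, rather than just going by the 100% CPU figure inside the guest.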
Thanks