Are you trying to squeeze every CPU cycle out of your dual-, quad-, or hex-core processors to pack more VMs onto each core?
Unfortunately for those who like to overclock CPUs for gaming, overcommitting the CPU doesn't work the same way on virtual machine hosts (ESX/ESXi, XenServer, or Hyper-V).
How many Virtual Machines Per Core?
From my experience, you can run anywhere from 8 to 10 vCPUs per core, but I have found that a sweet spot of 4 VMs (that's right, I said VMs, not vCPUs) per core provides decent performance for the user without straining the hardware.
Remember, you don't want users waiting 30 to 60 seconds for a reboot, logon, or screen refresh…
Too many VMs or vCPUs per core will cause lag, and the same goes for overcommitting memory, which is another topic of concern. Overcommitting CPU cores and memory frustrates users with poor VM performance. (Newer versions of ESXi have improved this, but over-provisioning is still risky.)
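The ratios above boil down to simple arithmetic. Here's a minimal sketch (the helper names are my own, not from any hypervisor API) that checks a host against both the 8–10 vCPUs-per-core ceiling and the 4-VMs-per-core sweet spot:

```python
def vcpu_overcommit_ratio(total_vcpus: int, physical_cores: int) -> float:
    """Return the vCPU-to-physical-core overcommit ratio."""
    if physical_cores <= 0:
        raise ValueError("physical_cores must be positive")
    return total_vcpus / physical_cores

# Example: a 16-core host running 40 VMs with 2 vCPUs each.
ratio = vcpu_overcommit_ratio(total_vcpus=40 * 2, physical_cores=16)
vms_per_core = 40 / 16

print(f"{ratio:.1f} vCPUs per core")        # 5.0 -- under the 8-10 ceiling
print(f"{vms_per_core:.1f} VMs per core")   # 2.5 -- under the 4 VMs/core sweet spot
```

This host still has headroom on both counts; once either number climbs past its target, expect the lag described above.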
What Eric has to Say about Virtual Machines Per Core
Eric Siebert wrote an excellent article on SearchNetworking.com called “Sizing server hardware” which goes into the nuts-and-bolts details of sizing virtual server hardware.
My 1-to-4 ratio of logical CPUs to VMs is my best practice, but Eric goes into much more detail.
Always! Always! Think of the user’s experience when considering your best practices.
Sure, deploying 100 VMs on a single host sounds good, but what kind of performance and experience will users have?
I’ve had my share of complaints about slow virtual servers and believe me, you don’t want users to start complaining that your (that’s right, your!) virtual servers are slow.
Rule of Thumb on VMs Per Core
Rule of thumb: Keep it simple, 4 VMs per CPU core – even with today’s powerful ESX server hardware.
Don’t assign more than 1 to 4 vCPUs per VM unless the application running on the virtual server requires more, or unless the developer demands them and calls your boss. VMs with 1 to 2 vCPUs run more efficiently and, from my experience, nobody seems to notice, except maybe over-clockers!
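Putting the rule of thumb into numbers, here's a quick capacity sketch (a hypothetical helper of my own, not a VMware tool) for estimating how many VMs a host should comfortably carry:

```python
def max_vms_per_host(cores: int, vms_per_core: int = 4) -> int:
    """Conservative VM capacity under the 4-VMs-per-core rule of thumb."""
    if cores <= 0:
        raise ValueError("cores must be positive")
    return cores * vms_per_core

# A dual-socket host with two 8-core CPUs:
print(max_vms_per_host(cores=16))  # 64 VMs
```

So even a modest 16-core box supports around 64 well-behaved VMs under this rule, which is why chasing 100 VMs on one host is usually asking for complaints.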
Here’s something I wrote a while back:
“The measurement of a successful virtual infrastructure deployment is not how many VMs can be hosted per host; it’s how many users can be satisfactorily serviced without them knowing they are using virtual technology. Virtualization should be invisible. Once users start noticing footprints in the snow, it’s over…”