Xen, VMware, Hyper-V, and bhyve all do it this way. This is also why they recommend that all guests have the same number of cores. Say you have an 8-core CPU: you can have two 4-core guests running at the same time. But if a 5-core guest came along, it would have to wait for all 8 cores to become free, and while it was running, no other 4-core guest could run — something like the sketch below.
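Here's roughly what that strict ("gang scheduling") model looks like as a toy sketch. The guest names and core counts are made up for illustration; this isn't any real hypervisor's API, just the all-or-nothing placement rule being described:

```python
# Toy model of strict co-scheduling (gang scheduling): a guest runs
# only when ALL of its vCPUs can be placed on free physical cores at once.
PHYSICAL_CORES = 8

def schedule(guests, free=PHYSICAL_CORES):
    """guests: list of (name, vcpus) in queue order.
    Returns (running, waiting)."""
    running, waiting = [], []
    for name, vcpus in guests:
        if vcpus <= free:          # gang-schedule the whole guest or not at all
            running.append(name)
            free -= vcpus
        else:
            waiting.append(name)   # even one missing core blocks the guest
    return running, waiting

# Two 4-vCPU guests fill the 8 cores, so the 5-vCPU guest waits for all
# 8 to drain; once it runs, only 3 cores remain and both 4-vCPU guests wait.
print(schedule([("A", 4), ("B", 4), ("C", 5)]))  # (['A', 'B'], ['C'])
print(schedule([("C", 5), ("A", 4), ("B", 4)]))  # (['C'], ['A', 'B'])
```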
This is yet another reason Docker or Jails are getting popular.
Eh?
No, ESXi just advances the clock of virtual cores that are in the idle state the guest OS puts them in; it doesn't have to schedule them on a real core at all. It's done that for quite a few releases now (VMware calls it "relaxed co-scheduling"). A 5-core VM and a 4-core VM will run happily enough simultaneously on an 8-physical-core CPU, provided they have at least one idle virtual core between them at all times.
VMware recommends against assigning more virtual cores to a VM than you need because it's more overhead for the hypervisor to check their idle status and schedule them. If a VM has 4 virtual cores, then sooner or later there will be enough background processes running on the guest OS to demand all of them at once, and that becomes a pain to schedule on a busy system. If need be, ESXi won't even run all the busy virtual cores at the same time; it just won't let the virtual cores' clocks drift too far out of sync with each other (which is part of that hypervisor scheduling overhead).
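A simplified sketch of that relaxed behaviour, continuing the toy model above. The skew limit and the per-vCPU bookkeeping here are assumptions for illustration — ESXi's real scheduler is far more involved — but the idea matches the description: idle vCPUs never need a real core, busy vCPUs run independently, and a vCPU that gets too far ahead of its slowest busy sibling is "co-stopped" until the others catch up:

```python
# Toy model of relaxed co-scheduling with a skew bound (illustrative only).
SKEW_LIMIT_MS = 3  # hypothetical bound on how far sibling vCPUs may drift apart

def runnable(vm):
    """vm: dict mapping vcpu id -> {'idle': bool, 'progress_ms': int}.
    Returns the vCPUs the hypervisor would consider running right now."""
    # Idle vCPUs just have their clocks advanced; they aren't scheduled at all.
    busy = {k: v for k, v in vm.items() if not v["idle"]}
    if not busy:
        return []
    slowest = min(v["progress_ms"] for v in busy.values())
    # Co-stop any busy vCPU that has run too far ahead of its siblings.
    return [k for k, v in busy.items()
            if v["progress_ms"] - slowest <= SKEW_LIMIT_MS]

vm = {0: {"idle": False, "progress_ms": 10},
      1: {"idle": False, "progress_ms": 14},   # 4 ms ahead: co-stopped
      2: {"idle": True,  "progress_ms": 0}}    # never touches a real core
print(runnable(vm))  # [0]
```

The upshot is that scheduling cost scales with how many vCPUs are actually busy and how tightly their clocks must be kept together, not with a VM's total vCPU count needing to land all at once.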