Tuesday, December 22, 2009

Hypervisor virtualization basics: a visual representation

For all of you who want a place to point folks for the basics of virtualization, I have put together a few videos that describe hypervisor, CPU, and network concepts in a visual way.

The intent is to give a quick dose of conceptual information to folks who are suddenly dealing with VMs but might not have the experience to fully understand what they are looking at.

Hypervisor Basics:

The basics of what a full (type 1) hypervisor is.

 

The hypervisor pool of resources – the CPU:

The basics of CPU scheduling. The reality is far more complex than this, and there are many scheduling methods; it gets really messy once hyper-threading is introduced.

More CPU scheduling detail is over at the Xen.org site (the Xen folks are more open about discussing the gritty details of all this): http://wiki.xen.org/xenwiki/CreditScheduler
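To make the credit idea a bit more concrete, here is a toy, credit-style scheduler sketch in Python. This is only loosely inspired by Xen's credit scheduler - the VM names, weights, and time-slice values are made up for illustration, and the real algorithm adds priorities, caps, and SMP load balancing on top.

    TIME_SLICE = 30          # ms a vCPU runs before the scheduler re-decides
    ACCOUNTING_PERIOD = 3    # time slices between credit refills

    class VCpu:
        def __init__(self, name, weight):
            self.name = name
            self.weight = weight   # relative share, like the Xen 'weight' knob
            self.credits = 0

    def refill(vcpus):
        # Each accounting period, hand out exactly as many credits as the
        # CPU can consume, split proportionally by weight.
        total = TIME_SLICE * ACCOUNTING_PERIOD
        total_weight = sum(v.weight for v in vcpus)
        for v in vcpus:
            v.credits += total * v.weight // total_weight

    def run(vcpus, slices=9):
        for tick in range(slices):
            if tick % ACCOUNTING_PERIOD == 0:
                refill(vcpus)
            # The runnable vCPU with the most remaining credits runs next.
            v = max(vcpus, key=lambda x: x.credits)
            v.credits -= TIME_SLICE
            print(f"t={tick * TIME_SLICE:3d}ms  {v.name} runs")

    # Two single-vCPU VMs; the 512-weight VM gets twice the CPU time.
    run([VCpu("web-vm.vcpu0", 256), VCpu("db-vm.vcpu0", 512)])

Over each accounting period the 512-weight vCPU runs two slices for every one the 256-weight vCPU gets - which is the whole point of proportional-share scheduling.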

The hypervisor pool of resources – the network:

This one covers virtual networks, virtual switches, and bridging. It sticks to the concepts, since each vendor's implementation offers different features.
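As a rough sketch of the core concept, here is a toy "learning" virtual switch in Python. The port names and MAC addresses are invented for illustration; real virtual switches (Hyper-V virtual switch, vSphere vSwitch, Linux bridge) layer VLANs, offloads, and security policy on top of this basic learn-and-forward behavior.

    class VirtualSwitch:
        def __init__(self, ports):
            self.ports = ports        # VM vNICs plus a physical uplink
            self.mac_table = {}       # learned: source MAC -> port

        def receive(self, in_port, src_mac, dst_mac):
            # Learn where the sender lives, just like a physical L2 switch.
            self.mac_table[src_mac] = in_port
            out = self.mac_table.get(dst_mac)
            if out is not None:
                print(f"{src_mac} -> {dst_mac}: forward out {out}")
            else:
                # Unknown destination: flood to every port except the ingress.
                flood = [p for p in self.ports if p != in_port]
                print(f"{src_mac} -> {dst_mac}: flood to {flood}")

    vswitch = VirtualSwitch(["vm1-vnic", "vm2-vnic", "uplink-nic0"])
    vswitch.receive("vm1-vnic", "00:15:5d:01:00:01", "00:15:5d:01:00:02")  # flooded
    vswitch.receive("vm2-vnic", "00:15:5d:01:00:02", "00:15:5d:01:00:01")  # forwarded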

21 comments:

Marc Sherman said...

All three are great, but the networking video is the best (for me) since it is the one I know the least about.

Thank you for making these!

Marc

BrianEh said...

I am open to additional ideas - trying to do these in a visual way takes a bit of creativity, and there are some concepts I have not yet found a good way to represent.

Unknown said...

Really useful for beginners.
The visual demonstration is a very creative way to make the concepts clear.

Thanks,
Karthik

Anonymous said...

PLEEEEASE!!! Would you make one like these for DISKS? :))
And thanks a lot! Congratulations on your volunteer efforts for 'HV education' ;))

BrianEh said...

I am still thinking about the disk thing. I have an idea, but I want to come up with things that are useful. Comments about which disk concepts are confusing are welcome.

nutty said...

Thanks! This was very useful to me.

Anonymous said...

Very good and innovative. Thanks for the time
- Harish

Anonymous said...

Thanks for the visual examples, nice and clear.

We have an issue with a single-threaded application running 10x slower on a virtual machine than on a physical one, but we don't know why. The Perl app uses MOSEK libraries for intense math calculations. We're wondering if CPU scheduling is causing us grief. Although we've set affinity to get a dedicated core, could it be that the load (peaking at over 90%) is actually being shared? Any ideas?

BrianEh said...

Just from reading your description of your application I would not expect it to virtualize well.

In my mind, you have an example of a perfect application to leave on bare metal. It will always shine on bare metal, it will always be starved (to some degree) as a VM.

Affinity simply binds a VM to a processor. If the VM takes 100% of the CPU, there is never a break in the threading, and the VM usually does not drift to another processor anyway. The net effect in this case is that affinity has little impact on performance.

Reserve and weight are the settings that give the vCPU more time on the physical processor.

But (and this is a big point) if your hypervisor is not loaded with many VMs, these settings have very little effect. They are designed for tuning individual VMs when the entire system is under load - a constant load from many VMs.
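Roughly, the idea looks like this (a sketch with made-up VM names and numbers - every hypervisor implements weight and reserve a little differently):

    def cpu_shares(demands, weights, capacity=1.0):
        # demands: per-VM CPU demand as a fraction of a core.
        # weights: the relative-weight knob for each VM.
        total_demand = sum(demands.values())
        if total_demand <= capacity:
            # Host not under load: every VM gets what it asks for,
            # and weight has essentially no effect.
            return dict(demands)
        # Under contention, divide the core proportionally by weight,
        # never giving a VM more than it actually demands.
        total_weight = sum(weights.values())
        return {vm: min(demands[vm], capacity * weights[vm] / total_weight)
                for vm in demands}

    light = cpu_shares({"vm-a": 0.2, "vm-b": 0.3}, {"vm-a": 100, "vm-b": 200})
    busy  = cpu_shares({"vm-a": 0.9, "vm-b": 0.9}, {"vm-a": 100, "vm-b": 200})
    print(light)   # {'vm-a': 0.2, 'vm-b': 0.3}   - weight is irrelevant
    print(busy)    # {'vm-a': 0.33..., 'vm-b': 0.66...} - weight decides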

Anonymous said...

I was wondering where I would find the networking video Marc Sherman referred to. Is there a link to it? Thanks.

BrianEh said...

It is in the post - "networking basics"

Anonymous said...

Thank you. Your videos are clear and informative. It is people like you that make the Internet a great place.

Anonymous said...

Thanks for your work. Very Useful Videos.

Now I will see wooden cubes when working with virtual machines ;)

Philip Elder Cluster MVP said...

Brian,

Question: When one assigns 4 vCPUs to a VM, does the physical CPU need to process those 4 threads in parallel?

I am not saying they need to be beside each other across the physical cores/threads, just processed at the same time.

I seem to recall a performance video on Hyper-V just after release that made that statement. Grey matter being what it is, though, I'd like to see confirmation, and a pointer if you have one.

Thanks for the great videos!

Philip

P.S. If the previous comment did get through what appeared to be a CAPTCHA snafu please post this one. ;)

BrianEh said...

Technically, each vCPU is a processing thread - a virtual slice of a physical CPU. And I have not even involved the hypervisor in that explanation.

Years ago the guidance was that a single VM should not be allocated more vCPUs than the number of processing cores the hardware has. The hypervisors and hardware of the day had a performance problem when two vCPU threads executed in parallel on the same physical processor thread.

It ended up being a performance impact, but that is long gone, and no hypervisor vendor recommends that any longer.

Philip Elder Cluster MVP said...

Brian,

So, do the 4 threads belonging to that VM's vCPUs need to be in the physical CPU pipeline in parallel, or can they be processed out of sync with each other?

To visualize, with each line representing CPU time/thread:

| | | |

Or:

| | |
|

Thanks! :)

BrianEh said...

Ah. Back when processors were old and slow, they needed to be in parallel.

These days it no longer matters.

The (really old) recommendation you are referring to was to not assign more vCPUs to a single VM than you had execution threads on your physical hardware.

Like I mentioned, this is generally not an issue any longer, though it could be on older hardware. This is actually a case where faster processors win over more cores.

It is a better experience to process each thread's slice of time faster than to process more slices in parallel.

The other issue that folks run into is applications that simply don't deal with the time-slicing behavior very well. The description always goes: runs great on bare metal, horrible in a VM.

Philip Elder Cluster MVP said...

Brian,

Okay, so in my example, if we had an 8-core CPU (no Hyper-Threading) and a VM with 8 vCPUs, those threads would be batched through the pipeline rather than running in parallel.

So that at least confirms our conclusion that more GHz tends to be better than more cores. :)

Thanks for this.

BrianEh said...

What would happen at the hypervisor level is that your 8 vCPU threads would be spread among your 8 cores.
Thus you could have 8 running in parallel.

But a detail here that I did not dive into - technically they may or may not run in parallel.

This is because each vCPU worker thread is just a tiny execution in time, and the work is always moved about between CPU cores; a vCPU thread does not stick to a particular core.

A vCPU execution thread is not a long-lived thing - it is a moment of execution in time that occurs on demand.

All that 8 vCPUs get you is the potential of executing 8 threads in parallel when it is necessary.

I don't know if I am explaining this well, it is pretty abstract stuff.
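If it helps, here is a hypothetical little simulation of that idea. The core count, burst times, and vCPU names are all made up; the point is only that vCPU work arrives as short on-demand bursts, each burst lands on whichever physical core is free, and nothing forces all 8 vCPUs of a VM to run in lockstep.

    CORES = 8

    def schedule(bursts):
        # bursts: (arrival_time, vcpu) pairs; each burst is one time slice.
        core_free_at = [0] * CORES   # when each physical core next goes idle
        for arrival, vcpu in sorted(bursts):
            # Grab whichever core frees up first - no stickiness to a core.
            core = min(range(CORES), key=lambda c: core_free_at[c])
            start = max(arrival, core_free_at[core])
            core_free_at[core] = start + 1
            print(f"t={start}: {vcpu} runs on core {core}")

    # An 8 vCPU VM rarely has all 8 vCPUs busy at the same instant.
    schedule([(0, "vm1.vcpu0"), (0, "vm1.vcpu1"), (0, "vm1.vcpu2"),
              (3, "vm1.vcpu0"), (3, "vm1.vcpu3"),
              (7, "vm1.vcpu4"), (7, "vm1.vcpu5"), (7, "vm1.vcpu6")])

Notice that three bursts happen to run in parallel at t=0, the rest run whenever they arrive, and vcpu0's second burst lands on a different core than its first.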

Adzmax said...

What a fantastic explanation of virtual CPU utilisation. Thank you very much!!