In case you missed it, MSFT is making a bit of noise about the new Dynamic Memory feature that is coming in the SP1 release for Server 2008 R2 / Hyper-V Server R2.
My angle on this is a bit different than most.
The way that Dynamic Memory has been implemented by the MSFT folks is rather interesting. It is memory ballooning under the hood – but it is hooked into the operating system of the VM in a way that only Microsoft could manage (since Microsoft owns the guest OS in this case).
What I find most interesting is the way that the Dynamic Memory feature responds to the needs of the application that is running in the virtual machine.
If the application begins needing additional RAM, Dynamic Memory attempts to give it some; once the application frees that RAM back up, Dynamic Memory takes it away again.
This has the net effect that your machine is always running in an optimal ‘sweet spot’ as far as RAM is concerned.
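To make that behavior concrete, here is a toy sketch (in Python, and definitely not Microsoft's actual algorithm) of how a balloon-style allocator chases the guest's demand: the assigned RAM follows demand plus a configurable buffer of headroom, clamped between the Startup RAM and Maximum RAM values you set on the VM. The numbers and parameter names are purely illustrative.

```python
# Toy illustration only - not Microsoft's implementation. It shows the idea:
# assign roughly (demand + buffer headroom), never below Startup RAM and
# never above Maximum RAM.
def target_allocation(demand_mb, startup_mb=512, maximum_mb=4096, buffer_pct=20):
    """Return the RAM (MB) a balloon-style allocator would assign to the VM."""
    desired = demand_mb * (1 + buffer_pct / 100.0)   # demand plus headroom
    return int(min(max(desired, startup_mb), maximum_mb))

# As the workload ramps up and then releases memory, the assignment follows it:
for demand in (400, 900, 1800, 2600, 1200, 600):
    print("%4d MB demanded -> %4d MB assigned" % (demand, target_allocation(demand)))
```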
The really useful part is that you can perform actions against your VM, watch how Dynamic Memory responds, and in turn understand when and how your application demands additional RAM. Load the VM to a given level and you get a really good baseline.
Considering that managing a virtual environment is still a bit of a black art, this is highly useful for understanding the demands of your workload in terms that actually mean something: it correlates directly to a resource – RAM, of all things – which is a very finite resource in most cases.
The end result is that you get a good indicator of the range of RAM your workload actually needs.
I have said for years that most folks give their servers too much RAM (especially VMs) – and this is a great tool to prove it and to show the true RAM utilization of your workload.
Yes, you have to observe the numbers (in the UI or by polling WMI), but I think the return on the time you spend doing that is well worth it.
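If you would rather script that observation than stare at the Hyper-V Manager columns, something like the sketch below works. It is only a rough illustration with a couple of assumptions baked in: it uses Python with Tim Golden's wmi package, it runs on the Hyper-V host, and it hunts for the formatted performance-counter class behind the SP1 "Hyper-V Dynamic Memory VM" counter set rather than assuming its exact WMI class name (which I have not verified). It also dumps every counter that class exposes instead of guessing at property names.

```python
import time
import wmi  # Tim Golden's WMI wrapper: pip install wmi

SAMPLES = 12
INTERVAL_SECONDS = 5

# Formatted performance counters live in the default root\cimv2 namespace.
c = wmi.WMI(namespace="root\\cimv2")

# Assumption: the "Hyper-V Dynamic Memory VM" counter set is surfaced as a
# Win32_PerfFormattedData_* class with "DynamicMemory" in its name. Verify
# the exact class name on your own SP1 host.
dm_classes = [name for name in c.classes
              if "PerfFormattedData" in name and "DynamicMemory" in name]
if not dm_classes:
    raise SystemExit("No Dynamic Memory perf class found - is SP1 installed?")

perf_class = getattr(c, dm_classes[0])

for _ in range(SAMPLES):
    for vm in perf_class():          # one instance per running VM
        print(vm.Name)
        # Dump whatever counters the class exposes (pressure, guest-visible
        # memory, assigned memory, and so on) rather than assuming names.
        for prop in sorted(vm.properties):
            print("   {0}: {1}".format(prop, getattr(vm, prop)))
    time.sleep(INTERVAL_SECONDS)
```

Run something like that while you load the VM and you end up with a simple time series you can eyeball for the low and high points of assigned memory – which is exactly the RAM range I am talking about above.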