After a long process, I finally have a real-world calculation for determining how much RAM to reserve for a Hyper-V host. The question/answer about it is here. The summary is that Hyper-V loses RAM to the Nonpaged pool (and all of it is "untagged") in addition to the "standard" overhead that Microsoft has documented. Be aware that I write MB/GB here when I actually mean MiB/GiB; I feel it is more intuitive to see the notation that Windows (incorrectly) uses.

Host Overhead
- 300 MB for the Hypervisor services
- 512 MB for the host OS (this is a recommended amount; you have some wiggle room with it)
- [The amount of physical RAM available to the host OS] multiplied by 0.0425 (result in GB) for the Nonpaged pool (which means multiply that by 1024 to convert to "MB")

Per-VM Overhead
- 24 MB for the VM
- 8 MB for each 1 GB of RAM allocated to the VM

Examples
12 GB RAM, 1 VM @ 2 GB, 1 VM @ 4 GB
Host: 812 + (0.0425 * 12 * 1024) = 1,334.24 MB
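To make the arithmetic concrete, here is a minimal sketch that applies the formulas above to the example. The function and variable names are my own (this isn't any Hyper-V tooling); the per-VM figures for the 2 GB and 4 GB guests follow directly from the 24 MB + 8 MB/GB rule.

```python
# Sketch of the reservation math described above; names are illustrative only.

def host_overhead_mb(physical_ram_gb):
    """Hypervisor services + host OS + Nonpaged pool, in MB."""
    hypervisor_mb = 300
    host_os_mb = 512                                  # recommended; some wiggle room
    nonpaged_pool_mb = physical_ram_gb * 0.0425 * 1024
    return hypervisor_mb + host_os_mb + nonpaged_pool_mb

def vm_overhead_mb(vm_ram_gb):
    """Per-VM overhead: 24 MB plus 8 MB per GB allocated to the VM."""
    return 24 + 8 * vm_ram_gb

def total_reserved_mb(physical_ram_gb, vm_sizes_gb):
    """Everything the host should keep back before sizing VMs."""
    return host_overhead_mb(physical_ram_gb) + sum(vm_overhead_mb(g) for g in vm_sizes_gb)

if __name__ == "__main__":
    # The example from the post: 12 GB host, one 2 GB VM, one 4 GB VM.
    print(f"Host overhead:  {host_overhead_mb(12):,.2f} MB")      # 812 + 522.24 = 1,334.24
    print(f"2 GB VM:        {vm_overhead_mb(2)} MB")              # 24 + 16 = 40
    print(f"4 GB VM:        {vm_overhead_mb(4)} MB")              # 24 + 32 = 56
    print(f"Total reserved: {total_reserved_mb(12, [2, 4]):,.2f} MB")  # 1,430.24
```

Under those formulas, the two VMs add 40 MB and 56 MB on top of the 1,334.24 MB of host overhead, for roughly 1,430 MB reserved in total.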
For those of you who, like me, aren't experts at all things Active Directory (AD) and Hyper-V Live Migration (LM) permissions, it can be enough of a pain to LM a Virtual Machine (VM) between domains that you simply decide to take the VMs offline to effect the move. See, I only tolerate AD because it's required for LM'ing VMs; there isn't a choice. (It's also required for Windows Clusters, but that's a different topic.) But I figured it out. My back-story is that we set up a cluster using Windows 2012 r1 as the AD Domain Controller (DC) and Hyper-V Server 2012 r1 for the VM hosts. Then we decided we wanted to use r2 for the AD DC and the Hyper-V hosts. Upgrading Hyper-V was easy. But I found that there's an unresolved Microsoft bug with Windows Clustering when upgrading the AD DC from Windows 2012 r1 to Windows 2012 r2: clustering simply doesn't work correctly anymore. So we gave up, created a from-scratch Windows 2012 r2 AD DC, and then made a new cluster.