
Showing posts from 2014

Hyper-V and reserving RAM for the host/root/parent partition

After a long process, I finally have a real-world calculation for determining how much RAM to reserve for a Hyper-V host. The question/answer about it is here. But the summary is that Hyper-V loses RAM to the Nonpaged pool (and all of it is "untagged") in addition to the "standard" stuff that Microsoft has documented. Be aware that I write MB/GB here when I actually mean MiB/GiB; I feel it is more intuitive to see the notation that Windows (incorrectly) uses.

Host Overhead
- 300 MB for the Hypervisor services
- 512 MB for the Host OS (this is a recommended amount; you have some wiggle room with it)
- [The amount of physical RAM available to the host OS] multiplied by 0.0425 (result in GB) for the Nonpaged pool (which means multiply that by 1024 to convert to "MB")

Per-VM Overhead
- 24 MB for the VM
- 8 MB for each 1 GB of RAM allocated to the VM

Examples
12 GB RAM, 1 VM @ 2 GB, 1 VM @ 4 GB
Host: 812 + (0.0425 * 12 * 1024) = 1,334.24 MB
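Since the figures above are just arithmetic, here is a minimal sketch in Python that works the example through, including the per-VM overhead. The function names and the totals are mine, computed from the stated figures; they are not anything Hyper-V itself reports.

```python
def host_reserve_mb(physical_ram_gb, hypervisor_mb=300, host_os_mb=512):
    """RAM to reserve for the host: hypervisor services + host OS + Nonpaged pool.

    The Nonpaged pool estimate is physical RAM (GB) * 0.0425, giving a result
    in GB, which is then multiplied by 1024 to convert to MB, per the list above.
    """
    nonpaged_pool_mb = physical_ram_gb * 0.0425 * 1024
    return hypervisor_mb + host_os_mb + nonpaged_pool_mb


def vm_overhead_mb(vm_ram_gb):
    """Per-VM overhead: 24 MB base plus 8 MB for each 1 GB allocated to the VM."""
    return 24 + 8 * vm_ram_gb


# The example from the post: 12 GB of physical RAM, one VM @ 2 GB, one VM @ 4 GB.
host = host_reserve_mb(12)                    # 812 + 522.24 = 1334.24 MB
vms = vm_overhead_mb(2) + vm_overhead_mb(4)   # 40 + 56 = 96 MB
print(f"Host reserve: {host:.2f} MB, VM overhead: {vms} MB, total: {host + vms:.2f} MB")
```

For the 12 GB example that works out to 1,334.24 MB for the host plus 96 MB of per-VM overhead, so roughly 1.4 GB reserved in total.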

Live Migration between domains

For those of you who, like me, aren't experts in all things Active Directory (AD) and Hyper-V Live Migration (LM) permissions, it can be enough of a pain to LM a Virtual Machine (VM) between domains that you simply decide to take the VMs offline to effect the move. See, I only tolerate AD because it's required for LM'ing VMs; there isn't a choice. (It's also required for Windows Clusters, but that's a different topic.) But I figured it out. My back-story is that we set up a cluster using Windows 2012 r1 as the AD Domain Controller (DC) and Hyper-V Server 2012 r1 for the VM hosts. Then we decided we wanted to use r2 for the AD DC and the Hyper-V hosts. Upgrading Hyper-V was easy. But I found that there's some unresolved Microsoft bug with Windows Clustering when upgrading the AD DC from Windows 2012 r1 to Windows 2012 r2: clustering simply doesn't work correctly anymore. So we gave up and created a from-scratch Windows 2012 r2 AD DC, then made a new cluster

In my attempt to deploy ReFS into a Production environment, I did some testing to see what sort of performance hit I should expect. As it turns out, it doesn't look bad, especially considering the benefits I'm after (protection against bit rot). Be aware that what I was most concerned about was the differences between the 3 configurations I looked at. I used 6 physical drives, all 10k HDDs in the same DAS enclosure. 2 were in a hardware RAID-1 array at the DAS level. The next 2 were also in a hardware RAID-1 array at the DAS level. The last 2 were set up as JBOD (Dell's app makes me set them up as a single-disk RAID-0). So Windows sees a total of 4 drives. One of the RAID-1 drives was formatted as NTFS ("raid1, ntfs"). The other RAID-1 drive was formatted as ReFS ("raid1, refs"). The last 2 were added to a "manual" Storage Spaces Pool, then I explicitly created a Storage Spaces mirror from those 2 drives and formatted the resulting volume as ReFS.
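The post doesn't include the benchmark itself, but as a rough sketch of the kind of sequential-throughput comparison described, something like the following Python could be pointed at each of the formatted volumes. The drive letters, the file size, and the "ss mirror, refs" label are hypothetical placeholders, not the actual test rig.

```python
import os
import time

# Hypothetical mount points for the volumes Windows sees; adjust to the real drive letters.
VOLUMES = {
    "raid1, ntfs": "E:\\",
    "raid1, refs": "F:\\",
    "ss mirror, refs": "G:\\",
}
FILE_SIZE_MB = 1024      # size of the test file, in MB
CHUNK = 1024 * 1024      # write/read in 1 MB chunks


def sequential_write_read(path, size_mb=FILE_SIZE_MB):
    """Write then read back one large file; returns (write_MBps, read_MBps)."""
    test_file = os.path.join(path, "throughput_test.bin")
    data = os.urandom(CHUNK)

    start = time.perf_counter()
    with open(test_file, "wb") as f:
        for _ in range(size_mb):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the writes actually reach the disk
    write_mbps = size_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(test_file, "rb") as f:
        while f.read(CHUNK):
            pass
    read_mbps = size_mb / (time.perf_counter() - start)

    os.remove(test_file)
    return write_mbps, read_mbps


for label, path in VOLUMES.items():
    w, r = sequential_write_read(path)
    print(f"{label:16s} write: {w:7.1f} MB/s   read: {r:7.1f} MB/s")
```

Note that the read figure will mostly come from the Windows file cache, since the file was just written; a real comparison would use unbuffered I/O (or a dedicated tool such as diskspd) and run several passes against each volume.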