In my attempt to deploy ReFS into a production environment, I did some testing to see what sort of performance hit I should expect. As it turns out, it doesn't look bad, especially considering the benefit I'm after (protection against bit rot). Be aware that what I was most concerned about was the relative difference between the 3 configurations I tested, not absolute numbers.

I used 6 physical drives, all 10k HDDs in the same DAS enclosure. 2 were in a hardware RAID-1 array at the DAS level, and the next 2 were in a second hardware RAID-1 array, also at the DAS level. The last 2 were set up as JBOD (Dell's app makes me present each as a single-disk RAID-0). So Windows sees a total of 4 drives.
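If you want to sanity-check what Windows sees at that point, a couple of PowerShell one-liners will do it. This is only a rough sketch; the properties shown are just the ones worth looking at:

# The two DAS-level RAID-1 arrays and the two JBOD ("single-disk RAID-0") drives
# should show up as 4 disks in Windows.
Get-Disk | Sort-Object Number | Format-Table Number, FriendlyName, Size, PartitionStyle -AutoSize

# The JBOD drives are also the ones eligible for a Storage Spaces pool.
Get-PhysicalDisk -CanPool $true | Format-Table FriendlyName, Size, MediaType -AutoSize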

One of the RAID-1 drives was formatted as NTFS ("raid1, ntfs"). The other RAID-1 drive was formatted as ReFS ("raid1, refs"). The last 2 were added to a "manual" Storage Spaces pool, then I explicitly created a Storage Spaces mirror from those 2 drives and formatted the resulting volume as ReFS ("ss-m, refs"). Yes, I had to find an extra drive to add to the pool because of a dumb Microsoft limitation (a clustered storage pool has to contain at least 3 physical disks).
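If you'd rather script the Storage Spaces side, the pool/mirror/ReFS setup looks roughly like this in PowerShell. Treat it as a sketch, not the exact commands I ran; the pool name, virtual-disk name, and physical-disk names are placeholders:

# Create the pool from the poolable disks (the JBODs plus the extra drive the cluster demands).
$poolDisks = Get-PhysicalDisk -CanPool $true
$subSystem = Get-StorageSubSystem -FriendlyName "*Storage Spaces*"
New-StoragePool -FriendlyName "SS-Pool" -StorageSubSystemFriendlyName $subSystem.FriendlyName -PhysicalDisks $poolDisks

# Explicitly create a 2-way mirror from the two JBOD drives (placeholder disk names),
# then initialize, partition, and format it as ReFS.
$mirrorDisks = Get-PhysicalDisk -FriendlyName "PhysicalDisk5", "PhysicalDisk6"
New-VirtualDisk -StoragePoolFriendlyName "SS-Pool" -FriendlyName "ss-m" -ResiliencySettingName Mirror -PhysicalDisksToUse $mirrorDisks -UseMaximumSize

Get-VirtualDisk -FriendlyName "ss-m" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "ss-m, refs"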

Since my environment is a Windows 2012 r2 cluster hosting some Hyper-V VMs, I made those 3 volumes into CSVs so I could use them in the cluster as expected. I then created a new 200GB dynamic VHDX on the NTFS volume, attached it to a running, clustered VM, and SLM'd (Storage Live Migrated) the VHDX between the volumes as needed.
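However you actually drive them, the cluster/VHDX/SLM steps map to PowerShell roughly as follows. This is just a sketch; the cluster disk name, CSV paths, and VM name are placeholders:

# Add the disks to the cluster and convert them to Cluster Shared Volumes.
Get-ClusterAvailableDisk | Add-ClusterDisk
Add-ClusterSharedVolume -Name "Cluster Disk 1"   # repeat for the other two disks

# Create the 200GB dynamic VHDX on the NTFS CSV and hot-add it to a running VM.
New-VHD -Path "C:\ClusterStorage\Volume1\perf-test.vhdx" -SizeBytes 200GB -Dynamic
Add-VMHardDiskDrive -VMName "TestVM" -ControllerType SCSI -Path "C:\ClusterStorage\Volume1\perf-test.vhdx"

# Storage-Live-Migrate just that VHDX to the next volume under test.
Move-VMStorage -VMName "TestVM" -VHDs @(
    @{ SourceFilePath      = "C:\ClusterStorage\Volume1\perf-test.vhdx"
       DestinationFilePath = "C:\ClusterStorage\Volume2\perf-test.vhdx" })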

From within the VM, I ran CrystalDiskMark. Yes, that means that "on top" of the listed volume types there is always CSVFS, then the VHDX file, then NTFS inside the VHDX, which is where the test actually runs.

Each test-set was 5 runs at 4GB each. Since I created a dynamic VHDX, I ran the test-set once to make sure disk expansion wasn't an issue, then ran it a second time. (Since SLM'ing compacts/optimizes the VHDX, much like the Optimize-VHD PowerShell cmdlet does, I made sure to run the test-set at least twice for each volume.) It turns out that it didn't matter: CrystalDiskMark creates its 4GB test file during the "Preparing..." stage, so disk expansion never impacts the tests themselves.
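(If you want to run that compaction by hand rather than relying on an SLM, the cmdlet itself is the rough equivalent; the path below is a placeholder, and the VHDX can't be attached read/write while you optimize it.)

# Manually compact a dynamic VHDX outside of an SLM.
Optimize-VHD -Path "C:\ClusterStorage\Volume2\perf-test.vhdx" -Mode Full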

After I was done, I took an average of all my test-sets and created some graphs in Excel. Each volume was tested twice, except for "ss-m, refs", which was done 3 times. For the most part, I feel there wasn't significant variability between test-sets for a given volume, so I'm confident in a simple average.

Enjoy!
