In my attempt to deploy ReFS into a production environment, I did some testing to see what sort of performance hit I should expect. As it turns out, it doesn't look bad, especially considering the benefit I'm after (protection against bit rot). Be aware that what I was most concerned about was the relative differences between the three configurations I looked at, not the absolute numbers.

I used six physical drives, all 10k HDDs in the same DAS enclosure. Two were in a hardware RAID-1 array at the DAS level, and the next two were in a second hardware RAID-1 array, also at the DAS level. The last two were set up as JBOD (Dell's management app makes me configure each one as a single-disk RAID-0). So Windows sees a total of four drives.

One of the RAID-1 drives was formatted as NTFS ("raid1, ntfs"). The other RAID-1 drive was formatted as ReFS ("raid1, refs"). The last two drives were added to a "manual" Storage Spaces pool, then I explicitly created a Storage Spaces mirror from those two drives and formatted the resulting volume as ReFS ("ss-m, refs"). Yes, I had to find an extra drive to add to the pool because of a dumb Microsoft limitation (a clustered storage pool needs at least three physical disks).
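
For reference, the Storage Spaces part of that setup looks roughly like the PowerShell below. This is just a sketch; I actually did it through the GUI, and the pool and volume names here are made up:

# Sketch only: pool/volume names are made up; I did this through the GUI.
# Grab the JBOD (single-disk RAID-0) drives that are eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true

# Create the pool from those disks.
New-StoragePool -FriendlyName "TestPool" `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Explicitly create a two-way mirror virtual disk from the pool.
$vd = New-VirtualDisk -StoragePoolFriendlyName "TestPool" -FriendlyName "ss-m" `
    -ResiliencySettingName Mirror -NumberOfDataCopies 2 `
    -ProvisioningType Fixed -UseMaximumSize

# Bring it online and format it as ReFS.
$vd | Get-Disk | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "ss-m, refs"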

Since my environment was a Windows 2012 R2 cluster hosting some Hyper-V VMs, I then made those three volumes into CSVs (Cluster Shared Volumes) so I could use them in the cluster as expected. I then created a new 200GB dynamic VHDX on the NTFS volume and attached it to a running, clustered VM, then used Storage Live Migration (SLM) to move the VHDX between the volumes as needed.
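
In PowerShell terms, that plumbing looks something like the sketch below. The VM name, cluster-disk name, and paths are made up for illustration; I did most of this through Failover Cluster Manager and Hyper-V Manager:

# Sketch only: VM, cluster-disk, and path names are made up.
# Add each of the three test disks to the cluster as a Cluster Shared Volume.
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Create the 200GB dynamic VHDX on the NTFS CSV and hot-add it to a running, clustered VM.
New-VHD -Path "C:\ClusterStorage\Volume1\perftest.vhdx" -SizeBytes 200GB -Dynamic
Add-VMHardDiskDrive -VMName "TestVM" -ControllerType SCSI `
    -Path "C:\ClusterStorage\Volume1\perftest.vhdx"

# Storage Live Migrate just that VHDX to the next volume between test-sets.
Move-VMStorage -VMName "TestVM" -VHDs @(
    @{ SourceFilePath      = "C:\ClusterStorage\Volume1\perftest.vhdx"
       DestinationFilePath = "C:\ClusterStorage\Volume2\perftest.vhdx" })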

From within the VM, I ran CrystalDiskMark. Yes, that means that "on top of" each of the listed volume types there's always CSVFS, then a VHDX file, and then NTFS inside the VM, which is where the test actually runs.

Each test-set was set to 5 runs at 4GB each. Since I created a dynamic VHDX, I ran the test-set once to make sure disk expansion wasn't skewing things, then ran it a second time. (Storage Live Migrating a dynamic VHDX compacts/optimizes it, much like the Optimize-VHD PowerShell cmdlet does, so I made sure I ran the test-sets at least twice for each volume.) It turns out that it didn't matter: CrystalDiskMark creates its 4GB file during the "Preparing..." stage, so disk expansion never impacts the tests themselves.
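
If you wanted to force that compaction by hand instead of relying on an SLM, it would look something like this (the path is made up, and the VHDX has to be detached or the VM shut down first):

# Sketch only: path is made up; the VHDX can't be in use when you compact it.
Optimize-VHD -Path "C:\ClusterStorage\Volume1\perftest.vhdx" -Mode Full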

After I was done, I took an average of all my test-sets and created some graphs in Excel. Each volume was tested twice, except for "ss-m, refs", which was done 3 times. For the most part, I feel there wasn't significant variability between test-sets for a given volume, so I'm confident in a simple average.

[Graphs: averaged CrystalDiskMark results for each volume]
Enjoy!
