Yes, You Can Run A Hyper-V Cluster On Your JBOD Storage Spaces

What does a cluster require?  Shared storage.  What shared storage is supported in WS2012/R2?

  • SAS SAN
  • iSCSI SAN
  • Fibre Channel SAN
  • FCoE
  • PCI RAID (like Dell VRTX)
  • Storage Spaces

What's the difference between a cluster and a Hyper-V cluster?  You've enabled Hyper-V on the nodes.  Here are 2 nodes, each connected to a JBOD.  Storage Spaces is configured on the JBOD to create Cluster Shared Volumes (CSVs).  All that remains is to enable Hyper-V on node 1 and node 2, and now you have a valid Hyper-V cluster that stores VMs on the CSVs.
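
A minimal PowerShell sketch of that build, assuming two WS2012 R2 nodes already cabled to the shared JBOD; the node, cluster, pool, and disk names are illustrative, and the partition/format step is elided:

    # Run on each node: add the Hyper-V and Failover Clustering features.
    Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart

    # Validate and create the cluster (names and IP are examples).
    Test-Cluster -Node "Node1", "Node2"
    New-Cluster -Name "HVC1" -Node "Node1", "Node2" -StaticAddress "192.168.1.50"

    # Pool the shared JBOD disks as clustered Storage Spaces.
    # (Check Get-StorageSubSystem for the exact clustered subsystem name.)
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "Pool1" -PhysicalDisks $disks `
        -StorageSubSystemFriendlyName "Clustered Storage Spaces*"

    # Carve a mirrored virtual disk from the pool.
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "CSV1" `
        -ResiliencySettingName Mirror -Size 2TB -ProvisioningType Fixed
    # ... initialize, partition, and format the new disk as NTFS on the owner node ...

    # Convert the clustered disk into a Cluster Shared Volume
    # (the cluster resource name will vary; this one is an example).
    Get-ClusterResource "Cluster Virtual Disk (CSV1)" | Add-ClusterSharedVolume

Once a VM's files are placed under C:\ClusterStorage\Volume1, that VM can live migrate or fail over between the two nodes.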

It's completely supported, and a perfect Hyper-V cluster solution for the small/medium business, with the JBOD costing a fraction of the price of an equivalent-capacity SAN.

Stupid questions that you should not ask:

  • What file shares do I use to store my VMs on?  Where do you see "file shares" in the above text?  You store the VMs directly on the CSVs, just as in a Hyper-V cluster with a SAN, instead of storing file shares on the CSVs as in a SOFS cluster.
  • Can I run other roles on the hosts?  No.  You should never do that … and that includes Exchange Server and SQL Server, for the 2 people who asked that recently and who I now hope have resigned from working in IT.
  • The required networks if you use 10 GbE are shown above.  Go look at converged networks for all possible designs; it's the same clustered 2012/R2 Hyper-V networking as always (a minimal sketch follows this list).
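
For the converged design, here is a minimal sketch of the management-OS virtual NICs on a 10 GbE team, assuming a NIC team already exists named "Team1"; the switch, network, VLAN, and weight values are all examples:

    # Virtual switch on the team, with bandwidth reservation by weight.
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Management-OS virtual NICs for the usual cluster networks.
    Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

    # Tag a VLAN and reserve bandwidth per network (values are examples).
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 101
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40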

18 thoughts on “Yes, You Can Run A Hyper-V Cluster On Your JBOD Storage Spaces”

    1. The LSI adapter already has 2 interfaces. I think there would be better ways to spend budget … or not spend it.

  1. Is this still supported in VMM? I had read that having the SOFS and Hyper-V roles on the same host was not supported in VMM.

    1. VMM does not like Hyper-V hosts to also be SOFS nodes that share storage via SMB 3.0 to other nodes/clusters. And to be honest, that scenario is pretty pointless. Either the servers are Hyper-V hosts with Storage Spaces clustered storage, or they are SOFS nodes that use Storage Spaces clustered storage.

  2. Thanks for everything you’ve put up here – it’s helping me a great deal as I learn more about storage technology!

    In this model, how do you scale out storage? If you were to daisy-chain two more DataON DNS-1640Ds off of the daisy-chain port in your diagram, you would have a single point of failure. If you add a second JBOD enclosure, would you need to add another 2-port SAS card to each server to provide the same path redundancy?

    1. The best way is to add more LSI cards to the clustered servers. That way you direct-connect the new JBODs and do not actually daisy-chain.
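
    A quick hedged check that newly direct-connected JBODs are visible with redundant paths from each node (Get-StorageEnclosure ships in WS2012 R2; mpclaim is the in-box MPIO tool):

        # List the enclosures this node can see, with health and slot counts.
        Get-StorageEnclosure | Select-Object FriendlyName, HealthStatus, NumberOfSlots

        # Confirm MPIO has claimed both SAS paths to each disk.
        mpclaim -s -d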

  3. How big is a small or medium business?
    We have around 100 virtual machines.
    Would you recommend this for a 4-node Hyper-V cluster?

    1. It's all relative. How many hosts you need depends on your hosts' capacity, your VM resource requirements, and your desire to increase VM:host density.

  4. Aidan, your diagram seems to still rely on a couple of SOFS servers, is that right? I’m confused! I’m looking to build a Hyper-V cluster with two servers, SS and a JBOD, no SOFS servers.

    1. The VRTX of the past uses PCI RAID and is therefore not Storage Spaces compatible. That has either recently changed, or will be changing AFAIK. Talk to Dell or a Dell reseller.

  5. Hi, I have found suggestions that a minimum of three storage enclosures is required to survive an individual enclosure outage. Does it make sense to build a solution according to those suggestions, or would a dual-PSU/dual-SAS-expander design with one enclosure be safe enough?

    1. If you are cloud scale (hundreds of racks) then single PSU is fine. But in that case, you work for Microsoft, Google or Amazon and you already know that 😉 Otherwise you do dual PSU/dual SAS expanders, etc. And if you have enough data, or will have enough eventually, you should put in the number of JBODs required for your mirroring to get fault tolerance (3 for 2-way mirroring or 4 for 3-way); see the sketch below.
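
    If you do deploy enough JBODs for enclosure fault tolerance, the virtual disk has to be created as enclosure-aware so each mirror copy lands in a different enclosure. A minimal sketch; the pool name, disk name, and size are illustrative:

        # 3 JBODs + 2-way mirror (or 4 JBODs + 3-way): spread copies across enclosures.
        New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "EA-CSV1" `
            -ResiliencySettingName Mirror -NumberOfDataCopies 2 -Size 4TB `
            -ProvisioningType Fixed -IsEnclosureAware $true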

  6. Setting up a Dell VRTX with two nodes to be in a small Hyper-V cluster with 2-4 VMs each. One option is to set up the 8 HDDs as one virtual disk (i.e. using PCI RAID) and present it to the two nodes in multiple assignment and do CSV. But couldn't I instead present each physical HDD as a separate disk with a 1-disk RAID0 to the Hyper-V nodes in multiple assignment and use Storage Spaces? Which option would be better?

  7. Aidan, thanks for the advice given here. I am setting up a small 2-node cluster with Windows Server 2016 Datacenter edition. There is no SAN, so I need to use DAS storage (6 x 3 TB SAS drives) on each host. I also have 2 x SSD drives in each Dell R730 host. Any recommendations on providing shared storage for VM failover would be much appreciated. Should I use RAID 6?
