Yesterday we took delivery of a DataOn DNS-1640D JBOD tray with 8 * 600 GB 10K disks and 2 * 400 GB dual channel SSDs. This is going to be the heart of V2 of the lab at work, providing me with physical, scalable, and continuously available storage.
Below you can see the architecture of the setup. Let’s start with the DataOn JBOD. It has dual controllers and dual PSUs. Each controller has some management ports for factory usage (not shown). In a simple non-stacked solution such as below, you’ll use SAS ports 1 and 2 to connect your servers. A SAS daisy chaining port is included to allow you to expand this JBOD to multiple trays. Note that if scaling out the JBOD is on the cards, then look at the much bigger models – this one takes 24 * 2.5” disks.
I don’t know why people still think that SOFS disks go into the servers – THEY GO INTO A SHARED JBOD!!! Storage inside a server cannot be HA; there is no replication or striping of internal disks between servers. In this case we have inserted 8 * 600 GB 10K HDDs (capacity at a budget) and 2 STEC 400 GB SSDs (speed). This will allow us to implement WS2012 R2 Storage Spaces tiered storage and write-back cache.
I’m recycling the 2 servers that I’ve been using as Hyper-V hosts for the last year and a half. They’re HP DL360 servers. Sadly, HP Proliants are stuck in the year 2009 and I can’t use them to demonstrate and teach new things like SR-IOV. We’re getting in 2 Dell rack servers to take over the Hyper-V host role, and the HP servers will become our SOFS nodes.
Both servers had 2 * dual port 10 GbE cards, giving me 4 * 10 GbE ports. One card was full height and the other modified to half height – occupying both slots in the servers. We got LSI controllers to connect the 2 servers to the JBOD. Each LSI adapter is full height and has 2 ports. Thus we needed 4 SAS cables. SOFS Node 1 connects to port 1 on each controller on the back of the JBOD, and SOFS Node 2 connects to port 2 on each controller. The DataOn manual shows you how to attach further JBODs and cable the solution if you need more disk capacity in this SOFS module.
Note that I have added these features:
- Multipath I/O: To provide MPIO for the SAS controllers. There are rumblings of performance issues with this enabled.
- Windows Standards-Based Storage Management: This provides us with integration into the storage, e.g. SCSI Enclosure Services (SES)
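Both features can be added via Server Manager or PowerShell. Here’s a rough sketch – verify the exact feature names with Get-WindowsFeature on your own build before running this:

```powershell
# Install MPIO and the standards-based storage management feature
# (feature names as I believe they appear in Get-WindowsFeature - check yours)
Add-WindowsFeature -Name Multipath-IO, WindowsStorageManagementService

# Claim the SAS-attached disks for MPIO (a reboot may be required)
Enable-MSDSMAutomaticClaim -BusType SAS
```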
The network design is what I’ve talked about before. The on-board 1 GbE NICs are teamed for management. The servers now have a single dual port 10 GbE card. These 10 GbE NICs ARE NOT TEAMED – I’ve put them on different subnets for SMB Multichannel (different subnets are a cluster requirement). That means they are simple traditional NICs, each with a different IP address. I’ve used New-NetQosPolicy to do QoS for those 2 networks on a per-protocol basis. That means that SMB 3.0, backup, and cluster communications go across these two networks.
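To give an idea of what per-protocol QoS looks like, here’s a sketch using the built-in match filters. The policy names and bandwidth weights are illustrative, not my production values:

```powershell
# SMB 3.0 traffic (storage, and in my case backup over SMB) - built-in SMB filter
New-NetQosPolicy -Name "SMB" -SMB -MinBandwidthWeightAction 60

# Live migration traffic - built-in filter for the live migration port
New-NetQosPolicy -Name "LiveMigration" -LiveMigration -MinBandwidthWeightAction 30
```

The minimum bandwidth weights are only enforced when the NIC is congested; otherwise any protocol can burst to line rate.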
Hans Vredevoort (a fellow Hyper-V MVP) went with a different approach: teaming the 10 GbE NICs and presenting team interfaces that are bound to different VLANs/subnets. In WS2012 R2, the Dynamic teaming mode will use flowlets to truly aggregate data and spread even a single data stream across the team members (physical interfaces).
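If you wanted to go Hans’s route instead, the team and its VLAN-bound team interfaces would look something like this – the adapter names and VLAN IDs here are made up:

```powershell
# Switch-independent team with the Dynamic load distribution mode
New-NetLbfoTeam -Name "Team10GbE" -TeamMembers "NIC3","NIC4" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Additional team interfaces, each bound to its own VLAN/subnet
Add-NetLbfoTeamNic -Team "Team10GbE" -VlanID 101 -Name "SMB1"
Add-NetLbfoTeamNic -Team "Team10GbE" -VlanID 102 -Name "SMB2"
```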
The storage pool is created in Failover Clustering. While TechEd demos focused on PowerShell, you can create a tiered pool and tiered virtual disks in the GUI. PowerShell is obviously the best approach for standardization and repetitive work (such as consulting). I’ve fired up a single virtual disk so far with a nice chunk of SSD tiering and it’s performing pretty well.
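For the repeatable PowerShell route, a tiered pool and a tiered virtual disk look roughly like the below. The friendly names and tier sizes are examples, and the clustered storage subsystem name varies, so check Get-StorageSubSystem first:

```powershell
# Pool every disk the JBOD presents that is eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true
$sub   = Get-StorageSubSystem | Where-Object FriendlyName -like "Clustered*"
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemUniqueId $sub.UniqueId `
    -PhysicalDisks $disks

# Define the two tiers by media type
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Two-way mirrored virtual disk with a slice of each tier and a write-back cache
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "CSV1" `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,500GB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB
```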
I wanted to test quickly before the new Dell hosts come, so Hyper-V is enabled on the SOFS cluster. This is a valid deployment scenario, especially for a small/medium enterprise (SME). What I have built is the equivalent (more, actually) of a 2-node Hyper-V cluster with a SAS attached SAN … albeit with tiered storage … and that storage was less than half the cost of a SAN from Dell/HP. In fact, the retail price of the HDDs is around 1/3 the list price of the HP equivalent. There is no comparison.
I deployed a bunch of VMs with differencing disks last night. Nice and quick. Then I pinned the parent VHD to the SSD tier and created a boot storm. Once again, nice and quick. Nothing scientific has been done and I haven’t done comparison tests yet.
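The pinning itself is a two-step job with the tiering cmdlets. A sketch – the file path and tier name are examples from my lab, and for a CSV you’d run the optimization on the owner node:

```powershell
# Pick out the SSD tier of the relevant virtual disk (tier names vary)
$ssdTier = Get-StorageTier | Where-Object FriendlyName -like "*SSDTier*"

# Pin the parent VHDX so it always lives on the SSD tier
Set-FileStorageTier -FilePath "C:\ClusterStorage\Volume1\Parent.vhdx" `
    -DesiredStorageTier $ssdTier

# Move the file now rather than waiting for the nightly tier optimization task
Optimize-Volume -DriveLetter E -TierOptimize
```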
But it was all simple to set up and way cheaper than traditional SAN. You can’t beat that!