How Lab At Work Is Configured At The Moment

As I share from time to time, here is a description of what the test lab I run at work looks like at the moment. It has grown a little bit since the last of these posts.


There are 7 physical computers split into two AD domains:

Physical Network

  • There is an ADSL internet connection using a Netgear router.
  • We have a WatchGuard XTM 2 Series firewall. This provides me with my primary VPN route into the lab (SSL VPN client) and connects the on-premises lab with Microsoft Azure. I also use the built-in wireless access point to connect to the lab using my laptop while in the office.
  • There are 2 x dumb Netgear 24-port 1 GbE switches.
  • 1 x HP ProCurve 10 GbE SFP+ switch that I hate. Its only redeeming quality is that it is 10 GbE, allowing the iWARP cards to be plugged in.

Lab.Internal

This environment is pretty static; it gives me a way into the lab and enough fabric to rebuild the demo lab from scratch.

  • Lab-DC1: This is an old Sony laptop. I run AD on here for the lab domain. Here you can find WSUS, and RRAS serves as one of my emergency backdoors in. This machine has just a 1 GbE network connection.
  • Lab-Storage1: This is a beefy HP DL370 G6 storage box with lots of capacity. I store all of my ISOs and images here. I have enabled Hyper-V and run the management pieces of the Demo.Internal domain on here, including AD and System Center. This machine has 1 GbE networking and 2 x iWARP (10 GbE RDMA) ports, each of which is connected to a different virtual switch – I enable vRSS in VMs that run on this host and do SMB Multichannel in the guest OS (there is a rough sketch of that guest-side configuration just after this list). I’ve also done the unsupported Shared VHDX hack to enable Shared VHDX on local storage.
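
As a rough idea of what that looks like, here is a minimal sketch, run inside the guest OS of a VM on Lab-Storage1, that turns on RSS for the virtual NICs and then checks that SMB Multichannel is actually using both paths. The adapter names are made up; your vNICs will probably be called something else.

    # Run inside the guest OS (adapter names below are examples only).

    # Enable vRSS in the guest so receive traffic is spread across vCPUs.
    # Requires 2012 R2 and VMQ-capable physical NICs on the host.
    Enable-NetAdapterRss -Name "SMB1","SMB2"

    # Confirm RSS is now enabled on each virtual NIC.
    Get-NetAdapterRss -Name "SMB1","SMB2"

    # Check that the SMB client sees both interfaces as RSS/RDMA capable...
    Get-SmbClientNetworkInterface

    # ...and that SMB Multichannel has opened connections over both of them.
    Get-SmbMultichannelConnection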


Demo.Internal

The physical part of this environment is frequently built from scratch, using what is running/hosted in the Lab.Internal domain.

  • Virtual Management Stuff: You can see the range of things running in this domain that are hosted by Lab-Storage1.lab.internal. The most important of these is Demo-DC1, the DC for the demo domain. I run all of my demo System Center VMs as VMs on Lab-Storage1, and I also run demo PCs as VMs.
  • JBOD: I have a DataOn DNS-1640 with 8 x HDDs, and there are currently 6 x SSDs in there too. Yes, that is a very weird breakdown for tiering and for column counts (in a tiered mirror space, the smaller SSD tier caps the column count).
  • Demo-FS1 & Demo-FS2: These are HP DL360 G7 servers that are connected to the JBOD using LSI 9207-8e 6 Gbps SAS cards (the “8e” means eight external lanes: two connectors/cables, each carrying four 6 Gbps lanes). These servers are clustered to make the SOFS (there is a sketch of that build after this list). From time to time, I enable Hyper-V on them to have a second Hyper-V cluster. The servers have 4 x 1 GbE and 2 x iWARP for SMB 3.0 storage networking.
  • Demo-Host1 and Demo-Host2: Two Dell R420 servers that are my Hyper-V cluster. Each has 4 x 1 GbE and 4 x iWARP 🙂 That gives me lots of flexibility for SMB 3.0 designs. Normally VMs are stored on the SOFS, but you might have noticed that I also have an iSCSI target running as a VM on Lab-Storage1. My network design varies depending on what I’m trying to do.
  • Demo-Host3: This is an HP EliteBook 8740w. This “beast” was my work laptop until it was replaced by a Toshiba i5 KIRAbook – a portable lab is pretty useless to me now, so I prefer a light presentation machine that I can VPN from. The mobile workstation is now in the lab, where it runs as an additional host on 1 GbE networking. It gives me capacity for Hyper-V Replica (sketched below) and for quickly testing things without touching the Hyper-V cluster.
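
To put the tiering and column-count comments in context, here is a minimal sketch of how a tiered, mirrored space and the SOFS role might be built once the JBOD disks are visible to the Demo-FS1/Demo-FS2 cluster. Every friendly name, size and permission below is a made-up example, and the column count of 3 reflects the fact that the 6-disk SSD tier is the limiting factor for a two-way mirror.

    # Run on one of the clustered file servers; names and sizes are examples.

    # Pool every poolable disk in the JBOD into one clustered storage pool.
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "JBODPool" `
        -StorageSubSystemFriendlyName "Clustered Storage Spaces*" `
        -PhysicalDisks $disks

    # Define the two tiers by media type.
    $ssdTier = New-StorageTier -StoragePoolFriendlyName "JBODPool" -FriendlyName "SSDTier" -MediaType SSD
    $hddTier = New-StorageTier -StoragePoolFriendlyName "JBODPool" -FriendlyName "HDDTier" -MediaType HDD

    # Two-way mirrored, tiered virtual disk. With 6 x SSD and 8 x HDD the
    # column count is capped at 3 by the SSD tier.
    New-VirtualDisk -StoragePoolFriendlyName "JBODPool" -FriendlyName "CSV1" `
        -StorageTiers $ssdTier,$hddTier -StorageTierSizes 200GB,2TB `
        -ResiliencySettingName Mirror -NumberOfColumns 3 -WriteCacheSize 1GB

    # After the disk is formatted and added as a Cluster Shared Volume,
    # create the SOFS role and an SMB share for the Hyper-V hosts.
    Add-ClusterScaleOutFileServerRole -Name "Demo-SOFS1"
    New-SmbShare -Name "VMs1" -Path "C:\ClusterStorage\Volume1\VMs" `
        -FullAccess "DEMO\Demo-Host1$", "DEMO\Demo-Host2$", "DEMO\Hyper-V Admins"

Demo-Host1 and Demo-Host2 then simply point their VM storage at \\Demo-SOFS1\VMs1 over SMB 3.0.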
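
And because Demo-Host3’s main job is Hyper-V Replica and scratch testing, this is roughly what enabling replication of one VM to it looks like. The VM name, port and storage path are assumptions, and because the primary VMs live on a cluster, the replica traffic would normally be configured through a Hyper-V Replica Broker rather than straight from a cluster node.

    # On Demo-Host3: allow it to receive replica traffic (Kerberos over port 80).
    Set-VMReplicationServer -ReplicationEnabled $true `
        -AllowedAuthenticationType Kerberos -KerberosAuthenticationPort 80 `
        -ReplicationAllowedFromAnyServer $true `
        -DefaultStorageLocation "D:\HyperVReplica"

    # On the primary host: enable replication of a VM and start the initial copy.
    Enable-VMReplication -VMName "Demo-VM01" `
        -ReplicaServerName "demo-host3.demo.internal" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "Demo-VM01"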

Azure

With a site-to-site connection into Azure, I have capacity to deploy additional things in the cloud, with integrated management via System Center.
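
As a rough example of what deploying “additional things in the cloud” looked like with the Service Management (pre-Resource Manager) tooling of the time, here is a hedged sketch of standing up a test VM inside the VNet that the XTM connects to. The subscription, cloud service, VNet, subnet, image and credentials are all placeholders.

    # Classic (Service Management) Azure module; every name below is a placeholder.
    Import-Module Azure
    Select-AzureSubscription -SubscriptionName "Lab Subscription"

    # Build a small Windows VM and drop it into the lab subnet of the
    # site-to-site connected virtual network.
    $vm = New-AzureVMConfig -Name "Demo-Az1" -InstanceSize Small `
              -ImageName "<gallery image name>" |
          Add-AzureProvisioningConfig -Windows -AdminUsername "labadmin" -Password "<password>" |
          Set-AzureSubnet -SubnetNames "LabSubnet"

    New-AzureVM -ServiceName "DemoAzLab" -VMs $vm -VNetName "LabVNet" -Location "North Europe"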

The main changes over the past year have involved the addition of the XTM and Azure. My work has me spending a lot of time learning and teaching about Azure, so that side of things will continue. Our DataOn business has been growing, so we'll see how things go there. Of course, I'll have to stay up to date with the on-premises gear, so we'll see what changes might be driven by “Threshold” come TechEd Europe.

9 thoughts on “How Lab At Work Is Configured At The Moment”

  1. How’s the DNS-1640-D holding up? We’ve got exactly the same with 4x SSD & 8x HDD, but whenever I connect redundantly with 2 SAS cables per node we get severe performance issues/high latencies. DataOnStorage advised running Set-MSDSMGlobalDefaultLoadBalancePolicy LB on the nodes, but no setting helps, including the default None. The only thing helping so far is a non-redundant single cable per node (still redundant over NICs, so not a huge deal, but it SHOULD work with 2x SAS, hence my question). May I ask which brand/model SSDs/HDDs you use?

    1. The HDDs are Seagate and the SSDs are STEC and yucky SanDisk. NEVER buy SanDisk – they’ve shipped lots of units with old firmware that gives shit performance. Once I upgraded the firmware, all was well, including with MPIO set to LB.

      Right now, DataOn prefer HGST HDDs and SSDs.

  2. Aidan, that is a VERY nice lab!

    I just have 2 hypervisors and some old shared FC storage direct-connected to HBAs. Using the on-board quad-port Broadcom NIC is pretty terrible when I can only use one port for the Live Migration/cluster network, but with compressed Live Migration in 2012 R2 it is not that bad.
    I have no FC switch or 10 GbE infrastructure.
    Would it be possible to add a Mellanox ConnectX-3 card (for example) to each server and directly connect the hypervisors for a separate, dedicated Live Migration network?

    Thanks in advance for your advice!
