Today I downloaded and installed the free iSCSI target for Windows Server 2008 R2 that was just released. I needed something free and lightweight for the lab in work. We’re using a pair of HP DL165 G7s as clustered hosts and a DL180 G6 with “cheap” SATA disk as the “SAN”. I was planning on using Windows Storage Server 2008 R2, but then I saw the tweet by Microsoft’s Jose Barreto that announced the release. Perfect – that was one less ISO I would have to download.
I deployed W2008 R2 from the WDS VM in the lab and downloaded the compressed setup file. After it was extracted, I installed the target. That gives you a simple enough administration tool to use.
The service creates targets. Each target is a collection of disks (fixed-size VHDs that are stored on the iSCSI target server), and you permission the target by initiator IQN, MAC address, IP address … and I can’t remember if DNS name was one of the options or not.
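The 3.3 target I used is driven through its MMC console, but for comparison, the iSCSI target that later shipped in the box with Windows Server 2012 exposes the same concepts as PowerShell cmdlets. A rough sketch of the target/disk/permission steps follows – the target name, VHD path, size, and initiator IQN are all invented for illustration, and you should check Get-Help on your build for exact parameter names (e.g. the size parameter):

```
# Windows Server 2012 built-in iSCSI target - NOT the 2008 R2 download I used.
Import-Module IscsiTarget

# Create a target and permit only one initiator's IQN (example IQN)
New-IscsiServerTarget -TargetName "VMMLibrary" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:vmmhost.lab.local"

# Create a fixed-size VHD and map it to the target
# (size parameter name varies by build - verify with Get-Help New-IscsiVirtualDisk)
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\VMMLibrary.vhd" -Size 100GB
Add-IscsiVirtualDiskTargetMapping -TargetName "VMMLibrary" `
    -Path "D:\iSCSIVirtualDisks\VMMLibrary.vhd"
```

On the 2008 R2 download itself, you do the equivalent through the console wizards.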
I needed two targets. One would be for the VMM library. For my lab, VMM would be running as a VM on a standalone host (another DL165 G7). I set up a target with a disk and permitted the iSCSI addresses of the standalone host to connect.
On the standalone host I added the MPIO feature, enabled iSCSI support in MPIO, and added the iSCSI devices. In the initiator, I added the target IP address, enabled multipath, and added the volume. All I had to do then was format it in Disk Management.
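Those initiator-side steps can also be done from an elevated command prompt with the built-in mpclaim and iscsicli tools. A sketch – the portal address and IQN below are made up for illustration:

```
:: Claim iSCSI-attached disks for MPIO using the iSCSI bus type hardware ID
:: (a reboot follows the -r switch)
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

:: Add the target portal, list what it offers, and log in
iscsicli QAddTargetPortal 10.0.1.10
iscsicli ListTargets
iscsicli QLoginTarget iqn.1991-05.com.microsoft:san1-vmmlib-target
```

Note that QLoginTarget creates a single, non-persistent session; for multipathed and boot-persistent sessions you need the full LoginTarget/PersistentLoginTarget argument lists, or just use the initiator GUI as I did.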
For my Hyper-V cluster (all the networking was set up), I set up a second target, and permitted the 4 iSCSI NIC IP addresses of the 2 hosts to connect. The first disk I created was a 1GB VHD. This would be for the cluster witness.
Back on each clustered host, I added the Hyper-V role, and added the MPIO and Failover Clustering features. Once again, I enabled iSCSI support in MPIO and added the devices. On each host, I connected to the target IP address and enabled multipath. It found the second (cluster storage) target but not the first (VMM storage) target – as expected, because the VMM storage target did not permit the IP addresses of the clustered hosts’ iSCSI NICs to connect. The witness disk appeared on both hosts.
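On 2008 R2 that role and those features can be added in one go from PowerShell, using the names the ServerManager module understands:

```
Import-Module ServerManager
Add-WindowsFeature Hyper-V, Multipath-IO, Failover-Clustering -Restart
```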
Now I set up the cluster. The witness disk was added and I renamed it to “Witness Disk” in Failover Clustering.
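Validation and cluster creation can be scripted with the FailoverClusters module too. The host names and cluster IP address below are placeholders:

```
Import-Module FailoverClusters

# Validate first - a supported cluster wants a clean (or at least understood) report
Test-Cluster -Node Host1, Host2

# Create the cluster; with a 1GB shared disk present, it is normally
# selected as the disk witness automatically
New-Cluster -Name HVC1 -Node Host1, Host2 -StaticAddress 10.0.0.50
```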
Now I needed some storage for VMs. In the iSCSI target admin console, I created another disk on the “SAN” server of the required size. It was associated with the second (cluster storage) target, so the clustered hosts could now see it in Disk Management. I formatted the volume, labelling it as “CSV1”, and added it into Failover Clustering, renaming it as “CSV1” there too. CSV was enabled in Failover Clustering, and the CSV1 disk was added as CSV storage.
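The CSV steps have PowerShell equivalents on 2008 R2 as well – the disk resource name below assumes you renamed the clustered disk to CSV1 as I did:

```
Import-Module FailoverClusters

# CSV is an explicit opt-in on Windows Server 2008 R2
(Get-Cluster).EnableSharedVolumes = "Enabled"

# Promote the clustered disk resource to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "CSV1"
```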
I repeated that process to create CSV2.
A couple of VMs later and I had a fully functioning Hyper-V cluster working with a free Microsoft iSCSI target, running on relatively economical storage.
I found the iSCSI target to be really easy to set up and use. You just need to get used to the idea that you are sharing VHDs instead of LUNs to your iSCSI clients. The performance is OK – it’s never going to match a dedicated appliance like a Compellent, an HP P4000, or an EMC CLARiiON. But it sure does beat them on price and speed of availability. I had no complaints – but then, I intend this lab to be a lab, not a production private cloud with hundreds of VMs.
I was asked if I would run performance benchmarks. I thought this would be pointless – you cannot compare something that is intended to run on a huge variety of economical platforms (I’m using a non-dedicated HP 1 Gbps switch in the lab, along with slow SATA disk on a budget storage server) with a pre-set collection of gear like you get with an HP P4000 bundle. Everyone’s performance experience of this solution will vary wildly.
This sort of solution is going to be of use in two scenarios:
- Demonstrations and training labs: If you need to try something out quickly or show clustering in action, you can’t beat something that you can run on even a laptop and that is free to download and use.
- Low-end, budget production clusters: No, it cannot match a storage appliance or even other paid-for iSCSI software solutions for features or performance, but I bet you that many low-end, 2- or 3-node cluster owners would prefer economy over features. Not everyone needs snapshots replicating to a remote site, you know!
Give it a look-see and find out for yourself what it can do. You might have an EVA 8000 series or some monster Hitachi SAN for production – but maybe something like this could be useful in a test lab?