How Many NICs for Clustered Windows Server 8 Hyper-V?

If you asked me, any Hyper-V expert, or Microsoft that question about Windows Server 2008 R2, then it was easy: either 4 or 6 (or 8 or 10 with NIC teaming), depending on whether you used iSCSI (2 NICs with MPIO) or not.  Ask that question about Windows Server 8 and the answer is … it depends.

You do have several roles that need to be serviced with network connections:

  • Parent
  • Cluster/Storage
  • Live Migration
  • Hyper-V Extensible Switch (note that what we called a virtual network is now a virtual switch – a virtual network is now an abstraction or virtualisation).  This is probably serviced by 2 NICs with NIC teaming (done by Windows)

How these connections are physically presented to the network really does depend on the hardware in your server, whether you need physical fabric isolation or not (the trend is towards fabric convergence to reduce physical fabric complexity and cost), and whether you want to enable NIC teaming or not.

Here’s a converged example from yesterday’s Build Windows sessions that uses fault-tolerant 10 GbE NICs (teamed by Windows Server 8).

[Image: converged fabric example – all functions connect through a Hyper-V Extensible Switch bound to two teamed 10 GbE NICs]

All of the networking functions have port connections into the Hyper-V Extensible Switch.  The switch is bound to two 10 GbE network adapters in the host server.  NIC teaming provides network path fault tolerance (in my experience, a switch is more likely to die than a NIC now).  QoS ensures that each connection gets the necessary bandwidth – I reckon the minimum bandwidth option is probably best here because it provides a service guarantee and allows bursting when capacity is available.  Port ACLs can be used to control what a connection can talk to, providing network isolation.
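To make that a bit more concrete, here is a minimal sketch of how I imagine such a converged design could be scripted with the new PowerShell support.  The cmdlet names and parameters are my assumptions (they could well differ in the shipping product), and the adapter names, switch name, and weights are purely illustrative:

  # Team the two 10 GbE NICs (adapter names are made up for this sketch)
  New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

  # Bind the Hyper-V Extensible Switch to the team, using weight-based minimum bandwidth QoS
  New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false

  # Create host (parent partition) virtual NICs for each function
  Add-VMNetworkAdapter -ManagementOS -Name "Parent" -SwitchName "ConvergedSwitch"
  Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"
  Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

  # Guarantee each function a slice of the pipe (weights are illustrative only)
  Set-VMNetworkAdapter -ManagementOS -Name "Parent" -MinimumBandwidthWeight 10
  Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 20
  Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30

The weight-based minimum bandwidth mode is what gives each function its guaranteed slice while still allowing it to burst into idle capacity.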

The reason that MSFT highlighted this example is that it is becoming a common hardware configuration.  If you buy HP blades, you can do some of this now with their Flex10 solution.  Microsoft are recommending 10 GbE for future proofing, and you can use 2 NICs and physical switch ports with NIC teaming and network fault tolerance, instead of using 10 NICs and 10 switch ports for the 1 GbE alternative!

A lot of examples were shown.  This one goes down a more traditional route with physical isolation:

[Image: a more traditional example using physically isolated networks]

Most servers come with 4 * 1 GbE NICs by default.  You could take the above example and use just 1 * 1 GbE NIC for the Hyper-V Extensible Switch if budget was an issue, but you’d lose NIC teaming.  You could add NIC teaming to that example by adding another 1 GbE NIC (giving a total of 5 * 1 GbE NICs).

The answer to the “how many NICs” question is, fortunately and unfortunately, a consultant’s answer: it depends.

Enabling Multi-Tenancy and Converged Fabric for the Cloud Using QoS

Speakers: Charley Wen and Richard Wurdock

A pretty demo-intensive session.  We start off with a demo of “fair sharing of bandwidth” where PSH is used with the minimum bandwidth setting to give equal weight to a set of VMs.  One VM needs to get more bandwidth but can’t get it.  A new policy is deployed by script, the VM gets a higher weight, and it can then access more of the pipe.  A maximum bandwidth policy would have capped the VM so that it couldn’t use idle bandwidth.
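I didn’t capture the demo script, but a hedged sketch of the idea – the cmdlet and parameter names are my assumptions and the VM name is made up – would be something like this (it assumes the virtual switch was created in weight-based minimum bandwidth mode):

  # Fair sharing: give every VM's vNIC the same minimum bandwidth weight
  Get-VM | ForEach-Object { Set-VMNetworkAdapter -VMName $_.Name -MinimumBandwidthWeight 10 }

  # The "new policy" moment in the demo: raise one VM's weight so it can claim more of the pipe
  Set-VMNetworkAdapter -VMName "Tenant-VM02" -MinimumBandwidthWeight 50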

Minimum Bandwidth Policy

  • Enforce bandwidth allocation –> get performance predictability
  • Redistribute unused bandwidth –> get high link utilisation

The effect is that VMs get an SLA.  They always get the minimum if they require it.  They consume nothing if they don’t use it, and that b/w is available to others to exceed their minimum.

Min BW % = Weight / Sum of Weights

Example with a 1 Gbps pipe (the numbers imply a total weight of 10, so some weight is presumably assigned elsewhere):

  • VM 1 = weight 1 = 100 Mbps
  • VM 2 = weight 2 = 200 Mbps
  • VM 3 = weight 5 = 500 Mbps

If you have NIC teaming, there is no way to guarantee minimum b/w of total potential pipe. 

Maximum Bandwidth

For example, you have an expensive WAN link.  You can cap a customer’s ability to use the pipe based on what they pay.
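A hedged one-liner sketch of that kind of cap (the VM name is made up, the value is in bits per second, and the cmdlet/parameter names are my assumptions about the new PowerShell support):

  # Cap the customer's vNIC at roughly 100 Mbps
  Set-VMNetworkAdapter -VMName "CustomerA-VM01" -MaximumBandwidth 100000000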

How it Works Under the Covers

Picture a bunch of VMs trying to use a pNIC.  The pNIC reports its speed, and it reports when it sends a packet.  This is recorded in a capacity meter.  That feeds into the traffic meter, which determines the classification of each packet.  Using that, it figures out whether traffic exceeds the capacity of the NIC.  The peak bandwidth meter is fed by the latter and stops excess traffic (a draining process).

The reserved bandwidth meter guarantees minimum bandwidth.

All of this is done in software, and it is hardware vendor independent.

With all this you can do multi-tenancy without over-provisioning.

Converged Fabric

A simple image: two fabrics, network I/O and storage I/O, across iSCSI, SMB, NFS, and Fibre Channel.

That is expensive, so we’re trying to converge onto one fabric.  QoS can be used to guarantee service for the various functions of the converged fabric, e.g. run all network connections through a single Hyper-V Extensible Switch via a 10 GbE NIC team.

Windows Server 8 takes advantage of hardware where available to offload QoS.

We get a demo where a Live Migration cannot complete because the converged fabric is saturated (no QoS).  In the demo, a traffic class QoS policy is created and deployed.  Now the LM works as expected … the required bandwidth is allocated to the LM job.  The NIC in the demo supports hardware QoS, so it does the work.
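The demo policy wasn’t shown in detail, but a hedged sketch of what a Live Migration traffic class might look like follows.  The cmdlet and parameter names, the weights, and the 802.1p priority value are all my assumptions and may not match the preview build:

  # Software QoS: guarantee Live Migration a share of the converged link
  New-NetQosPolicy -Name "Live Migration" -LiveMigration -MinBandwidthWeightAction 30

  # With a DCB-capable NIC, tag the traffic and let the hardware enforce a traffic class instead
  New-NetQosPolicy -Name "Live Migration 802.1p" -LiveMigration -PriorityValue8021Action 5
  New-NetQosTrafficClass -Name "Live Migration" -Priority 5 -BandwidthPercentage 30 -Algorithm ETS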

Business benefit: reduced capital costs by using fewer switches, etc.

Traffic Classification:

  • You can have up to 8 traffic classes – 1 of them is storage, by default by the sound of it.
  • It appears that DCB is involved: the LAN miniport and the iSCSI miniport get traffic QoS with traffic classification.  My head hurts.

Hmm, they finished after using only half of their time allocation.

Windows Server 8 Hyper-V Day 1 Look Back

I’ve just been woken up from my first decent sleep (jetlag) by my first ever earthquake (3.5) and I got to thinking … yesterday (Hyper-V/Private Cloud day) was incredible.  Normally when I live blog I can find time to record what’s “in between the lines” and some of the spoken word of the presenter.  Yesterday, I struggled to take down the bullet points from the slides; there was just so much change being introduced.  There wasn’t any great detail on any topic, simply because there just wasn’t time.  One of the cloud sessions ran over the allotted time and they had to skip slides.

I think some things are easy to visualise and comprehend because they are “tangible”.  Hyper-V Replica is a killer headline feature.  The increased host/cluster scalability gives us some “Top Gear” stats: just how many people really have a need for a 1,000 BHP car?  And not many of us really need 63-host clusters with 4,000 VMs.  But I guess Microsoft had an opportunity to test and push the headline ahead of the competition, and rightly took it.

Speaking of Top Gear metrics, one interesting thing was that the vCPU:pCPU ratio of 8:1 was eliminated with barely a mention.  Hyper-V now supports as many vCPUs as you can fit on a host without compromising VM and service performance.  That is excellent.  I once had a quite low-end single 4-core CPU host that was full (on memory, before Dynamic Memory) but whose CPU only averaged 25%.  I could have reliably squeezed on way more VMs, easily exceeding the ratio.  The elimination of this limit by Hyper-V will further reduce the cost of virtualisation.  Note that you still need to respect the vCPU:pCPU ratio support statements of applications that you virtualise, e.g. Exchange and SharePoint, because an application needs what it needs.  Assessment, sizing, and monitoring are critical for squeezing in as much as possible without compromising on performance.

The lack of native NIC teaming was something that caused many concerns.  Those who needed it used third-party applications.  That caused stability issues and new security issues (check using HP NCU and VLANing for VM isolation), and I also know that some Microsoft partners saw it as enough of an issue to not recommend Hyper-V.  The cries for native NIC teaming started years ago.  Next year, you’ll get it in Windows Server 8.

One of the most interesting sets of features is how network virtualisation has changed.  I don’t have the time or equipment here in Anaheim to look at the Server OS yet, so I don’t have the techie details.  But this is my understanding of how we can do network isolation.

[Image: network isolation using Port ACLs instead of per-customer VLANs]

Firstly, we are getting Port ACLs (access control lists).  Right now, we have to deploy at least 1 VLAN per customer or application to isolate them.  N-tier applications require multiple VLANs.  My personal experience was that I could deploy customer VMs reliably in very little time.  But I had to wait quite a while for one or more VLANs to be engineered and tested.  It stressed me (customer pressure) and it stressed the network engineers (complexity).  Network troubleshooting (Windows Server 8 is bringing in virtual network packet tracing!) was a nightmare, and let’s not imagine replacing firewalls or switches.

Port ACLs will allow us to say what a VM can or cannot talk to.  Imagine being able to build a flat VLAN with hundreds or thousands of IP addresses.  You don’t have to subnet it for different applications or customers.  Instead, you could (in theory) place all the VMs in that one VLAN and use Port ACLs to dictate what they can talk to.  I haven’t seen a demo of it, and I haven’t tried it, so I can’t say more than that.  You’ll still need an edge firewall, but it appears that Port ACLs will isolate VMs behind the firewall.

[Image: VMs in a single flat VLAN, isolated from each other by Port ACLs]

Port ACLs have the potential to greatly simplify physical network design with fewer VLANs.  Equipment replacement will be easier.  Troubleshooting will be easier.  And now we have greatly reduced the involvement of the network admins; their role will be to customise edge firewall rules.
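Since I haven’t seen it yet, here is only a hedged sketch of what configuring a Port ACL might look like; the cmdlet and parameter names are my guesses at the new PowerShell support, and the VM name and addresses are made up:

  # Deny the VM's switch port everything by default, then allow only its own application subnet
  Add-VMNetworkAdapterAcl -VMName "Tenant1-Web" -RemoteIPAddress 0.0.0.0/0 -Direction Both -Action Deny
  Add-VMNetworkAdapterAcl -VMName "Tenant1-Web" -RemoteIPAddress 10.1.50.0/24 -Direction Both -Action Allow

The idea being that the more specific Allow overrides the blanket Deny, so the VM can only talk to its own subnet even though it sits in one big flat VLAN.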

Secondly, we have network or IP virtualisation, which is incredibly hard to visualise.  The concept is that a VM or VMs are running on network A, and you want to be able to move them to a different network B without changing IP addresses or causing downtime.  The scenarios include:

  • A company’s network is being redesigned as a new network with new equipment.
  • One company is merging with another, and they want to consolidate the virtualisation infrastructures.
  • A customer is migrating a virtual machine to a hoster’s network.
  • A private cloud or public cloud administrator wants to be able to move virtual machines around various different networks (power consolidation, equipment replacement, etc) without causing downtime.

[Image: VMs 10.1.1.101 and 10.1.1.102 on Network A (10.1.1.0/24)]

Any of these would normally involve an IP address change.  You can see above that the VMs (10.1.1.101 and 10.1.1.102) are on Network A with IPs in the 10.1.1.0/24 network.  That network has its own switches and routers.  The admins want to move the 10.1.1.101 VM to the 10.2.1.0/24 network, which has different switches and routers.

Internet DNS records, applications that (shouldn’t, but do) hard-code IP addresses, and other integrated services all depend on that static IP address.  Changing it on one VM would cause mayhem, with accusatory questions from the customer/users/managers/developers that make you out to be either a moron or a saboteur.  Oh yeah; it would also cause business operations downtime.  Changing an IP address like that is a problem.  In this scenario, 10.1.1.102 would lose contact with 10.1.1.101 and the service they host would break.

Today, you make the move and you have a lot of heartache and engineering to do.  Next year …

[Image: after the move – the VM keeps 10.1.1.101 inside the guest while actually running on the 10.2.1.0/24 network]

Network virtualisation abstracts the virtual network from the physical network.  IP address virtualisation does something similar.  The VM that was moved still believes it is on 10.1.1.101, and 10.1.1.102 can still communicate with it.  However, the moved VM is actually on the 10.2.1.0/24 network as 10.2.1.101.  The IP address is virtualised.  Mission accomplished.  In theory, there’s nothing to stop you from moving the VM to 10.3.1.0/24 or 10.4.1.0/24 with the same successful results.

How important is this?  I worked in the hosting industry, and there was a nightmare scenario that I was more than happy to avoid.  Hosting customers pay a lot of money for near-100% uptime.  They have no interest in, and often don’t understand, the intricacies of the infrastructure.  They pay not to care about it.  The host hardware, servers and network, had 3 years of support from the manufacturer.  After that, replacement parts would be hard to find and would be expensive.  Eventually we would have to migrate to a new network and servers.  How do you tell customers, who have applications sometimes written by the worst of developers, that they could have some downtime, and that there is then a risk that their application would break because of a change of IP?  I can tell you the response: they see this as being caused by the hosting company, and they expect that any work needed to repair the issues will be paid for by the hosting company.  And there’s the issue.  IP address virtualisation with expanded Live Migration takes care of that issue.

For you public or private cloud operators, you are getting metrics that record the infrastructure utilisation of individual virtual machines.  Those metrics will travel with the virtual machine.  I guess they are stored in a file or files, and that is another thing you’ll need to plan (and bill) for when it comes to storage and storage sizing (it’ll probably be a tiny space consumer).  These metrics can be extracted by a third party tool so you can analyse them and cross charge (internal or external) customers.
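I haven’t seen the tooling yet, so this is only a hedged sketch of how I imagine the metering could be driven from PowerShell; the cmdlet names are my assumptions and the VM name is made up:

  # Start collecting metrics for a VM; they accumulate over time and travel with the VM
  Enable-VMResourceMetering -VMName "Customer1-VM01"

  # Later (or from a chargeback tool), read the accumulated CPU, RAM, disk, and network figures
  Measure-VM -VMName "Customer1-VM01" | Format-List

  # Reset the counters at the start of a new billing period
  Reset-VMResourceMetering -VMName "Customer1-VM01"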

We know that the majority of Hyper-V installations are smaller, with the average cluster size being 4.78 hosts.  In my experience, many of these have a Dell EqualLogic or HP MSA array.  Yes, these are the low end of hardware SANs.  But they are a huge investment for customers.  Some decide to go with software iSCSI solutions, which also add cost.  Now it appears that those lower-end clusters can use file shares to store virtual machines, with support from Microsoft.  NIC teaming with RDMA gives massive data transport capabilities and gives us a serious budget solution for VM storage.  The days of the SAN aren’t over: SANs still offer functionality that we can’t get from file shares.

I’ve got more cloud and Hyper-V sessions to attend today, including a design one to kick off the morning.  More to come!

Using Windows Server 8 for Building Private and Public IaaS Clouds

Speakers: Jeff Woolsey and Yigal Edery of Microsoft.

Was the cloud optimization of Windows Server 8 mentioned yet? Probably not, but it’s mentioned now.

– Enable multi-tenant clouds: isolation and security
– High-scale and low-cost data centres
– Manageable and extensible: they are pushing PowerShell here

Windows Server 8 should make building an IaaS much easier.

Evolution of the data centre (going from least to most scalable):

1) Dedicated servers, no virtualisation, and the benefit of hardware isolation
2) Server virtualisation, with the benefits of server consolidation, some scale out, and heterogeneous hardware
3) Cloud with Windows Server 8: shared compute, storage, and network; multi-tenancy, converged networking, and hybrid clouds. Benefits: increased infrastructure utilisation, automatic deployment and migration of apps, VMs, and services, and scaling of network/storage.

Enable Multi-Tenant Cloud
What is added?
– Secure isolation between tenants: Hyper-V Extensible Switch (routing, etc.), isolation policies (can define what a VM can see in layer 2 networking), PVLANs
– Dynamic placement of services: Hyper-V network virtualisation, complete VM mobility, cross-premise connectivity (when you move something to the cloud, it should still appear on the network as internal for minimal service disruption)
– Virtual machine metering: virtual machine QoS policies, resource meters (measure activity of a VM over time, and those metrics stay with a VM when it is moved), performance counters

Requirements:
– Tenant wants to easily move VMs to and from the cloud
– Hoster wants to place VMs anywhere in the data center
– Both want: easy onboarding, flexibility and isolation

The Hyper-V Extensible Switch has PVLAN functionality. But managing VLANs is not necessarily the way you want to go: there is a maximum of 4,095 VLANs, and they are an absolute nightmare to maintain, upgrade, or replace. IP address management is usually controlled by the hoster.

Network virtualisation aims to solve these issues. A VM has two IPs: one it thinks it is using, and one that it really is using. “Each virtual network has the illusion it is running as a physical fabric”. The abstraction of the IP address makes the VM more mobile. Virtualisation unbinds the server and app from physical hardware. Network virtualisation unbinds the server and app from the physical network.

Mobility Design
Rule 1: no new features that preclude Live Migration
Rule 2: maximise VM mobility with security

Number 1: recommendation is Live Migration with High Availability
Number 2: SMB Live Migration
Number 3: Live Storage Migration

Live Storage Migration enables:
– Storage load balancing
– No downtime servicing
– Leverages Hyper-V Offloaded Data Transfer (ODX): pass a secure token to a storage array to get it to move large amounts of data for you. Possibly up to 90% faster.

You can Live Migrate a VM with just a 1 Gbps connection and nothing else. VHDX makes deployment easier: you get more than 2,040 GB in a vDisk without the need for pass-through disks, which require more manual and exceptional effort. Add in the virtual fibre channel HBA with MPIO and you reduce the need for physical servers for customer clusters in fibre channel deployments.
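For illustration only, a hedged sketch of creating one of these large virtual disks and a virtual fibre channel adapter; the cmdlet names are my assumptions about the new PowerShell support, and the paths, sizes, VM name, and SAN name are made up:

  # A dynamic VHDX far beyond the old 2,040 GB VHD ceiling
  New-VHD -Path "D:\VMs\BigData.vhdx" -SizeBytes 10TB -Dynamic

  # Give a VM a virtual fibre channel HBA on an existing virtual SAN
  Add-VMFibreChannelHba -VMName "SQLClusterNode1" -SanName "ProductionSAN"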

Bandwidth management is an option in the virtual network adapter. You can restrict bandwidth for customers with this. IPsec offload can be enabled to reduce CPU utilisation.

Up to 63 nodes in a cluster, with up to 4,000 VMs. That’s one monster cluster.

QoS and Resource Metering
Network: monitor incoming and outgoing traffic per IP address
Storage: high water mark disk allocation
Memory: high and low water mark memory, and average

We get a demo of resource meters being used to right-size VMs.

Dynamic Memory gets a new setting: Minimum RAM. Startup RAM could give a VM 1024MB, but the VM could reduce to Minimum RAM of 512MB if there is insufficient pressure.
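A hedged sketch of that setting – the cmdlet and parameter names are my assumptions, and the VM name and maximum value are made up:

  # Start the VM with 1,024 MB, let it balloon down to 512 MB under low pressure, and grow up to 4 GB
  Set-VMMemory -VMName "Web01" -DynamicMemoryEnabled $true -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB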

High scale and low cost data centres:
– The vCPU:pCPU ratio limit has been removed from Hyper-V support… just squeeze in what you can without impacting VM performance
– Up to 160 logical processors
– Up to 2 TB RAM

Networking:
– Dynamic VMQ
– Single root I/O virtualisation (SR-IOV): dedicate a pNIC to a VM
– Receive side scaling (RSS)
– Receive side coalescing (RSC)
– IPsec task offload

Storage
– ODX
– RDMA
– SMB 2.2
– 4K native disk support

HA and Data Protection
– Windows NIC teaming across different vendors of NIC!
– Hyper-V Replica for DR to a secondary site – either one I own or a cloud provider
– BitLocker: physically safeguard customers’ data. Even if you lose the disk, the data is protected by encryption. You can now encrypt cluster volumes. TPMs can be leveraged for the first time with Hyper-V cluster shared disks. The Cluster Name Object (CNO) is used to lock and unlock disks.

Manageable and Extensible
– PowerShell for Hyper-V by MSFT for the first time. Can use WMI too, as before.
– Workflows across many servers.
– Hyper-V Extensible switch to get visibility into the network
– WMIv2/CIM, OData, Data Center TCP

go.microsoft.com/fwlink/p/?LinkID=228511 is where a whitepaper will appear in the next week on this topic.

Build Windows: Windows Server 8

This is an IT pro session; Bill Laing (Corporate Vice President, Server & Cloud Division) and Mike Neil (General Manager, Windows Server) are the speakers.  This will be jam-packed with demos.

“Windows Server 8 is cloud optimized for all business” – Bill Laing.  For single servers and large clusters.  The 4 themes of this server release:

  • beyond virtualisation
  • The power of many servers, the simplicity of one
  • Every app, any cloud
  • Modern work style enabled

Hyper-V headline features:

  • network virtualisation
  • Live storage migration
  • multi-tenancy
  • NIC teaming
  • 160 logical processors
  • 32 virtual processors
  • virtual fiber channel
  • Offloaded data transfer (between VMs on the same storage)
  • Hyper-V Replica
  • Cross-premise connectivity
  • IP address mobility
  • Cloud backup

Did they mention cloud yet?  I think not: apparently this release is cloud optimized.

A VM can have up to 32 vCPUs.  RAM can be up to 512 GB.  VHDX supports up to 16 TB of storage per vDisk.  Guest NUMA means VMs are now NUMA aware … having 32 vCPUs makes this an issue.  A VM can optimise threads of execution versus memory allocation on the host.  A guest can now connect directly to a fibre channel SAN via a virtual fibre channel adapter/HBA – now the high-end customers can do in-VM clustering just like iSCSI customers.  You can do MPIO with this as well, and it works with existing supported guest OSs.  No packet filtering is done in the guest.

Live Migration.  You can now do concurrent Live Migrations.  Your limit is the networking hardware.  You can LM a VM from one host to another with “no limits”.  In other words, a 1 Gbps connection with no clustering and no shared storage is enough for a VM live migration now.  You use the Move wizard, and can choose pieces of the VM or the full VM.  Live Storage Migration sits under the hood.  It is using snapshots similar to what was done with Quick Storage Migration in VMM 2008 R2. 
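For illustration, a hedged sketch of the same moves in PowerShell; the cmdlet names are my assumptions about the new module, and the host name, VM name, and paths are made up:

  # "Shared nothing" live migration: move the running VM and its storage to another host over the network
  Move-VM -Name "Demo-VM01" -DestinationHost "HyperVHost2" -IncludeStorage -DestinationStoragePath "D:\VMs\Demo-VM01"

  # Live Storage Migration on its own: relocate the VM's files while it keeps running
  Move-VMStorage -VMName "Demo-VM01" -DestinationStoragePath "E:\VMs\Demo-VM01"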

On to Hyper-V networking.  What was slowing down cloud adoption?  Customers want hybrid computing.  Customers also don’t like hoster-enforced IP addressing.  The customer can migrate their VM to a hosting company and keep their IP address.  A dull demo, because it is so transparent.  This is IP Address Mobility.  The VM is exported.  Some PowerShell is involved at the hosting company.  Windows Server 8 Remote Access IPsec Secure Tunnel is used to create a secure tunnel from the client to the hosting company.  This extends the client cloud to create a hybrid cloud.  The moved VM keeps its original IP address and stays online.  Hosted customers can have common IP addresses.  Thanks to IP virtualisation, the VM’s internal IP is abstracted.  The client-assigned in-VM address is used for client site communications.  In the hosting infrastructure, the VM has a different IP address.

VLANs have been used by hosting companies for this in the past.  They were slow to deploy and complicate networking.  It also means that the network cannot be changed – EVER … been there, bought the t-shirt.

Cross-network VM live migration can be done thanks to IP virtualisation.  The VM can change its hosted IP address, but the in-VM address does not change.  This makes the hosting company more flexible, e.g. consolidating during quiet/maintenance periods, network upgrades, etc.  There is no service disruption, so the customer has no downtime, and the hosting company can move VMs via Live Migration as and when required.  This works just as well in the private cloud.  Private cloud = hosting company with internal customers.

More:

  • Extensible virtual switch
  • Disaster recovery services with Hyper-V Replica to the cloud
  • Hybrid cloud with Hyper-V network virtualisation
  • Multi-tenant aware network gateway
  • Highly available storage appliances

And more:

  • SMB transparent failover
  • Automated cluster patching
  • Online file system repairs
  • Auto load balancing
  • Storage spaces
  • Thin provisioning
  • Data de-duplication
  • Multi-protocol support
  • 2,300+ PowerShell cmdlets
  • Remote server admin
  • Knowledge sharing
  • Multi-machine management

Server Manager is very different.  Very pretty compared to the old MMC-style UI.  It has Metro Live Tiles that are alive.  The Task/Actions pane is gone.  Selecting a server shows events, services, Best Practices Analyzer results, performance alerts, etc.  You can select one server, or even select a number of servers at once.  A new grid control allows you to sort, filter, filter based on attribute, group, etc.  It makes cross-server troubleshooting much easier.  You can select a role, and you’ll see just the servers with that role.

Once again … “starting with Windows 8 the preferred install is Server Core”.  We’ll be the judge of that ;)  We ruled against MSFT on Server 2008 and Server 2008 R2 on that subject.  There’s a new add/remove roles wizard.  You can install a role to a live server or to a VHD!  This is offline installation of roles for pre-provisioning native VHD or VM VHD images.  You can even choose to export the settings to an XML file instead of deploying.  That allows you to run a PowerShell cmdlet to use the XML to install the role(s).  PowerShell now has workflows.  It converts a PSH function into a workflow that can work across multiple machines.  For example, deploy IIS (using install-windowsfeature and the XML file), deploy content, and test content (invoke-webrequest), across many machines in parallel – a big time saver instead of doing 1 machine at a time.  Great for big deployments, but I can really see software testers loving this.
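Here is a hedged sketch of those three ideas – installing a role into an offline VHD, installing from an exported XML, and a workflow fanning the same deployment out in parallel.  The exact parameters and workflow syntax are my assumptions about what was shown, and the paths, XML file, and server names are made up:

  # Install a role directly into an offline VHD (pre-provisioning an image)
  Install-WindowsFeature -Name Web-Server -Vhd "D:\Images\WebServer.vhdx"

  # Install roles on the local server from an exported configuration XML
  Install-WindowsFeature -ConfigurationFilePath "C:\Deploy\DeploymentConfig.xml"

  # A workflow fans the same deployment out to many servers in parallel
  workflow Deploy-WebTier {
      param([string[]] $Servers)
      foreach -parallel ($server in $Servers) {
          InlineScript {
              # Install IIS and friends from the exported XML, then smoke-test the default site
              Install-WindowsFeature -ConfigurationFilePath "C:\Deploy\DeploymentConfig.xml"
              Invoke-WebRequest -Uri "http://localhost/" -UseBasicParsing | Out-Null
          } -PSComputerName $server
      }
  }
  Deploy-WebTier -Servers "Web01","Web02","Web03"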

Data Deduplication allows you to store huge amounts of data on a fraction of the disk space by only storing unique data.  We see a demo of terabytes of data on 4% of the traditionally required space.  This is single instance storage on steroids.  Only unique blocks are written by the looks of it. 
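A hedged sketch of turning it on – the cmdlet names are my assumptions, the volume letter is just an example, and the Data Deduplication feature would need to be installed first:

  # Enable deduplication on a data volume and run an optimisation job
  Enable-DedupVolume -Volume "E:"
  Start-DedupJob -Volume "E:" -Type Optimization

  # Later, see how much space was actually reclaimed
  Get-DedupVolume -Volume "E:" | Format-List SavedSpace, SavingsRate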

Native NIC teaming has come to Windows Server.  No more third-party software is required for this, increasing stability and security while reducing support complexity.  In a demo, we see a SQL VM stored on a file share, with perfmon monitoring storage performance.  The host has 2 teamed NICs.  One is busy and one is idle.  The active NIC is disabled.  The idle NIC takes over automatically, as expected.  There is a tiny blip in storage performance … maybe 1-2 seconds.  The VM stays running with no interruption.

Now we see a high availability failover of a VM using a file share for the shared storage.

On to applications:

  • Symmetry between clouds
  • Common management
  • Common developer tools
  • Distributed caching
  • Pub/Sub messaging
  • Multi-tenant app container
  • Multi-tenant web sites
  • Sandboxing and QoS
  • NUMA aware scaling for IIS
  • Open Source support
  • Support for HTML5

Note: I can’t wait to do a road show on this stuff back in Ireland. 

  • Greater density with IIS8
  • Scalable apps for public/private clouds
  • Extension of programming tools
  • Websocket extensions

Work style improvements:

  • Remote sessions, VDI or apps.
  • USB devices support
  • Simplified VDI management: badly needed
  • RemoteFX for WAN!
  • User VHDs
  • RDP 3D graphics and sound
  • Claims based file access
  • And more

Controlling access to data with the discretionary access control lists (DACLs) that we have used up to now is difficult.  Dynamic Access Control allows you to specify AD attributes that dictate which objects can access a resource: e.g. an AD object with “Accounts” in a department attribute gets access to the Accounts file share.  This is done in the Classification tab for the folder.  Who populates the attributes?  Doesn’t a user have a lot of control over their own object?  The good thing: it is very flexible compared to DACLs.

When a user is denied access to content, they can click a Request Access button to ask an admin for access.  No need for helpdesk contact.

Automatic classification can search the content of data to classify it in case it is accidentally moved to the wrong location.  It removes the human factor from content security.

Next up: RDP.  Metro UI with touch is possible with 10 touch points, rather than 30.  Lovely new web portal has the Metro UI appearance.  RemoteApp is still with us.  Favourite RDP sessions are visible in Remote Desktop.  Locally cached credentials are used for single sign-on.  3D graphics are possible: we see a 3D model being manipulated with touch.  We see a Surface fish pond app with audio via RDP and 10 touch points.  Seriously IMPRESSIVE!  You can switch between RDP sessions like IE10 tabs in Metro.  You can flip between them and local desktop using Back, and use live Side-by-Side to see both active at the same time. 

My HP Microserver & Windows Home Server 2011

I have a lot of digital media scattered all over at home.  I’ve got documents (whitepapers and books), music, videos, and about 700 GB of photos (RAW, PSD, and JPEG), all of which are either on a laptop or a USB disk.  I have tried to back up, but it’s a painful, time-consuming process.  I have Live Mesh and Volume Shadow Copy up and running, but that’s no solution.

Last week I bought an HP MicroServer with the intention of running it as a home server.  It’s a low-end machine, with a dual-core AMD processor, 1 GB (max 8 GB) of RAM, and a 256 GB SATA drive.  I upgraded it with 2 * 2.5 TB Seagate “green” (low power) disks (removing the default 256 GB drive), and bumped up the memory by 4 GB.  It must be said that the chassis build is not great.  Getting the top cover off/on was a nightmare.  The board where the DIMMs sit can be seen in the following picture.  It’s at the bottom.  The sides do not come off, so you have to disconnect all those visible cables, undo 2 thumb screws, and wiggle the board out on the built-in slides.

The machine has a built-in wired NIC.  My home network is wireless N.  I probably could have gotten a wifi NIC for the machine (it has 2 PCIe slots), but I decided to span my network using a Devolo 200 Mbps powerline (Ethernet over power) kit:

[Image: Devolo powerline adapter kit]

This allows my Xbox and laptop/netbook/iDevices to sit free on the wifi network.  Upstairs in my office (a box room) is where the Devolo breaks out 3 wired connections and that’s where the HP Microserver, a PC, and printers (photo and general purpose) can be found.  The entire wifi and wired network runs on the same subnet.

I decided to try out Windows Home Server 2011 as the operating system.  It’s intended to do what I need:

  • Centralised storage
  • Automated backup of the storage and of PCs
  • Media streaming (to Xbox, PCs, or to remote connections)

My first install was just a test.  The server’s storage controller was set by default to not have RAID enabled.  The result was that WHS 2011 was installed on disk 1 with a 60 GB C: drive and a 2 TB D: drive, and the remnants were unused.  This is where I realised that it backs up using Windows Server Backup to VHD.  The 2 TB volume limit is a result of the size limit of a VHD.  Doh!  The disks are not RAIDed, so disk 2 was partitioned up as well.  Not so useful.

So yesterday afternoon I had time to revisit.  I configured the server with RAID 1 (wiping the contents of the disks) and reinstalled.  Or I tried to.  The install failed, with the message being something like “the installation has failed.  Please see the log for details”.  The log told me the setup was starting and then it stopped.  Useless!  I Googled, re-RAIDed, and recreated the USB installer – no joy.  Based on where the failure was (configuring Windows before the first reboot), it appears that the setup routine was trying to configure the boot environment and failing.

Eventually I tried installing on the 256 GB drive.  It worked.  OK – so the problem is RAID and/or the 2.5 TB drives.  I tried the following:

  • Preconfigure software based RAID1 array prior to installing WHS 2011 using DISKPART.  No joy because the WHS2011 installer wipes everything.
  • Install on 512 GB RAID1 drive set.  Worked fine.
  • Pop out a 512 GB drive and try repair with the 2nd drive being a 2.5 TB drive.  No joy because the RAID tool wouldn’t even give me the option.
  • Try to restore a backup from the first install to a 2.5 TB RAID 1 array.  No joy because the restore tool couldn’t see the WHS backup on the USB drive.

This left me with 2 choices:

  • Keep the 512 GB RAID1 array for the OS (and video), using 2 slots, and use the remaining 2 disk slots for a 2.5 TB RAID1 array.
  • Not use any RAID.  No way!

The end result is that I have a 512 GB RAID 1 drive with the OS on it, and a share for videos.  The 2.5 TB array is used for PC backup, docs/photos, and documents.  The volume is converted to GPT … and being 2.5 TB means that WHS 2011 backup won’t back it up.  I’m looking at an alternative solution now.

Everything is in the same Windows 7 homegroup on the network.  I copied a bunch of video and music onto the machine last night.  I was streaming video from it to my netbook via Windows Media Player last night, and that worked well.  I configured the remote access, and first thing this morning at work I was able to start watching “The King’s Speech” from home on my PC at work.  There is a minor loss in quality for bandwidth reasons, but that’d be acceptable for most people, I think.  That will come in handy whenever I’m staying in a hotel and the TV inevitably sucks.  As I write this post, I am listening to music streaming from home.  I can even log onto a home PC from work via the remote access feature – it’s kind of like using RDS Gateway, but much easier to configure.

Microsoft IT Environment Health Scanner

Credit to John McCabe for finding this useful looking tool. 

“The Microsoft IT Environment Health Scanner is a diagnostic tool that is designed for administrators of small or medium-sized networks (recommended up to 20 servers and up to 500 client computers) who want to assess the overall health of their network infrastructure. The tool identifies common problems that can prevent your network environment from functioning properly as well as problems that can interfere with infrastructure upgrades, deployments, and migration.
When run from a computer with the proper network access, the tool takes a few minutes to scan your IT environment, perform more than 100 separate checks, and collect and analyze information about the following:

  • Configuration of sites and subnets in Active Directory
  • Replication of Active Directory, the file system, and SYSVOL shared folders
  • Name resolution by the Domain Name System (DNS)
  • Configuration of the network adapters of all domain controllers, DNS servers, and e-mail servers running Microsoft Exchange Server
  • Health of the domain controllers
  • Configuration of the Network Time Protocol (NTP) for all domain controllers

If a problem is found, the tool describes the problem, indicates the severity, and links you to guidance at the Microsoft Web site (such as a Knowledge Base article) to help you resolve the problem. You can save or print a report for later review. The tool does not change anything on your computer or your network”.

Another Way To Give Wireless Access to Hyper-V VMs

When building my new demo laptop environment, I wanted a way to:

  1. Grant Internet access to VMs
  2. Give the VMs access to communicate with the host OS
  3. Keep VMs off of the office network – because I will do some messy stuff

Usually we use NIC bridging or Windows routing to get Hyper-V VMs talking on the wireless NIC because Hyper-V virtual networks cannot be bound to a wifi NIC.  But this would put my VMs on the office network which is a flat single-VLAN network.  That breaks requirement #3.

[Image: VMs on an internal virtual network, reaching the Internet via a proxy on the parent partition]

 

My solution is as seen above.  I created an internal virtual network.  That allows the VMs to talk to the parent partition (host OS) without physical network access.  To give the VMs Internet access (for Windows updates and activation), I installed a lightweight proxy on the parent partition.  Users on the VMs are configured to use the proxy, thus giving the VMs the required Internet access.  I can configure the proxy to use the wifi or wired NIC on the laptop for outbound communications.  This solution meets all 3 of my requirements.
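For anyone recreating this on a host that has a Hyper-V PowerShell module available (I built mine through Hyper-V Manager), here is a hedged sketch of the internal network part; the cmdlet names, switch name, addresses, and VM name are my own assumptions:

  # Internal virtual switch: VMs can talk to the parent partition but not the physical network
  New-VMSwitch -Name "InternalNet" -SwitchType Internal

  # Give the parent partition's vNIC on that switch a static address for the proxy to listen on
  New-NetIPAddress -InterfaceAlias "vEthernet (InternalNet)" -IPAddress 192.168.100.1 -PrefixLength 24

  # Attach a VM to the internal switch; inside the VM, the proxy is then configured as 192.168.100.1
  Connect-VMNetworkAdapter -VMName "DemoVM01" -SwitchName "InternalNet"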

How VMs Really Bind to a vSwitch in Hyper-V

Ben Armstrong  (aka the Virtual PC Guy) has just finished a presentation at the MVP Summit and presented one little bit of non-NDA info that I can share (and I’m sure Ben will correct me if [there’s an if?] I get it wrong).

Most people (including me up to this morning) assume the following about how VMs connect to a vSwitch in Hyper-V networking:

[Image: the assumed model – VLAN ID and VMQ as properties of the vNIC, which connects directly to the vSwitch]

We assume, thanks to the GUI making things easy for us, that properties such as VLAN ID and VMQ, which we edit in the vNIC properties, are properties of the vNIC in the VM.  We then assume that the vNIC connects directly to the vSwitch.  However, it is not actually like that at all in Hyper-V.  Under the covers, things work like this:

[Image: the actual model – the vNIC connects to a vSwitch port, and VLAN/VMQ are properties of that port]

In reality, the vNIC connects to a switch port.  This vSwitch Port is not a VM device at all.  And like in the physical world, the vSwitch Port is connected to the switch.  In Hyper-V some networking attributes (e.g. VLAN and VMQ) are not attributes of the vNIC but they’re attributes of the vSwitch Port.

What does this mean and why do you care?  You might have had a scenario where you’ve had to rescue a non-exported VM and want to import it onto a host.  You have some manipulation work to do to be able to do that first.  You get that done and import the VM.  But some of your network config is gone and you have to recreate it.  Why?  Well, that’s because those networking attributes were not attributes of the VM while it was running before, as you can see in the second diagram.
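To illustrate the point with a hedged sketch (the cmdlet names are my assumptions about the PowerShell support, and the VM name and VLAN ID are made up): even though you address the setting through the VM’s network adapter, the value lives with the switch port, which is why it doesn’t travel with a VM that is imported without its configuration.

  # Looks like a vNIC property, but under the covers this VLAN setting lives on the vSwitch port
  Set-VMNetworkAdapterVlan -VMName "Tenant1-VM01" -Access -VlanId 102

  # Inspect the port-level VLAN configuration
  Get-VMNetworkAdapterVlan -VMName "Tenant1-VM01"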

Bye Bye IPv4

Mark Minasi posted on Facebook last night that the very last IPv4 address blocks were distributed to regional IP managers.  That’s it; the last of the IPv4 addresses are now in the control of your local IP managers.

Now is the time to run to the supermarket, stock up on water and canned foods, get as much petrol/diesel as you can, and attend that crash-course survivalist training camp!!!!!!

Oh, hold on a sec: any decent ISP will have a certain allocation to keep them going for a while.  Your internal network is probably NATed, so you’ve no internal IP issues there.  But where we do have an issue is IPv6.  I can only speak for Ireland, but I’m guessing (other than China) most of us are totally unprepared for IPv6.  ISPs have not even started work on it – I’m told by those in the know that they have not taken the problem seriously.  And many network admins (including us server admins) don’t understand IPv6.  It is quite different.  It has different terminology and it works very differently.  For example, asking an end user to ping an IPv6 address will be … different.

My advice is, if you do have an external presence, do your best to stock up on IPv4 addresses now to meet short- and medium-term requirements.  They may not be there later on, and your local ISPs may not have the IPv6 alternative deployed.  Make sure your network appliances are IPv6 ready.  Start learning.  And put pressure on your ISP.