Microsoft made two significant announcements yesterday, further advancing their platform for cloud deployments.
Hyper-V Containers
Last year Microsoft announced a partnership with Docker, a leader in application containerization. The concept is similar to Server App-V, the now-deprecated service virtualization solution from Microsoft. Instead of having one OS per app, containers allow you to deploy multiple applications per OS. The OS is shared, and sets of binaries and libraries are shared between similar/common apps.
Hypervisor versus application containers
These containers can be deployed on a physical machine’s OS or within the guest OS of a virtual machine. Right now, you can deploy Docker app containers onto Ubuntu VMs in Azure and manage them from Windows.
Why would you do this? Because app containers are FAST to deploy. Mark Russinovich demonstrated a WordPress install being deployed in a second at TechEd last year. That’s incredible! How long does it take you to deploy a VM? File copies are quick enough, especially over SMB 3.0 with SMB Direct and SMB Multichannel, but the OS specialisation and updates take quite a while, even with enhancements. And Azure is actually quite slow at deploying VMs compared to a modern Hyper-V install.
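To put that speed in context, here is roughly what the WordPress demo boils down to. A minimal sketch, assuming an Ubuntu VM (in Azure or elsewhere) that already has the Docker engine installed; the container name and port mapping are my own illustrative choices:

```
# One-time image download; after this, container starts are near-instant
# because only a thin writable layer is created - there is no OS to
# specialise, patch, or reboot.
docker pull wordpress

# Deploy WordPress as a container - this is the step that takes about a second.
docker run -d --name blog -p 80:80 wordpress

docker ps   # the container is already up and serving on port 80
```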
Microsoft use the phrase “at the speed of business” when discussing containers. They want devs and devops to be able to deploy applications quickly, without the need to wait for an OS. And it doesn’t hurt, either, that there are fewer OSs to manage, patch, and break.
As part of that Docker partnership, Microsoft also announced that Windows Server vNext will offer Windows Server Containers: app containers that are native to Windows Server, all manageable via the Microsoft and Docker open source stack.
But there is a problem with containers: they share a common OS and sets of libraries and binaries. Anyone who understands virtualization will know that this creates a vulnerability gateway … a means to a “breakout”. If one application container is successfully compromised, then the OS is vulnerable. And that is a nice foothold for any attacker, especially when you are talking about publicly facing containers, such as those that might be in a public cloud.
And this is why Microsoft has offered a second container option in Windows Server vNext, based on the security boundaries of their hypervisor, Hyper-V.
Windows Server vNext offers Windows Containers and Hyper-V Containers
Hyper-V provides secure isolation for running each container, using the security of the hypervisor to create a boundary between each container. How this is accomplished has not been discussed publicly yet. We do know that Hyper-V containers will share the same management as Windows Server containers and that applications will be compatible with both.
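To illustrate what “same management” might mean in practice, here is a purely hypothetical sketch. Nothing below is announced syntax; the --isolation switch is my assumption about how a Docker-style CLI could let you pick the boundary at deploy time:

```
# Hypothetical - not announced syntax. Same image, same CLI; only the
# isolation boundary differs.
docker run -d --name app1 myapp                     # Windows Server container: shared kernel
docker run -d --name app2 --isolation=hyperv myapp  # Hyper-V container: hypervisor boundary
```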
Nano Server
It’s been a little while since a Microsoft employee leaked some details of Nano Server. There was a lot of speculation about Nano, most of which was wrong. Nano is a result of Microsoft’s, and their customers’, experiences in cloud computing:
- Infrastructure and compute
- Application hosting
Customers in these true cloud scenarios need a smaller operating system, and that is what Nano gives them. The OS goes beyond Server Core. It’s not just Windows without the UI; it is Windows without the I (interface). There is no logon prompt and no remote desktop. This is a headless server installation option that requires remote management via:
- WMI
- PowerShell
- Desired State Configuration (DSC) – you deploy the OS and it configures itself from a template you host (see the sketch after this list)
- RSAT (probably)
- System Center (probably)
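To make the DSC item in that list concrete, here is a minimal sketch of the headless management model. The node name (NANO01) and paths are hypothetical, and I’m using today’s built-in WS2012 R2 DSC resources; whatever ships with Nano may differ:

```powershell
# No console to log on to - everything is remote.
Enter-PSSession -ComputerName NANO01                         # remote PowerShell
Get-CimInstance Win32_OperatingSystem -ComputerName NANO01   # WMI/CIM query

# DSC: deploy the OS, then it converges itself to this declared state.
Configuration NanoFileServer {
    Node 'NANO01' {
        WindowsFeature FileServer {
            Name   = 'FS-FileServer'
            Ensure = 'Present'
        }
        File ShareRoot {
            DestinationPath = 'C:\Shares'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}
NanoFileServer -OutputPath C:\DSC                  # compile to a MOF
Start-DscConfiguration -Path C:\DSC -Wait -Verbose # push the configuration
```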
Microsoft also removed:
- 32-bit support (WOW64), so Nano will run only 64-bit code
- MSI, meaning that you need a new way to deploy applications … hmm … where did we hear about that very recently? *cough*
- A number of default Server Core components
Nano is a stripped-down OS, truly incapable of doing anything until you add the functionality you need.
The intended scenarios for Nano usage are in the cloud:
- Hyper-V compute and storage (Scale-Out File Server)
- “Born-in-the-cloud” applications, such as Windows Server containers and Hyper-V containers
In theory, a stripped-down OS should speed up deployment, make install footprints smaller (we need non-OEM SD card installation support, Microsoft), reduce reboot times, reduce patching (pointless if I reboot just once per month anyway), and reduce the number of bugs and zero-day vulnerabilities.
Nano Server sounds exciting, right? But is it another Server Core? Core was exciting back in W2008. A lot of us tried it, and today, Core is used in a teeny tiny number of installs, despite some folks in Redmond thinking that (a) it’s the best install type and (b) it’s what customers are doing. They were and still are wrong. Core was a failure because:
- Admins were not prepared to use it
- Many tasks still required on-console access
We got the ability to add/remove the UI in WS2012, but that system breaks once you have applied all of your updates. Not good.
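For reference, the WS2012/R2 convertible UI is just a pair of features; removing is easy, and it’s the re-add after months of updates where people hit trouble:

```powershell
# Drop to a Core-like install by removing the GUI features.
Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

# Re-adding the UI later is the step that breaks after updates; the
# feature payload may have to come from media matching the patch level.
Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart
```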
As for troubleshooting, Microsoft says to treat your servers like cattle, not like pets. Hah! How many of you have all of your applications running across dozens of load-balanced servers? Even big enterprise deploys applications the same way as an SME: on one to a handful of valuable machines that cannot be lost. How can you really troubleshoot headless machines that are having networking issues?
On the compute/storage stack, almost every issue I see on Windows Server and Hyper-V is related to failures in certified drivers and firmware, e.g. Emulex VMQ. Am I really expected to deploy a headless OS onto hardware where the HCL certification has the value of a bucket with a hole in it? If I were to deploy Nano, even in cloud-scale installations, then I would need a super-HCL that stress tests all of the hardware enhancements. And I would want ALL of those hardware offloads turned OFF by default so that I can verify functionality for myself, because clearly, neither Microsoft’s HCL testers nor the OEMs are capable of even the most basic test right now.
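Until such a super-HCL exists, here is the sort of self-verification I mean, using VMQ as the example; the adapter name is illustrative:

```powershell
# Inventory the VMQ state of every adapter in the host.
Get-NetAdapterVmq

# Turn VMQ off on a suspect NIC until the driver/firmware combination
# is proven - then re-test the workload.
Disable-NetAdapterVmq -Name 'Ethernet 1'
Get-NetAdapterVmq -Name 'Ethernet 1'    # confirm Enabled is now False
```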
Summary
In my opinion, the entry of containers into Windows Server and Hyper-V is a huge deal for larger customers and cloud service providers. This is true innovation. As for Nano, I can see the potential for cloud-scale deployments, but I cannot trust the troubleshooting-incapable installation option until Microsoft gives the OEMs a serious beating around the head and turns off hardware offloads by default.