Microsoft News – 29 June 2015

As you might expect, there’s lots of Azure news. Surprisingly, there is still not much substantial content on Windows 10.

Hyper-V

Windows Server

Windows Client


Azure

Office 365

EMS

Misc

Microsoft News – 25-May-2015

It’s taken me nearly all day to fast-read through this lot. Here’s a dump of info from Build, Ignite, and since Ignite. Have a nice weekend!

Hyper-V

Windows Server

Windows Client

System Center

Azure

Office 365

Intune

  • Announcing support for Windows 10 management with Microsoft Intune: Microsoft announced that Intune now supports the management of Windows 10. All existing Intune features for managing Windows 8.1 and Windows Phone 8.1 will work for Windows 10.
  • Announcing the Mobile Device Management Design Considerations Guide: If you’re an IT Architect or IT Professional and you need to design a mobile device management (MDM) solution for your organization, there are many questions that you have to answer prior to recommending the best solution for the problem that you are trying to solve. Microsoft has many new options available to manage mobile devices that can match your business and technical requirements.
  • Mobile Application Distribution Capabilities in Microsoft Intune: Microsoft Intune allows you to upload and deploy mobile applications to iOS, Android, Windows, and Windows Phone devices. In this post, Microsoft will show you how to publish iOS apps, select the users who can download them, and also show you how people in your organization can download these apps on their iOS devices.
  • Microsoft Intune App Wrapping Tool for Android: Use the Microsoft Intune App Wrapping Tool for Android to modify the behavior of your existing line-of-business (LOB) Android apps. You will then be able to manage certain app features using Intune without requiring code changes to the original application.

Licensing

Miscellaneous

Ignite 2015 – Windows Server Containers

Here are my notes from the recording of Microsoft’s New Windows Server Containers, presented by Taylor Brown and Arno Mihm. IMO, this is an unusual tech because it is focused on DevOps – it spans both IT pro and dev worlds. FYI, it took me twice as long as normal to get through this video. This is new stuff and it is heavy going.

Objectives

  • You will know enough about containers to be dangerous 🙂
  • Learn where containers are the right fit
  • Understand what Microsoft is doing with containers in Windows Server 2016.

Purpose of Containers

  • We used to deploy 1 application per OS per physical server. VERY slow to deploy.
  • Then we got more agility and cost efficiencies by running 1 application per VM, with many VMs per physical server. This is faster than physical deployment, but developers still wait on VMs to deploy.

Containers move towards a “many applications per server” model, where that server is either physical or virtual. This is the fastest way to deploy applications.

Container Ecosystem

An operating system virtualization layer is placed onto the OS (physical or virtual) of the machine that will run the containers. This lives between the user and kernel modes, creating boundaries in which you can run an application. Many of these applications can run side by side without impacting each other. Images, containing functionality, are run on top of the OS and create aggregations of functionality. An image repository enables image sharing and reuse.

[Image: container ecosystem diagram]

When you create a container, a sandbox area is created to capture writes; the original image is read only. The Windows container sees Windows and thinks it’s regular Windows. A framework is installed into the container, and this write is only stored in the sandbox, not the original image. The sandbox contents can be preserved, turning the sandbox into a new read-only image, which can be shared in the repository. When you deploy this new image as a new container, it contains the framework and has the same view of Windows beneath, and the container has a new empty sandbox to redirect writes to.

You might install an application into this new container; the sandbox captures the associated writes. Once again, you can preserve the modified sandbox as an image in the repository.

What you get is layered images in a repository, which are possible to deploy independently from each other, but with the obvious pre-requisites. This creates very granular reuse of the individual layers, e.g. the framework image can be deployed over and over into new containers.

Demo:

A VM is running Docker, the tool for managing containers. A Windows machine has the Docker management utility installed. There is a command-line UI.

docker images < lists the images in the repository.

There is an image called windowsservercore. He runs:

docker run --rm -it windowsservercore cmd

Note:

  • --rm (two hyphens): Remove the sandbox afterwards
  • -it: give me an interactive console
  • cmd: the program he wants the container to run

A container with a new view of Windows starts up a few seconds later and a command prompt (the desired program) appears. This is much faster than deploying a Windows guest OS VM on any hypervisor. He starts a second one. On the first, he deletes files from C: and deletes HKLM from the registry, and the host machine and second container are unaffected – all changes are written to the sandbox of the first container. Closing the command prompt of the first container erases all traces of it (--rm).

Development Process Using Containers

The image repository can be local to a machine (local repository) or shared to the company (central repository).

First step: what application framework is required for the project … .NET, node.js, PHP, etc.? Go to the repository and pull that image over; any dependencies are described in the image and are deployed automatically to the new container. So if I deploy .NET, a Windows Server image will be deployed automatically as a dependency.

The coding process is the same as usual for the devs, with the same tools as before. The compiled program is installed into a container and captured as a new “immutable image”. You can allow selected people or anyone to use this image in their containers, and the application is now very easy and quick to deploy; deploying the application image to a container automatically deploys the dependencies, e.g. the runtime and the OS image. Remember – future containers can be deployed with --rm, making it easy to remove and reset – great for stateless deployments such as unit testing. Every deployment of this application will be identical – great for distributed testing or operations deployment.
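To make that workflow concrete, here is a minimal sketch of the commands a developer might run from a PowerShell prompt on a container host. The session did not show this exact sequence, and the registry name (registry.contoso.com) is a hypothetical stand-in for a shared central repository:

docker pull windowsservercore                          # fetch the base OS image from the repository
docker build -t demoapp:1 .                            # bake the compiled program into a new immutable app image, using the docker file in the current folder
docker run --rm -it demoapp:1                          # test it in a throwaway container; --rm resets everything afterwards
docker tag demoapp:1 registry.contoso.com/demoapp:1    # point the image at the hypothetical central repository
docker push registry.contoso.com/demoapp:1             # share it so colleagues can deploy identical containers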

You can run versions of images, meaning that it’s easy to rollback a service to a previous version if there’s an issue.

Demo:

There is a simple “hello world” program installed in a container. There is a docker file, which is a text file with a set of directions for building a new container image.

The prereqs are listed with FROM; here you see the previously mentioned windowsservercore image.

WORKDIR sets the baseline path in the OS for installing the program, in this case, the root of C:.

Then commands are listed to install the software, followed by what will run by default when the resulting container starts. As you can see, this is a pretty simple example.

[Image: the sample docker file shown in the session]
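The slide itself is not reproduced here, but based on the description above, the docker file would look something like the minimal sketch below. The file name demoapp.exe and the exact directives are my assumptions, not the session's actual contents:

# Hypothetical reconstruction of the demo docker file
# The prereq image is listed with FROM; deploying this pulls in the OS layer automatically
FROM windowsservercore
# WORKDIR sets the baseline path for the install, in this case the root of C:
WORKDIR /
# "Install" the software by copying it into the image
COPY demoapp.exe .
# The program that runs by default when a container is started from this image
CMD demoapp.exe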

He then runs:

docker build -t demoapp:1 < which creates an image called demoapp with a version of 1. -t tags the image.

Running docker images shows the new image in the repository. Executing the below will deploy the required windowsservercore image and the version 1 demoapp image, and execute demoapp.exe – no need to specify the command because the docker file specified a default executable.

docker run --rm -it demoapp:1

He goes back to the demoapp source code, compiles it and installs it into a container. He rebuilds it as version 2:

docker build -t demoapp:2

And then he runs version 2 of the app:

docker run --rm -it demoapp:2

And it fails – that’s because he deliberately put a bug in the code – a missing dependent DLL from Visual Studio. It’s easy to blow the version 2 container away (--rm) and deploy version 1 in a few seconds.

What Containers Offer

  • Very fast code iteration: You’re using the same code in dev/test, unit test, pilot and production.
  • There are container resource controls that we are used to: CPU, bandwidth, IOPS, etc. This enables co-hosting of applications in a single OS with predictable levels of performance (SLAs).
  • Rapid deployment: layering of containers for automated dependency deployment, and the sheer speed of containers means applications will go from dev to production very quickly, and rollback is also near instant. Infrastructure no longer slows down deployment or change.
  • Defined state separation: Each layer is immutable and isolated from the layers above and below it in the container. Each layer is just differences.
  • Immutability: You get predictable functionality and behaviour from each layer for every deployment.

Things that Containers are Ideal For

  • Distributed compute
  • Databases: The database service can be in a container, with the data outside the container.
  • Web
  • Scale-out
  • Tasks

Note that you’ll have to store data in and access it from somewhere that is persistent.

Container Operating System Environments

  • Nano-Server: Highly optimized, and for born-in-the-cloud applications.
  • Server Core: Highly compatible, and for traditional applications.

Microsoft-Provided Runtimes

Two will be provided by Microsoft:

  • Windows Server Container: Hosting, highly automated, secure, scalable & elastic, efficient, trusted multi-tenancy. This uses a shared-kernel model – the containers run on the same machine OS.
  • Hyper-V Container: Shared hosting, regulate workloads, highly automated, secure, scalable and elastic, efficient, public multi-tenancy. Containers are placed into a “Hyper-V partition wrap”, meaning that there is no sharing of the machine OS.

Both runtimes use the same image formats. Choosing one or the other is a deployment-time decision, with one flag making the difference.
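The session did not name the flag, but for what it is worth, in the Docker tooling that later shipped for Windows this deployment-time choice surfaces as the --isolation option, with the same image working for both runtimes:

docker run --rm -it --isolation=process demoapp:1    # Windows Server Container: shared kernel with the machine OS
docker run --rm -it --isolation=hyperv demoapp:1     # Hyper-V Container: same image, wrapped in a Hyper-V partition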

Here’s how you can run both kinds of containers on a physical machine:

[Image: Windows Server Containers and Hyper-V Containers on a physical host]

And you can run both kinds of containers in a virtual machine. Hyper-V containers can run in a virtual machine that is running the Hyper-V role. The physical host must be running a hypervisor that supports virtualization of the VT instruction sets (ah, now things get interesting, eh?). The virtual machine is a Hyper-V host … hmm …

[Image: both kinds of containers running inside virtual machines]

Choosing the Right Tools

You can run containers in:

  • Azure
  • On-premises
  • With a service provider

The container technologies can be:

  • Windows Server Containers
  • Linux: You can do this right now in Azure

Management tools:

  • PowerShell support will be coming
  • Docker
  • Others

I think I read previously that System Center would add support. Visual Studio support was demonstrated at Build recently. And lots of dev languages and runtimes are supported. Coders don’t have to write with new SDKs; what’s more important is that Azure Service Fabric will allow you to upload your code and it will handle the containers.

Virtual machines are going nowhere. They will be one deployment option. Sometimes containers are the right choice, and sometimes VMs are. Note: you don’t join containers to AD. It’s a bit of a weird thing to do, because the containers are exact clones with duplicate SIDs. So you need to use a different form of authentication for services.

When can You Play With Containers?

  • Preview of Windows Server Containers: coming this summer
  • Preview of Hyper-V Containers: planned for this year

Containers will be in the final RTM of WS2016. You will be able to learn more on the Windows Server Containers site when content is added.

Demos

Taylor Brown, who ran all the demos, finished up the session with a series of demos.

docker history <name of image> < shows how the image was built – it looks like the dockerfile contents in reverse order. Note that passwords used in this file to install software appear to be legible in the image.

He tries to run a GUI tool from a container console – no joy. Instead, you can remote desktop into the container (get the IP of the container instance) and then run the tool in the Remote Desktop session. The tool run is Process Explorer.

If you run a system tool in the container, e.g. Process Explorer, then you only see things within the container. If you run a tool on the machine, then you have a global view of all processes.

If you run Task Manager, go to Details and add the session column, you can see which processes are owned by the host machine and which are owned by containers. Session 0 is the machine.

He runs docker run -it windowsservercore cmd < does not include --rm, which means we want to keep the sandbox when the container is closed. Typing exit in the container’s CMD will end the container, but the sandbox is kept.

Running docker ps -a shows the container ID and when the container was created/exited.

Running docker commit with the container ID and a name converts the sandbox into an image … all changes to the container are stored in the new image.
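Pulling those steps together, the capture workflow looks roughly like this; the image name patchedcore is my own placeholder:

docker run -it windowsservercore cmd       # no --rm, so the sandbox survives after exit
# ... make changes inside the container, then type exit ...
docker ps -a                               # list stopped containers and note the container ID
docker commit <container-id> patchedcore   # turn that sandbox into a new read-only image
docker images                              # the new image now appears in the repository
docker run --rm -it patchedcore cmd        # new containers start with the captured changes baked in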

Other notes:

The IP of the container is injected in, and is not the result of a setup. A directory can be mapped into a container. This is how things like databases are split into stateless and stateful; the container runs the services and the database/config files are injected into the container. Maybe SMB 3.0 databases would be good here?
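The session did not show the syntax for mapping a directory in, but with the Docker command line this is normally done with the -v (volume) option. A sketch, with hypothetical paths:

docker run --rm -it -v c:\databases:c:\data windowsservercore cmd    # c:\databases on the host appears as c:\data inside the container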

Questions

  • How big are containers on the disk? The images are in the repository. There is no local copy – they are referred to over the network. The footprint of the container on the machine is the running state (memory, CPU, network, and sandbox), the size of which is dictated by your application.
  • There is no plan to build HA tech into containers. Build HA into the application. Containers are stateless. Or you can deploy containers in HA VMs via Hyper-V.
  • Is a full OS running in the container? They have a view of a full OS. The image of Core that Microsoft will ship is almost a full image of Windows … but remember that the image is referenced from the repository, not copied.
  • Is this Server App-V? No. Conceptually at a really really high level they are similar, but Containers offer a much greater level of isolation and the cross-platform/cloud/runtime support is much greater too.
  • Each container can have its own IP and MAC address. It can use the Hyper-V virtual switch. NATing will also be possible as an alternative at the virtual switch. Lots of other virtualization features are available too.
  • Behind the scenes, the image is an exploded set of files in the repository. No container can peek into the directory of another container.
  • Microsoft are still looking at which of their own products will be supported by them in Containers. High priority examples are SQL and IIS.
  • Memory scale: It depends on the services/applications running in the containers. There is some kind of memory de-duplication for the common memory set, and further optimizations will be introduced over time.
  • There is work being done to make sure you pull down the right OS image for the OS on your machine.
  • If you reboot a container host, what happens? Container orchestration tools stop the containers on the host and create new instances on other hosts. The application layer needs to deal with this. The containers on the patched host stop/disappear during the patching/reboot – remember: they are stateless.
  • SMB 3.0 is mentioned as a way to present stateful data to stateless containers.
  • Microsoft is working with Docker and 3 containerization orchestration vendors: Docker Swarm, Kubernetes and Mesosphere.
  • Coding: The bottom edge of Docker Engine has Linux drivers for compute, storage, and network. Microsoft is contributing Windows drivers. The upper levels of Docker Engine are common. The goal is to have common tooling to manage Windows Containers and Linux containers.
  • Can you do some kind of IPC between containers? Networking is the main way to share data, instead of IPC.

Lesson: run your applications in normal VMs if:

  • They are stateful and that state cannot be separated
  • You cannot handle HA at the application layer

Personal Opinion

Containers are quite interesting, especially for a nerd like me that likes to understand how new techs like this work under the covers. Containers fit perfectly into the “treat them like cattle” model and therefore, in my opinion, have a small market of very large deployments of stateless applications. I could be wrong, but I don’t see Containers fitting into more normal situations. I expect Containers to power lots of public cloud task-based stuff. I can see large customers using it in the cloud, public or private. But it’s not a tech for SMEs or legacy apps. That’s why Hyper-V is important.

But … nested virtualization, not that it was specifically mentioned, oh that would be very interesting 🙂

I wonder how containers will be licensed and revealed via SKUs?

Ignite 2015–Windows 10 Management Scenarios For Every Budget

Speaker: Mark Minasi

“Windows 10 that ships in July will not be complete”. There will be a later release in October/November that will be more complete.

Option One

Windows 7 is supported until 2020. Windows 8 is supported until 2023. Mark jokes that NASA might have evidence of life on other planets before we deploy Windows 10. We don’t have to rush from Windows 7 to 10, because there is a free upgrade for 1 year after the release. Those with SA don’t have any rush.

Option Two

Use Windows 10. All your current management solutions will work just fine on enterprise and pro editions.

Identity in Windows 10

Option 1: Local accounts, e.g. hotmail etc.

Offers an ID used by the computer and many online locations. Lets you sync settings between machines via MSFT. Lets Store apps roam with your account. Minimal MDM. Works on Windows 8+ devices. It’s free – but management cost is high. Fine for homes and small organisations.

Option 2: AD joined.

Rich GPO management. App roaming via GPO. Roaming profiles and folder redirection. Wide s/w library. Must have AD infrastructure and CALs. Little to no value for phones/tablets. Can only join one domain.

Option 3: Cloud join.

Includes Azure AD, Office 365, Windows 10 devices. Enable device join in AAD, create AAD accounts. Enables conditional access for files. MDM via Intune. ID for Store apps. Requires AAD or O365. No on-prem AD required. Can only join one AAD. Can’t be joined to legacy AD. No trust mechanisms between domains.

The reasons to join to the cloud right now are few. The list will get much longer. This might be the future.

Demo: Azure AD device registration.

Deploying Apps to Devices

Option 1: Use the Windows Store

Needs a MSFT account and a credit card. You can get any app from the store onto a Windows 8+ device. Apps can roam with your account. LOB apps can be put in the store, but everyone sees them. You can sideload apps that you don’t want in the store, but it requires licensing and management systems. Governance is limited, and requiring everyone to deploy via credit card is a nightmare.

Option 2: Business Store Portal

New: businessstore.microsoft.com. Web based – no cost. Needs an AAD or MSFT account. Log into a MSFT account and get personal apps. Log in with an AAD account and get organisational apps. Admins can block categories of apps. Can create a category for the organisation. Can acquire X copies of a particular app for the organisation.

Option 3: System Center Configuration Manager

System Center licensing. On-premises AD required. Total control over corporate machines. Limited management over mobile devices. You can get apps from the Business Store in offline mode and deploy them via SCCM. When you leave the company or cannot sign into AD/AAD then you lose access to the org apps.

Controlling Apps in Windows 10

Session hosts in Azure:

You can deploy apps using this. RDS in the cloud, where MSFT manages load balancing and the SSL gateway, and users get published applications.

Windows 10 has some kind of Remote Desktop Caching which boosts the performance of Remote Desktop. One attendee, when asked, said it felt 3 times faster than Windows 8.x.

Device Guard:

A way to control which apps are able to run. Don’t think of it as a permanent road block. It’s more of a slowdown mechanism. You can allow some selected apps, apps with signed code, or code signed by some party. Apparently there’s a MSFT tool for easy program signing.

Hyper-V uses Virtual Secure Mode, hosting a mini-Windows in which the LSA runs in 1 GB of RAM. < I think this will only be in the Enterprise edition > This uses the TPM on the machine and a virtual TPM in the VM. Doesn’t work in current builds yet.

Ignite 2015–Nano Server: The Future of Windows Server

Speaker: Jeffrey Snover

Reasons for Nano Server, the GUI-less installation of Windows Server

 

  • It’s a cloud play. For example, minimize patching. Note that Azure does not have Live Migration so patching is a big deal.
  • CPS can have up to 16 TB of RAM moving around when you patch hosts – no service interruption but there is an impact on performance.
  • They need a server optimized for the cloud. MSFT needs one, and they think cloud operators need one too.

Details:

  • Headless, there is no local interface and no RDP. You cannot do anything locally on it.
  • It is a deep re-factoring of Windows Server. You cannot switch from Nano to/from Core/Full UI.
  • The roles they are focused on are Hyper-V, SOFS and clustering.
  • They also are focusing on born-in-the-cloud applications.
  • There is a zero-footprint model. No roles or features are installed by default. It’s a functionless server by default.
  • 64-bit only
  • No special hardware or drivers required.
  • Anti-malware is built in (Defender) and on by default.
  • They are working on moving over the System Center and app insights agents
  • They are talking to partners to get agent support for 3rd party management.
  • The Nano installer is on the TP2 preview ISO in a special folder. Instructions here.

Demo

  • They are using 3 x NUC-style PCs as their Nano Server cluster demo lab. The switch is bigger than the cluster, and takes longer to boot than Nano Server. One machine is a GUI management machine and 2 nodes are a cluster. They use remote management only – because that’s all Nano Server supports.
  • They just do some demos, like Live Migration and PowerShell
  • When you connect to a VM, there is a black window.
  • They take out a 4th NUC that has Nano Server installed already, connect it up, boot it, and add it to the cluster.

Note: this demo goes wrong. It might have been easier to troubleshoot with a GUI on the machine 🙂

Management

  • “removing the need” to sit in front of a server
  • Configuration via “Core PoSH” and DSC
  • Remote management/automation via Core PowerShell and WMI: Limited set of cmdlets initially. 628 cmdlets so far (since January). See the sketch after this list.
  • Integrate it into DevOps tool chains
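As a sketch of what that remote-only management looks like in practice (nano01 is a hypothetical computer name, and this is just standard PowerShell remoting rather than anything Nano-specific):

# Nano Server has no local logon, so everything is done remotely
$cred = Get-Credential
Enter-PSSession -ComputerName nano01 -Credential $cred

# Or run the session's "processes using more than 10 MB" query without an interactive session:
Invoke-Command -ComputerName nano01 -Credential $cred -ScriptBlock {
    Get-Process | Where-Object { $_.WorkingSet64 -gt 10MB }
}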

They want to “remove the drama and heroism from IT”. Server dies, you kill it and start over. Oh, such a dream. To be honest, I hardly ever have this issue with hosts, and I could never recommend this for actual application/data VMs.

They do a query for processes with memory more than 10 MB. There are 5.

Management Tools

Some things didn’t work well remotely: Device Manager and remote event logging. Microsoft is improving these tools to make remote management 1st class.

There will be a set of web-based tools:

  • Task manager
  • Registry editor
  • Event viewer
  • Device manager
  • sconfig
  • Control panel
  • File Explorer
  • Performance monitor
  • Disk management
  • Users/groups Manager

These can also be used with Core, MinShell, and Full UI installations.

We see a demo of web-based management, which appears to be the Azure Stack portal. This includes a registry editor and task manager in a browser. And yes, they run a PoSH console against the Nano Server in the browser too. Azure Stack could be a big deal.

Cloud Application Platform:

  • Hyper-V hosts
  • SOFS nodes
  • In VMs for cloud apps
  • Hyper-V containers

Stuff like PoSH management coming in later releases.

Terminology

  • At the base there is Nano Server
  • Then there is Server … what used to be Server Core
  • Anything with a GUI is now called Client, what used to be called Full UI

Client is what MSFT reckons should only be used for RDS and Windows Server Essentials. As has happened since W2008, customers and partners will completely ignore this 70% of the time, if not more.

The Client experience will never be available in containers.

The presentation goes on to talk about development and Chef automation. I leave here.

Survey Results – What percentage of your Windows APPLICATION servers run with MinShell or Core UI?

Another thank you, this time to the folks that answered this second survey, which focused on Windows Server application servers, no matter whether they were physical, virtual, on Hyper-V, or anything else.

In this survey I asked:

What percentage of your APPLICATION servers run with MinShell or Core UI? Consultants: Please answer with the most common customer scenario.

  • 0% – All of my servers have a FULL UI
  • 1-20%
  • 20-40%
  • 40-60% – Around half of my servers have MinShell or Core UI
  • 60-80%
  • 80-100% – All of my servers have MinShell or Core UI

In other words, I wanted to know what the market penetration was like for non-Full UI installations of Windows Server. I had a gut feeling, but I wanted to know for sure.

The Sample

I was worried about survey fatigue, and sure enough we had a drop from the amazing 425 responses of the previous survey. But we did have 242 responses:

[Image: chart of survey responses by country]

Once again, we saw a great breakdown from all around the world with the USA representing 25% of the responses.

Once again I recognize that the sample is skewed. Anyone, like you, who reads a blog like this, follows influencers on social media, or regularly attends something like a TechNet/Ignite/community IT pro event is not a regular IT pro. You are more educated and are not 100% representative of the wider audience. I suspect that more of you are using non-Full UI options (Hyper-V Server, MinShell or Core) than in the wider market.

The Results

Here we go:

[Image: chart of the overall results]

So the vast majority of people are not using any installations of MinShell or Core for their application servers. Nearly 15% have a few Core or MinShell installations and then we get into tiny percentages for the rest of the market.

[Image: chart of the results breakdown]

We can see quite clearly that, despite the evangelizing by Microsoft, the market prefers to deploy valuable servers with a UI that allows management and troubleshooting – not to mention support by Microsoft.

Is there a regional skewing of the data? Yes, to some extent. The USA (25% of responses) has opted to deploy a Full UI slightly less than the rest of the world:

[Image: chart of USA responses]

You can see the difference when we compare this to a selection of EU countries including Great Britain, Germany, Austria, Ireland, The Netherlands, Sweden, Belgium, Denmark, Norway, Slovenia, France and Poland (53% of the survey).

[Image: chart of responses from the selected EU countries]

FYI, the 4 responses that indicated that 80-100% of application servers were running MinShell or Core UI came from:

  • USA (2)
  • Germany (2)

My Opinion

I am slightly less hardline with Full vs Core/MinShell when it comes to application servers than I am with Hyper-V hosts. But I am not in complete agreement with the Microsoft mantra of Core, Core, Core. I know that when it comes to most LOB apps, even large enterprises have loads of those awful single or dual server installations that right-minded admins dislike – if that’s what devs deploy then there’s little we can do about it. And those are exactly the machines that become sacred cows.

However, in large scale-out apps where servers can be stateless, I can see the benefits of using Core/MinShell … to a limited extent. To be honest, I think Nano would be better when it eventually makes it to a non-infrastructure role.

Your Opinion

What do you think? Post your comments below.


Survey – What percentage of your Windows APPLICATION servers run with MinShell or Core UI?

And we’re back with a follow-up survey. The last time I asked you about your Hyper-V hosts and the results were very interesting. Now I want to know about your Windows Server application servers, be they physical, on VMware, Hyper-V, Azure, AWS, or any other platform. Note: I do not care about any hosts this time – just the application servers that are running Windows Server. Here is the survey:

[The survey form was embedded here]

As before, I’ll run the survey for a few days and then post the results.

Please share this post with colleagues and on social media so we can get a nice big sample from around the world.

 


Survey Results – What UI Option Do You Use For Hyper-V Hosts?

Thank you to the 424 (!) people who answered the survey that I started late on Friday afternoon and finished today (Tuesday morning). I asked one question:

What kind of UI installation do you use on Hyper-V hosts?

  • The FREE Hyper-V Server 2012 R2
  • Full UI
  • MinShell
  • Core

Before I get to the results …

The Survey

Some other MVPs and I used to do a much bigger annual survey. The work required from us was massive, and the number of questions put people off. I kept this one very simple. There were no “whys” or further breakdowns of information. This led to a bigger sample size.

The Sample

We got a pretty big sample size from all around the world, with results from the EU, USA and Canada, eastern Europe, Asia, Africa, the South Pacific, and South America. That’s amazing! Thank you to everyone who helped spread the word. We got a great sample in a very short period of time.

[Image: chart of survey responses by country]

However (there’s always one of these with surveys!), I recognize that the sample is skewed. Anyone, like you, who reads a blog like this, follows influencers on social media, or regularly attends something like a TechNet/Ignite/community IT pro event is not a regular IT pro. You are more educated and are not 100% representative of the wider audience. I suspect that more of you are using non-Full UI options (Hyper-V Server, MinShell or Core) than in the wider market.

Also, some of you who answered this question are consultants or have more complex deployments with a mixture of installations. I asked you to submit your most common answer. So a consultant that selects X might have 15 customers with X, 5 with Y and 2 with Z.

The Results

So, here are the results:

[Image: chart of the overall results]

 

70% of the overall sample chose the full UI for the management OS of their Hyper-V hosts. If we discount the choice of Hyper-V Server (they went that way for specific economic reasons and had no choice of UI) then the result changes.

Of those who had a choice of UI when deploying their hosts, 79% went with the Full UI, 5.5% went with MinShell, and 15% went with Server Core. These numbers aren’t much different to what we saw with W2008 R2, with the addition of MinShell taking share from Server Core. Despite everything Microsoft says, customers have chosen easier management and troubleshooting by leaving the UI on their hosts.

[Image: chart of the results excluding Hyper-V Server]

Is there a specific country bias? The biggest response came from the USA (111):

  • Core: 19.79%
  • MinShell: 4.17%
  • Full UI: 76.04%

In the USA, we find more people than average (but still a small minority) using Core and MinShell. Next I compared this to Great Britain, Germany, Austria, Ireland, The Netherlands, Sweden, Belgium, Denmark, Norway, Slovenia, France and Poland (not an entire European sample but a pretty large one from the top 20 responding countries, coming in at a total of 196 responses):

  • Core: 13.78%
  • MinShell: 4.08%
  • Full UI: 82.14%

It is very clear. The market has spoken and the market has said:

  • We like that we have the option to deploy Core or MinShell
  • But most of us want a Full UI

Those of you who selected Hyper-V Server did not waste your time. There are very specific and useful scenarios for this freely licensed product. And Microsoft loves to hear that their work in maintaining this SKU has a value in the market. To be honest, I expect this number (10.59%) to gradually grow over time as those without Software Assurance choose to opt into new Hyper-V features without upgrading their guest OS licensing.

My Opinion

I have had one opinion on this matter since I first tried a Core install for Hyper-V during the beta of Windows Server 2008. I would only ever deploy a Full UI. If (and it’s a huge IF) I managed a HUGE cloud with HA infrastructure, then I would deploy Nano Server on vNext. But in every other scenario, I would always choose a Full UI.

The arguments for Core are:

  • Smaller installation: Who cares if it’s 6GB or 16 GB? I can’t buy SD cards that small anymore, let alone hard disks!!!
  • Smaller attack footprint: You deserve all the bad that can happen if you read email or browse from your hosts.
  • Fewer patches: Only people who don’t work in the real world count patches. We in the real world count reboots, and there are no reductions. To be honest, this is irrelevant with Cluster Aware Updating (CAU).
  • More CPU: I’ve yet to see a host in person where CPU is over 33% average utilisation.
  • Less RAM: A few MB savings on a host with at least 64 GB (rare I see these anymore) isn’t going to be much benefit.
  • You should use PowerShell: Try using 3rd party management or troubleshooting isolated hosts with PowerShell. Even Microsoft support cannot do this.
  • Use System Center: Oh, child! You don’t get out much.
  • It stops admins from doing X: You’ve got other problems that need to be solved.
  • You can add the UI back: This person has not patched a Core install over several months and actually tried to re-add the UI – it is not reliable.

In my experience, and that of most people, servers are not cattle; they are not pets either; no – they are sacred cows (thank you for finding a good ending to that phrase, Didier). We cannot afford to just rebuild servers when things go wrong. They do need to be rescued and trouble needs to be fixed. Right now, the vast majority of problems I hear about are network card driver and firmware related. Try solving those with PowerShell or remote management. You need to be on the machine solving these issues, and you need a full UI. The unreliable HCL for Windows Server has led to awful customer experiences on Broadcom (VMQ enabled and faulty) and Emulex NICs (taking nearly 12 months to acknowledge the VMQ issue on FCoE NICs).

Owning a host is like owning a car. Those who live in the mainstream have a better experience. Things work better. Those who try to find cheaper alternatives, dare to be different, find other sources … they’re the ones who call for roadside assistance more. I see this even in the Hyper-V MVP community … those who dare to be on the ragged edge of everything are the ones having all the issues. Those who stay a little more mainstream, even with the latest tech, are the ones who have a reliable infrastructure and can spend more time focusing on getting more value out of their systems.

Another survey will be coming soon. Please feel free to comment your opinions on the above and what you might like to see in a survey. Remember, surveys need closed answers with few options. Open questions are 100% useless in a survey.

What about Application Servers?

That’s the subject of my next survey.

Using This Data

Please feel free to use the results of the survey if:

  • You link back to this post
  • You may use 1 small quote from this post

Continue To Use October 2014 Windows Server Preview

Lots of folks that are using Windows Server Technical Preview (from October 2014) were facing a ticking time bomb. The preview is set to expire on April 14th (tomorrow). Microsoft released a hotfix that will extend the life of the preview until the next preview is released in May.

Lots of folks have reported that this hotfix didn’t fix their issue. According to Microsoft:

  • If you are running Datacenter edition with a GUI then you need to activate the install with the key from here.
  • Sometimes you will need to run SLMGR /ato to reactivate the installation (see the sketch below).
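For reference, a minimal sketch of those activation steps from an elevated command prompt; the product key itself comes from the page linked above and is not reproduced here:

slmgr /ipk <product key from the linked page>    # install the preview product key
slmgr /ato                                       # force online (re)activation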

Survey – What Kind of UI Do You Use For Hyper-V Hosts?

I have a one question survey for you:

[The survey form was embedded here]

If you are a consultant or have multiple answers then please select the most commonly deployed option. Don’t select your preferred option, but what is really used most often.

Please tweet, Facebook, LinkedIn, whatever, this survey to get as big a sample as you can. You’ll see the results as they go along after voting.
