Windows Server 2019 Did Not RTM – And Why That Matters

I will start this article by saying there is a lot in Windows Server 2019 to like. There are good reasons to want to upgrade to it or deploy it – if I were still in the on-premises server business, I would have been downloading the bits as soon as they were shared.

As you probably know, Microsoft has changed the way that they develop software. It's done in sprints, and the goal is to produce software and get it into the hands of customers quickly. Whether it's Azure, Office 365, Windows 10, or Windows Server, the aim is the same.

This release of Windows Server is the very first to go through this process. When Microsoft announced the general availability of Windows Server 2019 on October 2nd, they shared those bits with everyone at the same time. Everyone – including hardware manufacturers. There was no “release to manufacturing” or RTM.

In the past, Microsoft would do something like this:

  1. Microsoft: Finish core development.
  2. Microsoft: RTM – share the bits privately with the manufacturers.
  3. Microsoft: Continue quality work on the bits.
  4. Manufacturing: Test & update drivers, firmware, and software.
  5. Microsoft & Manufacturing: Test & certify hardware, drivers & firmware for the Windows Server Catalog, aka the hardware compatibility list or HCL.
  6. Microsoft: 1-3 months after RTM, announce general availability or GA
  7. Microsoft: Immediately release a quality update via Windows Update

This year, Microsoft has gone straight to step 6 from the above to get the bits out to the application layer as quickly as possible. The OEMs got the bits the same day that you could have. This means that the Windows Server Catalog, the official listing of all certified hardware, is pretty empty. When I looked on the morning of Oct 3, there was not even an entry for Windows Server 2019 on it! Today (October 4th) there are a handful of certified components and 1 server from an OEM I don’t know:

[Screenshot: the Windows Server Catalog results for Windows Server 2019]

So my advice is, sure, go ahead and download the bits to see what Microsoft has done. Try out the new pieces and see what they offer. But hold off on production deployments until your hardware appears on this list.

I want to be clear here – I am not bashing anyone. I want you to have a QUALITY Windows Server experience. Too often in the past, I have seen people blame Windows/Hyper-V for issues when the issues were caused by components – maybe some of you remember the year of blue screens that Emulex caused for blade server customers running Windows Server 2012 R2 because of bad VMQ handling in their converged NIC drivers & firmware?

In fact, if you try out the software-defined features, Network Controller and Storage Spaces Direct (S2D), you will be told that you can’t try them out without opening a free support call to get a registry key – which someone will eventually share online. This is because those teams realize how dependent they are on hardware/driver/firmware quality and don’t want you judging their work by the problems of the hardware. The S2D team thinks the first wave of certified “WSSD” hardware will start arriving in January.

Note: VMware and other hypervisors should be considered as hardware here. Don’t go assuming that Windows Server 2019 is certified on them yet – wait for word from your hypervisor’s manufacturer.

Why would Microsoft do this? They want to get their software into application developers’ hands as quickly as possible. Container images based on Windows Server will be smaller than ever before – but container users are probably on the semi-annual channel, so WS2019 doesn’t mean much to them. Really, this is for people running Windows Server in a cloud, to get them the best application platform there is. Don’t start the conspiracy theories – if Microsoft had followed the above process, then none of us would be seeing any bits until maybe January! What they’ve effectively done is accelerate public availability while the Windows Server Catalog gets populated.

Have fun playing with the new bits, but be careful!

Physical Disks are Missing in Disk Management

In this post, I’ll explain how I fixed a situation where most of my Storage Spaces JBOD disks were missing in Disk Management and Get-PhysicalDisk showed their OperationalStatus as being stuck on “Starting”.

I’ve had some interesting hardware/software issues with an old lab at work. All of the hardware is quite old now, but I’ve been trying to use it in what I’ll call semi-production. The WS2016 Hyper-V cluster hardware consists of a pair of Dell R420 hosts and an old DataON 6 Gbps SAS Storage Spaces JBOD.

Most of the disks disappeared in Disk Management and thus couldn’t be added to a new Storage Spaces pool. I checked Device Manager and they were listed. I removed the devices and rebooted but the disks didn’t appear in Disk Management. I then ran Get-PhysicalDisk and this came up:

[Screenshot: Get-PhysicalDisk output with OperationalStatus stuck on "Starting"]

As you can see, the disks were there, but their OperationalStatus was hung on “Starting” and their HealthStatus was “Unknown”. If this was a single disk, I could imagine that it had failed. However, this was nearly every disk in the JBOD, spanning HDD and SSD. Something else was up – probably Windows Server 2016 or some firmware had thrown a wobbly and wasn’t wrapping up some task.

The solution was to run Reset-PhysicalDisk. The example on docs.microsoft.com was incorrect, but adding a foreach loop fixed things:

# Find every physical disk whose health status is unknown
$phydisk = Get-PhysicalDisk | Where-Object -FilterScript { $_.HealthStatus -eq "Unknown" }

# Reset each affected disk to clear the stuck "Starting" state
foreach ($item in $phydisk)
{
    Reset-PhysicalDisk -FriendlyName $item.FriendlyName
}

A few seconds later, things looked a lot better:

[Screenshot: Get-PhysicalDisk output showing the disks healthy again]

I was then able to create the new pool and virtual disks (witness + CSVs) in Failover Cluster Manager.
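
For reference, the same end result can be scripted. Here’s a minimal PowerShell sketch of roughly equivalent steps, assuming placeholder names such as "Pool1" and "CSV1" (these are not the names I actually used):

# Pool all of the disks that are eligible for pooling (placeholder names throughout)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Clustered Windows Storage*" -PhysicalDisks $disks

# A small mirrored witness disk plus a larger mirrored CSV
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "Witness" -FileSystem NTFS -ResiliencySettingName Mirror -Size 1GB
New-Volume -StoragePoolFriendlyName "Pool1" -FriendlyName "CSV1" -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 2TB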

Feedback Required By MS – Storage Replica in WS2019 STANDARD

Microsoft is planning to add Storage Replica into the Standard Edition of Windows Server 2019 (WS2019). In case you weren’t paying attention, Windows Server 2016 (WS2016) only has this feature in the Datacenter edition – a large number of us campaigned to get that changed. I personally wrecked the head of Ned Pyle (@NerdPyle) who, when he isn’t tweeting gifs, is a Principal Program Manager in the Microsoft Windows Server High Availability and Storage group – he’s one of the people responsible for the SR feature and he’s the guy who presents it at conferences such as Ignite.

What is SR? It’s volume-based replication in Windows Server Failover Clustering. The main idea was to enable replication of LUNs for companies that couldn’t afford SAN replication licensing. Some SAN vendors charge a fortune to enable LUN replication for disaster recovery, and SR is a solution for this.

A by-product of SR is a scenario for smaller businesses. With the death of cluster-in-a-box (manufacturers are focused on larger S2D customers), the small-medium business is left looking for a new way to build a Hyper-V cluster. You can do 2-node S2D clusters, but they have single points of failure (4 nodes are required to get over this) and require at least 10 GbE networking. If you use SR, you can create an active/passive 2-node Hyper-V cluster using just internal RAID storage in your Hyper-V hosts. It’s a simpler solution … but it requires Datacenter Edition today, and in the SME & branch office scenario, Datacenter only makes financial sense when there are 13+ VMs per host.
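
To give a sense of how simple SR is to turn on, here’s a minimal sketch of creating a replication partnership; the server names, replication group names, and volume/log letters are placeholders, and in a real deployment you’d run Test-SRTopology first:

# Replicate D: on HV1 to D: on HV2, with logs on L: (all names are placeholders)
New-SRPartnership -SourceComputerName "HV1" -SourceRGName "RG01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" -DestinationComputerName "HV2" -DestinationRGName "RG02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"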

Ned listened to the feedback. I think he had our backs and understood where we were coming from. So SR has been added to WS2019 Standard in the preview program. Microsoft wants telemetry (people actually using it) and feedback – there’s a survey here. SR in Standard will be limited. Today, those limits are:

  • SR replicates a single volume instead of an unlimited number of volumes.
  • Servers can have one partnership instead of an unlimited number of partners.
  • Volume size limited to 2 TB instead of an unlimited size.

Microsoft really wants feedback on those limitations. If you think those limitations are too low, then TALK NOW. Don’t wait for GA when it is too late. Don’t be the idiot at some event who gives out shite when nothing can be done. ACT NOW.

If you cannot get the hint, complete the survey!

Online Windows Server Mini-Conference – June 26th

Microsoft wants to remind you that they have this product called Windows Server, and that it has a Windows Server 2016 release, a cool new administration console, and a future (Windows Server 2019). In order to do that, Microsoft will be hosting an online conference, the Windows Server Summit, on June 26th with some of the big names behind the creation of Windows Server.

This event will have a keynote featuring Erin Chapple, Director of Program Management, Cloud + AI (which includes Windows Server). Then the event will break out into a number of tracks with multiple sessions each, covering things like:

  • Hybrid scenarios with Azure
  • Security
  • Hyper-converged infrastructure (Storage Spaces Direct/S2D)
  • Application platform (containers on Windows Server)

The event, on June 26th, starts at 5pm UK/Irish time and runs for 4 hours (12:00 EST). Don’t worry if this time doesn’t suit; the sessions will be available to stream afterwards. Those who tune in live will also have the opportunity to participate in Q&A.

Windows Server 2019 Announced for H2 2018

Last night, Microsoft announced that Windows Server 2019 would be released, generally available, in the second half of 2018. I suspect that the big bash will be Ignite in Orlando at the end of September, possibly with a release that week, but maybe in October – that’s been the pattern lately.

LTSC

Microsoft is referring to WS2019 as a “long term servicing channel release”. When Microsoft started the semi-annual channel – a Server Core build of Windows Server released every 6 months to Software Assurance customers that opt into the program – they promised that the normal builds would continue every 3 years. These LTSC releases would be approximately the sum of the previous semi-annual channel releases plus whatever new stuff they cooked up before the launch.

First, let’s kill some myths that I know are being spread by “someone I know that’s connected to Microsoft” … it’s always “someone I know” that is “connected to Microsoft” and it’s always BS:

  • The GUI is not dead. The semi-annual channel release is Server Core, Nano has been containers-only since last year, and the GUI is an essential element of the LTSC.
  • This is not the last LTSC release. Microsoft views (and recommends) LTSC for non-cloud-optimised application workloads such as SQL Server.
  • No – Windows Server is not dead. Yes, Azure plays a huge role in the future, but Azure Stack and Azure are both powered by Windows, and hundreds of thousands, if not millions, of companies are still powered by Windows Server.

Let’s talk features now …

I’m not sure what’s NDA and what is not, so I’m going to stick with what Microsoft has publicly discussed. Sorry!

Project Honolulu

For those of you who don’t keep up with the tech news (that’s most IT people), Project Honolulu is a huge effort by MS to replace the Remote Server Administration Tools (RSAT) that you might know as “Administrative Tools” on Windows Server or on an admin PC. These ancient tools were built on MMC.EXE, which was deprecated with the release of W2008!

Honolulu is a whole new toolset built on HTML5 for today and the future. It’s not finished – being built with cloud practices, it never will be – but it’s getting there!

Hybrid Scenarios

Don’t share this secret with anyone … Microsoft wants more people to use Azure. Shh!

Some of the features we (at work) see people adopt first in the cloud are the hybrid services, such as Azure Backup (cloud or hybrid cloud backup), Azure Site Recovery (disaster recovery), and soon I think Azure File Sync (seamless tiered storage for file servers) will be a hot item. Microsoft wants it to be easier for customers to use these services, so they will be baked into Project Honolulu. I think that’s a good idea, but I hope it’s not a repeat of what was done with WS2016 Essentials.

ASR needs more than just “replicate me to the cloud” enabled on the server; that’s the easy part of the deployment that I teach in the first couple of hours in a 2-day ASR class. The real magic is building a DR site, knowing what can be replicated and what cannot (see domain controllers & USN rollback, clustered/replicating databases & getting fired), orchestration, automation, and how to access things after a failover.

Backup is pretty easy, especially if it’s just MARS. I’d like MARS to add backup-to-local-storage so it could completely replace Windows Server Backup. For companies with Hyper-V, there’s more to be done with Azure Backup Server (MABS) than just downloading an installer.

Azure File Sync also requires some thought and planning, but if they can come up with some magic, I’m all for it!

Security

In Hyper-V:

  • Linux will be supported with Shielded VMs.
  • VMConnect support is being added to Shielded VMs for support reasons – it’s hard to fix a VM if you cannot log into it via “console” access.
  • Encrypted Network Segments can be turned on with a “flip of a switch” for secure comms – that could be interesting in Azure!

Windows Defender ATP (Advanced Threat Protection) is a Windows 10 Enterprise feature that’s coming to WS2019 to help stop zero-day threats.

DevOps

The big bet on Containers continues:

  • The Server Core base image will be reduced from 5 GB by (they hope) 72% to speed up deployment time of new instances/apps.
  • Kubernetes orchestration will be natively supported – the container orchestrator that originated at Google appears to be the industry winner versus Docker Swarm and Mesos.

In the heterogeneous world, Linux admins will be getting Windows Subsystem for Linux (WSL) for a unified scripting/admin experience.
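
If WSL arrives on Windows Server the same way it did on Windows 10, enabling it should look something like the following – that’s an assumption on my part, not something Microsoft has documented for WS2019 yet:

# Assumes the same optional feature name as Windows 10 - not confirmed for WS2019
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux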

Hyper-Converged Infrastructure (HCI)

Storage Spaces Direct (S2D) has been improved, and more changes are coming to mature the platform in WS2019. In case you don’t know, S2D is a way to use local (internal) disks in 2+ (preferably 4+) Hyper-V hosts across a high-speed network (a virtual SAS bus) to create a single cluster with fault tolerance at the storage and server levels. By using internal disks, they can use cheaper SATA disks, as well as newer flash formats that don’t natively support sharing, such as NVMe.
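
For context, standing up S2D on a set of suitable hosts boils down to a handful of cmdlets. This is a rough sketch with placeholder host, cluster, and volume names, not a deployment guide:

# Validate, build the cluster with no shared storage, then enable S2D (placeholder names)
Test-Cluster -Node "HV1","HV2","HV3","HV4" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "HVC1" -Node "HV1","HV2","HV3","HV4" -NoStorage
Enable-ClusterStorageSpacesDirect

# Carve a mirrored CSV out of the auto-created pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV1" -FileSystem CSVFS_ReFS -Size 2TB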

The platform is maturing in WS2019, and Project Honolulu will add a new day-to-day management UI for S2D that is natively lacking in WS2016.

The Pricing

As usual, I will not be answering any licensing/pricing questions. Talk to the people you pay to answer those questions, i.e. the reseller or distributor that you buy from.

OK; let’s get to the messy stuff. Nothing has been announced other than:

It is highly likely we will increase pricing for Windows Server Client Access Licensing (CAL). We will provide more details when available.

So it appears that User CALs will increase in pricing. That is probably good news for anyone licensing Windows Server via processor (don’t confuse this with Core licensing).

When you acquire Windows Server through volume licensing, you pay for every pair of cores in a server (with a minimum of 16, which matched the pricing of WS2012 R2), PLUS you buy User CALs for every user authenticating against the server(s).

When you acquire Windows Server via Azure or through a hosting/leasing (SPLA) program, you pay for Windows Server based only on how many cores that the machine has. For example, when I run an Azure virtual machine with Windows Server, the per-minute cost of the VM includes the cost of Windows Server, and I do not need any Windows Server CALs to use it (RDS is a different matter).

If CALs are going up in price, then it’s probably good news for SPLA (hosting/leasing) resellers (hosting companies) and Azure where Server CALs are not a factor.

The Bits

So you want to play with WS2019? The first preview build (17623) is available as of last night through the Windows Server Insider Preview program. Anyone can sign up.


Would You Like To Learn About Azure Infrastructure?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Big Changes to Windows Server – Semi-Annual Channel

Microsoft has just announced that they are splitting Windows Server and System Center into two channels:

  • Long-Term Servicing Channel (aka Branch)
  • Semi-Annual Channel

Long-Term Servicing Channel

This is the program that we’ve been using for years. Going forward, we will get a new version of Windows Server every 2-3 years. This big-bang release is what we are used to. We’ll continue to get 5 years mainstream support and 5 years extended support, and recently Microsoft announced the option to pay for an extra 6 years of Premium Assurance support.

Existing installations of Windows Server will fall into this channel. This channel will continue to get the usual software updates and security updates every month.

Semi-Annual Channel

This is aimed at hosting companies, private cloud (Azure Stack), and other customers that desire the latest and greatest. In addition to the usual monthly updates, these customers will get an OS upgrade, similar to what happens with Windows 10 now, twice per year in the Spring and Autumn.  Each of these releases will have 18 months of support after the initial release. Most of the included features will be rolled up to create future Long-Term Servicing Channel builds. While the Long-Term Servicing Channel releases will probably continue to be named based on years, the Semi-Annual Channel will use build numbers. A theoretical release in September 2017 would be called version 1709, and a March release in 2018 would be called version 1803.

Customers who can avail of this option are:

  • Software Assurance customers
  • Azure marketplace
  • MSDN and similar programs

SPLA wasn’t mentioned, but surely this would have to be included for hosters?

Impact

The first word that came to my mind was “confusion”. Customers will be baffled by all this. MS wants to push out updates to more aggressive customers, but most companies are conservative with servers. The channel had to split. But it shall be fun to explain all of this … over and over … and over … and over … and again.

VMQ On Team Interface Breaking Hyper-V Networking

I recently had a situation where virtual machines on a Windows Server 2016 (WS2016) Hyper-V host could not communicate with each other. Ping tests were failing:

  • Extremely high latency
  • Lost packets

In this case, I was building a new Windows Server 2016 demo lab for some upcoming community events in The Netherlands and Germany, an updated version of my Hidden Treasures in Hyper-V talk that I’ve done previously at Ignite and TechEd Europe (I doubt I’ll ever do a real talk at Ignite again because I’m neither an MS employee nor a conference sponsor). The machine I’m planning on using for these demos is an Intel NUC – it’s small, powerful, and is built with lots of flash storage. My lab consists of some domain controllers, storage, and some virtualized (nested) hosts, all originally connected to an external vSwitch. I built my new hosts, but could not join them to the domain. I did a ping from the new hosts to the domain controllers, and the tests resulted in massive packet loss. Some packets got through, but with 3,000+ ms latency.

At first I thought that I had fat-fingered some IPv4 configurations. But I double- and triple-checked things. No joy there. And that didn’t make sense (did I mention that this was while having insomnia at 4am after doing a baby feed?). The usual cause of network problems is VMQ, so that was my next suspect. I checked NCPA.CPL for the advanced NIC properties of the Intel NIC and there was no sign of VMQ. That’s not always a confirmation, so I ran Get-NetAdapterAdvancedProperty in PowerShell. My physical NIC did not have VMQ features at all, but the team interface of the virtual switch did.

And then I remembered reading that some people found that the team interface (virtual NIC) of the traditional Windows Server (LBFO) team (not Switch-Embedded Teaming) had VMQ enabled by default and that it caused VMQ-style issues. I ran Set-NetAdapterAdvancedProperty to disable the relevant RegistryKeyword for VMQ while running a ping -t, and the result was immediate; my virtual switch was now working correctly. I know what you’re thinking – how can packets switching from one VM to another on the same host be affected by a NIC team? I don’t know, but they randomly are.
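
If you want to check for the same problem, something like this will show and then disable VMQ on the team interface; "vSwitchTeam" is a placeholder for whatever your team interface is called:

# Check whether the team interface exposes a VMQ setting (*VMQ)
Get-NetAdapterAdvancedProperty -Name "vSwitchTeam" -RegistryKeyword "*VMQ"

# Disable VMQ on the team interface only - leave the physical NICs alone
Set-NetAdapterAdvancedProperty -Name "vSwitchTeam" -RegistryKeyword "*VMQ" -RegistryValue 0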

I cannot comment on how this affects 10 GbE networking – the jerks at Chelsio didn’t release WS2016 drivers for the T4 NICs and I cannot justify a spend on new NICs for WinServ work right now (it’s all Azure, all the time these days).  But if you are experiencing weird virtual switch packet issues, and you are using a traditional NIC team, then see if VMQ on the team interface (the one connected to your virtual switch) is causing the issue.

Ignite 2016 – Introducing Windows Server and System Center 2016

This session (original here) introduces WS2016 and SysCtr 2016 at a high level. The speakers were:

  • Mike Neil: Corporate VP, Enterprise Cloud Group at Microsoft
  • Erin Chapple: General Manager, Windows Server at Microsoft

A selection of other people will come on stage to do demos.

20 Years Old

Windows Server is 20 years old. Here’s how it has evolved:

[Image: timeline of Windows Server releases over 20 years]

The 2008 release brought us the first version of Hyper-V. Server 2012 brought us the same Hyper-V that was running in Azure. And Windows Server 2016 brings us the cloud on our terms.

The Foundation of Our Cloud

The investment that Microsoft made in Azure is being returned to us. Lots of what’s in WS2016 came from Azure, and combined with Azure Stack, we can run Azure on-prem or in hosted clouds.

There are over 100 data centers in Azure across 24 regions. Windows Server is the platform that is used for Azure across all that capacity.

IT is Being Pulled in Two Directions – Creating Stresses

  • Provide secure, controlled IT resources (on prem)
  • Support business agility and innovation (cloud / shadow IT)

By 2017, 50% of IT spending will be outside of the organization.

Stress points:

  • Security
  • Data centre efficiency
  • Modernizing applications

Microsoft’s solution is unified management across:

  • Advanced multi-layer security
  • Azure-inspired, software-defined infrastructure
  • A cloud-ready application platform

Security

Mike shows a number of security breach headlines. IT security is a CEO issue – costs to a business of a breach are shown. And S*1t rolls downhill.

Multi-layer security:

  • Protect identity
  • Secure virtual machines
  • Protect the OS on-prem or in the cloud

Challenges in Protecting Credentials

Attack vectors:

  1. Social engineering is the one they see the most
  2. Pass the hash
  3. Admin = unlimited rights. Too many rights given to too many people for too long.

To protect against compromised admin credentials:


  • Credential Guard will protect ID in the guest OS
  • JEA limits rights to just enough to get the job done
  • JITA limits the time that an admin can have those rights

The solution closes the door on admin ID vulnerabilities.

Ryan Puffer comes on stage to do a demo of JEA and JITA. The demo is based on PowerShell:

  1. He runs Enter-PSSession to log into a domain controller (DNS server). Local logon rights normally mean domain admin.
  2. He cannot connect to the DC, because his current logon doesn’t have DC rights, so it fails.
  3. He tries again, adding -ConfigurationName to specify a JEA config for Enter-PSSession (see the sketch after this list), and he can get in. The JEA config was set up by a more trusted admin. The JEA authentication is done using a temporary virtual local account on the DC that resides nowhere else. This account exists only for the duration of the login session. Malware cannot use this account because it has limited rights (to this machine) and will disappear quickly.
  4. The JEA configuration has also limited rights – he can do DNS stuff but he cannot browse the file system, create users/groups, etc. His ISE session only shows DNS Get- cmdlets.
  5. He needs some modify rights. He browses to a Microsoft Identity Manager (MIM) portal and has some JITA roles that he can request – one of these will give his JEA temp account more rights so he can modify DNS (via a group membership). He selects one and has to enter details to justify the request. He puts in a time-out of 30 minutes – 31 minutes later he will return to having just DNS viewer rights. MFA via Azure can be used to verify the user, and manager approval can be required.
  6. He logs in again using Enter-PSSession with the JEA config. Now he has DNS modify rights. Note: you can whitelist and blacklist cmdlets in a role.
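
Here is a minimal sketch of the kind of command used in the demo; the server name ("DC01") and the JEA endpoint name ("DnsOperator") are placeholders I’ve made up for illustration:

# Connect to the DC using a constrained JEA endpoint (names are placeholders)
Enter-PSSession -ComputerName "DC01" -ConfigurationName "DnsOperator"

# Inside the session, only whitelisted cmdlets are available, e.g.:
Get-DnsServerResourceRecord -ZoneName "contoso.com"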

Back to Mike.

Challenges Protecting Virtual Machines

VMs are files:

  • Easy to modify/copy
  • Too many admins have access

Someone can mount a VMs disks or copy a VM to gain access to the data. Microsoft believes that attackers (internal and external) are interested in attacking the host OS to gain access to VMs, so they want to prevent this.

This is why Shielded Virtual Machines was invented – secure the guest OS by default:

  • The VM is encrypted at rest and in transit
  • The VM can only boot on authorised hosts

Azure-Inspired, Software-Defined

Erin Chapple comes on stage.

This is a journey that has been going on for several releases of Windows Server. Microsoft has learned a lot from Azure, and is bringing that learning to WS2016.

Increase Reliability with Cluster Enhancements

  • Cloud means more updates, with feature improvements. OS upgrades weren’t possible in a cluster before. In WS2016, we get cluster rolling upgrades. This allows us to rebuild a cluster node within a cluster, and run the cluster temporarily in mixed-version mode (see the cmdlet sketch after this list). Now we can introduce changes without buying new cluster h/w or incurring VM downtime. Risk isn’t an upgrade blocker.
  • VM resiliency deals with transient errors in storage, meaning a brief storage outage pauses a VM instead of crashing it.
  • Fault domain-aware clusters allows us to control how errors affect a cluster. You can spread a cluster across fault domains (racks) just like Azure does. This means your services can be spread across fault domains, so a rack outage doesn’t bring down a HA service.
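
As a sketch of that first bullet: once every node has been rebuilt with the new OS, the upgrade is committed with a single cmdlet, and it cannot be rolled back afterwards:

# Commit the cluster to the new functional level once all nodes run the new OS
Update-ClusterFunctionalLevel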


24 TB of RAM on a physical host and 12 TB RAM in a guest OS are supported. 512 physical LPs on a host, and 240 virtual processors in a VM. This is “driven by Azure” not by customer feedback.

Complete Software-Defined Storage Solution

Evolving Storage Spaces from WS2012/R2. Storage Spaces Direct (S2D) takes DAS and uses it as replicated/shared storage across servers in a cluster, that can either be:

  • Shared over SMB 3 with another tier of compute (Hyper-V) nodes
  • Used in a single tier (CSV, no SMB 3) of hyper-converged infrastructure (HCI)


Storage Replica introduces per-volume, sync/async, block-level, beneath-the-file-system replication to Windows Server. It doesn’t care what the source/destination storage is (it can be different in each site) as long as it is cluster-supported.

Storage QoS guarantees an SLA with min and max rules, managed from a central point, at the following scopes (see the sketch after this list):

  • Tenant
  • VM
  • Disk
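
As a sketch of what that central management looks like in PowerShell (the policy name and IOPS limits are purely illustrative):

# Create a min/max IOPS policy on the storage cluster (illustrative values)
New-StorageQosPolicy -Name "GoldTier" -MinimumIops 500 -MaximumIops 5000 -PolicyType Dedicated

# Assign the policy to a VM's virtual hard disks
Get-VM -Name "VM01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID (Get-StorageQosPolicy -Name "GoldTier").PolicyId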

The owner of S2D, Claus Joergensen, comes on stage to do an S2D demo.

  1. The demo uses latest Intel CPUs and all-Intel flash storage on 16 nodes in a HCI configuration (compute and storage on a single cluster, shared across all nodes).
  2. There are 704 VMs run using an open source tool called VMFleet.
  3. They run a profile similar to Azure P10 storage (each VHD has 500 IOPS). That’s 350,000 IOPS – which is trivial for this system.
  4. They change this to Azure P20: now each disk has 2,300 IOPS, summing 1.6 million IOPS in the system – it’s 70% read and 30% write. Each S2D cluster node (all 16 of them) is hitting over 100,000 IOPS, which is about the max that most HCI solutions claim.
  5. Claus changes the QoS rules on the cluster to unlimited – each VM will take whatever IOPS the storage system can give it.
  6. Now we see a total of 2.7 million IOPS across the cluster, with each node hitting 157,000 to 182,000 IOPS, at least 50% more than the HCI vendors claim.

Note the CPU usage for the host, which is modest. That’s under 10% utilization per node to run the infrastructure at max speed! Thank Storage Spaces and SMB Direct (RDMA) for that!

[Screenshot: per-node IOPS and host CPU utilization during the demo]

  1. Now he switches the demo over to read IO only.
  2. The stress test hits 6.6 million read IOPS, with each node offering between 393,000 and 433,000 IOPS – that’s 16 servers, no SAN!
  3. The CPU still stays under 10% per node.
  4. Throughput numbers will be shown later in the week.

If you want to know where to get certified S2D hardware, then you can get DataON from MicroWarehouse in Dublin (www.mwh.ie):


Nano Server

Nano Server is not an edition – it is an installation option. You can install a deeply stripped down version of WS2016, that can only run a subset of roles, and has no UI of any kind, other than a very basic network troubleshooting console.

It consumes just 460 MB of disk space, compared to 5.4 GB for Server Core (command prompt only). It boots in less than 10 seconds and has a smaller attack surface. Ideal scenario: born-in-the-cloud applications.

Nano Server is now launched in Current Branch for Business. If you install Nano Server, then you are forced into installing updates as Microsoft releases them, which they expect to do 2-3 times per year. Nano will be the basis of Microsoft’s cloud infrastructure going forward.

Azure-Inspired Software-Defined Networking

A lot of stuff from Azure here. The goal is that you can provision new networks in minutes instead of days, and have predictable/secure/stable platforms for connecting users/apps/data that can scale – the opposite of VLANs.

Three innovations:

  • Network Controller: From Azure, a fabric management solution (see the sketch after this list)
  • VXLAN support: Added to NVGRE, making the underlying transport less important and focusing more on the virtual networks
  • Virtual network functions: Also from Azure, getting firewall, load balancing and more built into the fabric (no, it’s not NLB or Windows Firewall – see what Azure does)
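
The Network Controller itself ships as a server role; adding the role is the easy part (a sketch below), while deploying and configuring the controller application on top of it is where the real work is:

# Add the Network Controller role to a WS2016 server
Install-WindowsFeature -Name NetworkController -IncludeManagementTools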

Greg Cusanza comes on stage – Greg has a history with SDN in SCVMM and WS2012/R2. He’s going to deploy the following:

[Diagram: the virtual network and services to be deployed]

That’s a virtual network with a private address space (NAT) with 3 subnets that can route and an external connection for end user access to a web application. Each tier of the service (file and web) has load balancers with VIPs, and AD in the back end will sync with Azure AD. This is all familiar if you’ve done networking in Azure Resource Manager (ARM).

  1. A bunch of VMs have been created with no network connections.
  2. He opens a PoSH script that will run against the network controller – note that you’ll use Azure Stack in the real world.
  3. The script runs in just over 29 seconds – all the stuff in the diagram is deployed, and the VMs are networked and have Internet connectivity. He can browse the net from a VM, and can browse the web app from the Internet – he proves that load balancing (a virtual network function) is working.

Now an unexpected twist:

  1. Greg browses a site and enters a username and password – he has been phished by a hacker and now pretends to be the attacker.
  2. He has discovered that the application can be connected to using remote desktop and attempts to sign in using the phished credentials. He signs into one of the web VMs.
  3. He uploads a script to do stuff on the network. He browses shares on the domain network. He copies ntds.dit from a DC and uploads it to OneDrive for a brute force attack. Woops!

This leads us to dynamic security (network security groups or firewall rules) in SDN – more stuff that ARM admins will be familiar with. He’ll also add a network virtual appliance (a specialised VM that acts as a network device, such as an app-aware firewall) from a gallery – which we know Microsoft Azure Stack will be able to syndicate from:


  1. Back in PoSH, he runs another script to configure network security groups, to filter traffic on a TCP/UDP port level.
  2. Now he repeats the attack – and it fails. He cannot RDP to the web servers, he couldn’t browse shared folders if he did, and he prevented outbound traffic from the web servers anyway (stateful inspection).

The virtual appliance is a network device that runs a customized Linux.

  1. He launches SCVMM.
  2. We can see the network in Network Service – so System Center is able to deploy/manage the Network Controller.

Erin finished by mentioning the free WS2016 Datacenter license offer for retiring vSphere hosts (“a free Datacenter license for every vSphere host that is retired”), good until June 30, 2017 – see www.microsoft.com/vmwareshift.

Cloud-Ready Application Platform

Back to Mike Neil. We now have a diverse set of infrastructure that we can run applications on:

[Slide: the range of infrastructure that applications run on]

WS2016 adds new capabilities for cloud-based applications. Containers are a huge thing for MSFT.

A container virtualizes the OS, not the machine. A single OS can run multiple Windows Server Containers – 1 container per app. So that’s a single shared kernel – great for internal & trusted apps, similar to containers that are available on Linux. Deployment is fast and you can get great app density. But if you need security, you can deploy compatible Hyper-V Containers. The same container images can be used. Each container has a stripped-down mini-kernel (see Nano) isolated by a Hyper-V partition, meaning that untrusted or external apps can be run safely, isolated from each other and the container host (either physical or a VM – we have nested Hyper-V now!). Another benefit of Hyper-V Containers is staggered servicing. Normal (Windows Server) Containers share the kernel with the container host – if you service the host then you have to service all of the containers at the same time. Because they are partitioned/isolated, you can stagger the servicing of Hyper-V Containers.

Taylor Brown (formerly of Hyper-V, now Principal Program Manager for Containers) comes on stage to do a demo.

[Screenshot: the dockerfile used in the demo]

  1. He has a VM running a simple website – a sample ASP.NET site in Visual Studio.
  2. In IIS Manager, he does a Deploy > Export Application, and exports a .ZIP.
  3. He copies that to a WS2016 machine, currently using 1.5 GB RAM.
  4. He shows us a “Docker File” (above) to configure a new container. Note how EXPOSE publishes TCP ports for external access to the container on TCP 80 (HTTP) and TCP 8172 (management). A PowerShell snap-in will run webdeploy and it will restore the exported ZIP package.
  5. He runs docker build -t mysite … with the location of the dockerfile (see the sketch after this list).
  6. A few seconds later a new container is built.
  7. He starts the container and maps the ports.
  8. And the container is up and running in seconds – the .NET site takes a few seconds to compile (as it always does in IIS) and the thing can be browsed.
  9. He deploys another 2 instances of the container in seconds. Now there are 3 websites and only 0.5 GB of extra RAM is consumed.
  10. He uses docker run --isolation=hyperv to get an additional Hyper-V Container. The same image is started … it takes an extra second or two because of “cloning technology that’s used to optimize deployment of Hyper-V Containers”.
  11. Two Hyper-V containers and 3 normal containers (that’s 5 unique instances of IIS) are running in a couple of minutes, and the machine has gone from using 1.5 GB RAM to 2.8 GB RAM.
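
For reference, the commands in this demo look roughly like the following when run from a PowerShell prompt on the container host; the image tag ("mysite") comes from the demo, but the paths and ports are placeholders:

# Build the image from the folder containing the dockerfile, then run containers from it
docker build -t mysite C:\docker\mysite

# A Windows Server Container and a Hyper-V Container from the same image
docker run -d -p 80:80 mysite
docker run -d -p 8080:80 --isolation=hyperv mysite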

Microsoft has been a significant contributor to the Docker open source project and one MS engineer is a maintainer of the project now. There’s a reminder that Docker’s enterprise management tools will be available to WS2016 customers free of charge.

On to management.

Enterprise-Class Data Centre Management

System Center 2016:

  • 1st choice for Windows Server 2016
  • Control across hybrid cloud with Azure integrations (see SCOM/OMS)

SCOM Monitoring:

  • Best of breed Windows monitoring and cross-platform support
  • Network monitoring and cloud infrastructure health
  • Best-practice for workload configuration

Mahesh Narayanan, Principal Program Manager, comes on stage to do a demo of SCOM. IT pros struggle with alert noise. That’s the first thing he wants to show us – it’s really a way to find what needs to be overridden or customized.

  1. Tune Management Packs allows you to see how many alerts are coming from each management pack. You can filter this by time.
  2. He clicks the Tune Alerts action. We see the alerts, and a count of each. You can then do an override (per object or group of objects).

Maintenance cycles create a lot of alerts. We expect monitoring to suppress these alerts – but it hasn’t yet! This is fixed in SCOM 2016:

  1. You can schedule maintenance in advance (yay!). You could match this to a patching cycle so WSUS/SCCM patch deployments don’t break your heart at 3am on a Saturday morning.
  2. Your objects/assets will automatically go into maintenance mode and have a not-monitored status according to your schedules.

All those MacGyver solutions we’ve cobbled together for stopping alerts while patching can be thrown out!

That was all for System Center? I am very surprised!

PowerShell

PowerShell is now open source.

  • DevOps-oriented tooling in PoSH 5.1 in WS2016
  • vNext Alpha on Windows, macOS, and Linux
  • Community supported releases

Joey Aiello, Program Manager, comes up to do a demo. I lose interest here. The session wraps up with a marketing video.

Windows Server 2016 is Launched But NOT Generally Available Yet

Microsoft did the global launch of Windows Server 2016 at Ignite in Atlanta yesterday. But contrary to tweets about an eval edition that you can download (but is useless for production), neither Windows Server 2016 nor System Center 2016 is actually generally available; they are on the October price lists but won’t be GA until “mid October”. You won’t find WS2016 or System Center 2016 GA yet on:

  • Azure
  • MSDN
  • MVLS

So you’ll have to wait until mid-October? Why the wait? It’s obvious, if you think about the last 2 releases. What do you do after installing a new OS from media? You run Windows Update. And what got installed after deploying GA bits for WS2012 or WS2012 R2? Hundreds of MB of a monster update. Microsoft is probably hard at work on a monster cumulative update that they need to get right for the GA of WS2016. It must not be ready yet, and they aren’t 100% sure when it will be, and that’s why the GA timing mentioned in public is mid-October and not a specific date.

Azure Stack was previously announced for GA in mid-2017. TP2 (which has been in TAP for a while, I am led to believe) was made public yesterday with a number of improvements.

It was a quiet launch … I think the OS was mentioned once in the keynote, and there were no demos (a pity, because the demos are actually pretty stunning this time around). System Center 2016 was also launched. Some in the media might use this quiet launch to continue their theory that Windows Server is walking dead and being replaced by Azure. That could not be further from the truth. Microsoft very much pushed the hybrid cloud message (in the following session by Jason Zander), powered by WS2016, saying that their unique selling point will continue for enterprises that can never move (some or all services) to a public cloud. And, of course, this is why Azure Stack has been developed and is getting a lot of attention. There are countless sessions on WS2016, System Center 2016, and Azure Stack during the week at Ignite. Don’t forget, also, that hybrid goes down to the code – the same people work on Azure and Windows Server because they are one and the same.

The cloud-cloud-cloud keynote was a call to action rather than a sales pitch. It’s time to start learning the cloud – your bosses and your customers want it, so it’s pointless to fight yourself out of a job. Most of the cloud solutions they showed actually supplemented on-prem installations rather than replaced them.

Cannot Bind Parameter ‘ForegroundColor’ Error When Creating Nano Server Image

You get the following error when running New-NanoServerImage in PowerShell ISE to create a new Windows Server 2016 (WS2016) Nano Server image:

Write-W2VError : Cannot bind parameter ‘ForegroundColor’. Cannot convert the “#FFFF0000” value of type
“System.Windows.Media.Color” to type “System.ConsoleColor”.


The fix (during TP5) is to not use PowerShell ISE. Use an elevated PowerShell prompt instead. The reasoning is explained here by “daviwil”.
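
For reference, a working run from an elevated PowerShell console looks something like this. It’s a sketch based on the RTM-era syntax (the TP builds differed slightly), and the paths and names are placeholders:

# Run from an elevated PowerShell console, not ISE (paths and names are placeholders)
Import-Module "D:\NanoServer\NanoServerImageGenerator\NanoServerImageGenerator.psm1"
New-NanoServerImage -MediaPath "D:\" -BasePath "C:\NanoBase" -TargetPath "C:\Nano\Nano1.vhdx" -ComputerName "Nano1" -DeploymentType Guest -Edition Standard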