Private Connections to Azure PaaS Services

In this post, I’d like to explain a few options you have to get secure/private connections to Azure’s platform-as-a-service offerings.

ExpressRoute – Microsoft Peering

ExpressRoute comes in a few forms, but at a basic level, it’s a “WAN” connection to Azure virtual networks via one or more virtual network gateways; customers use this private peering to connect on-premises networks to Azure virtual networks over an SLA-protected private circuit. However, there is another form of peering that you can do over an ExpressRoute circuit called Microsoft peering. This is where you can use your private circuit to connect to Microsoft cloud services that are normally reached over the public Internet. What you get (with a route filter sketch after the list):

  • Private access to PaaS services from your on-premises networks.
  • Access to an entire service, such as Azure SQL.
  • A wide array of Azure and non-Azure Microsoft cloud services.
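
You control what Microsoft peering advertises to you with a route filter on the circuit. Here is a minimal sketch of one as an ARM resource – the name is made up and the BGP community value is illustrative only, so look up the correct community for your service/region in the route filter documentation before using anything like this:

        {
            "apiVersion": "2019-04-01",
            "type": "Microsoft.Network/routeFilters",
            "name": "MyRouteFilter",
            "location": "westeurope",
            "properties": {
                "rules": [
                    {
                        "name": "AllowSelectedServices",
                        "properties": {
                            "access": "Allow",
                            "routeFilterRuleType": "Community",
                            // Illustrative value - each Microsoft service/region has its own BGP community
                            "communities": [ "12076:5010" ]
                        }
                    }
                ]
            }
        }

You then associate the route filter with the Microsoft peering on the ExpressRoute circuit, and only the routes for the allowed communities are advertised to your network.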

FYI, Office 365 is often mentioned here. In theory, you can access Office 365 over Microsoft peering/ExpressRoute. However, the Office 365 group must first grant you permission to do this – the last time I checked, you had to show legal proof of a regulatory need for private access to cloud services.

Service Endpoint

Imagine that you are running some resources in Azure, such as virtual machines or App Service Environment (ASE); these are virtual network integrated services. Now consider that these services might need to connect to other services such as storage accounts, Azure SQL, or others. Normally, when a VNet-connected resource communicates with, say, Azure SQL, the packets will be routed to “Internet” via the 0.0.0.0/0 default route for the subnet – “Internet” is everywhere outside the virtual network, not necessarily The Internet. The flow will hit the “public” Azure backbone and route to the Azure SQL compute cluster. There are two things to note about that flow:

  • It is indirect and introduces latency.
  • It traverses a shared network space.

A growing number of services, including storage accounts, Azure SQL, Cosmos DB, and Key Vault, have service endpoints available to them. You can enable a service endpoint anywhere in the route from the VM (or whatever) to “Internet” and the packets will “drop” through the service endpoint to the required Azure service – make sure that any firewall in the service accepts packets from the private subnet IP address of the source (VM or whatever). Now you have a more direct and more private connection to the platform service in Azure from your VNet. What you get (with a sketch after this list):

  • Private access to PaaS services from your Azure virtual networks.
  • Access to an entire service, such as Azure SQL, but you can limit this to a region.
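
Enabling an endpoint is a subnet-level setting. Here is a minimal sketch, as ARM JSON, of a subnet with the Azure SQL and Storage endpoints turned on – the virtual network and subnet names are made up for the example:

        {
            "apiVersion": "2019-04-01",
            "type": "Microsoft.Network/virtualNetworks/subnets",
            "name": "MyVNet/MySubnet",
            "properties": {
                "addressPrefix": "192.168.1.0/24",
                "serviceEndpoints": [
                    { "service": "Microsoft.Sql" },
                    { "service": "Microsoft.Storage" }
                ]
            }
        }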

Service Endpoint Trick #1

Did you notice in the previous section on service endpoints that I said:

You can enable a service endpoint anywhere in the route from the VM (or whatever) to “Internet”

Imagine you have a complex network and not everyone enables service endpoints the way that they should. But you manage the firewall, the public IPs, and the routing. Well, my friend, you can force traffic to supported Azure platform services via service endpoints. If you have a firewall, then your routes to “Internet” should direct outbound traffic through the firewall. In the firewall (frontend) subnet, you can enable all the Azure service endpoints. Now when packets egress the firewall, they will “drop” through the service endpoints and to the desired Azure platform service, without ever reaching “Internet”.
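
The routing part of that trick is just a user-defined route. A sketch of the route table that you would associate with each workload subnet – the firewall’s private IP address (10.0.1.4 here) is an assumption for the example:

        {
            "apiVersion": "2019-04-01",
            "type": "Microsoft.Network/routeTables",
            "name": "ViaFirewallRouteTable",
            "location": "westeurope",
            "properties": {
                "routes": [
                    {
                        "name": "DefaultViaFirewall",
                        "properties": {
                            "addressPrefix": "0.0.0.0/0",
                            "nextHopType": "VirtualAppliance",
                            // The private IP address of the firewall's frontend
                            "nextHopIpAddress": "10.0.1.4"
                        }
                    }
                ]
            }
        }

With the service endpoints enabled on the firewall subnet itself, traffic to the platform services drops through there instead of continuing to “Internet”.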

Service Endpoint Trick #2

You might know that I like Azure Firewall. Here’s a trick that the Azure networking teams shared with me – it’s similar to the above one but is for on-premises clients trying to access Azure platform services.

You’ve got a VPN connection to a complex virtual network architecture in Azure. And at the frontend of this architecture is Azure Firewall, sitting in the AzureFirewallSubnet; in this subnet you enabled all the available service endpoints. Let’s say that someone wants to connect to Azure SQL using Power BI on their on-premises desktop. Normally that traffic will go over the Internet. What you can do is configure name resolution on your network (or PC) for the database to point at the private IP address of the Azure Firewall. Now Power BI will forward traffic to Azure Firewall, which will relay the traffic to Azure SQL via the service endpoint. What you get (with a DNS sketch after the list):

  • Private access to PaaS services from your on-premises or Azure networks.
  • Access to individual instances of a service, such as an Azure SQL server.
  • A growing number of Azure-only services that support service endpoints.
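
For Azure-based clients, one way to do the name resolution piece is a private DNS zone that overrides the public zone – a sketch with made-up names and IP. Note that the zone must also be linked to your virtual network(s), and that any SQL server name you don’t add to the zone will stop resolving from those networks:

        {
            "apiVersion": "2018-09-01",
            "type": "Microsoft.Network/privateDnsZones",
            "name": "database.windows.net",
            "location": "global",
            "properties": {}
        },
        {
            "apiVersion": "2018-09-01",
            "type": "Microsoft.Network/privateDnsZones/A",
            "name": "database.windows.net/yourazuresqlsvr",
            "properties": {
                "ttl": 3600,
                // Points the database name at the firewall's private IP address
                "aRecords": [ { "ipv4Address": "10.0.1.4" } ]
            }
        }

On-premises clients would instead need the override on your own DNS servers (or, crudely, in a hosts file).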

Private Link

In this post, I’m focusing on only one of the 3 current scenarios for Private Link, which is currently an unsupported preview in limited US regions only, for a limited set of platform services – in other words, it’s early days.

This approach aims to give a similar solution to the above “Service Endpoint Trick #2” without the use of trickery. You can connect an instance of an Azure platform service to a virtual network using Private Link. That instance will now have a private IP address on the VNet subnet, making it fully routable on your virtual network. The private link gets a globally unique record in the Microsoft-managed privatelink.database.windows.net DNS zone. For example, your Azure SQL server would now be resolvable to the private IP address of the private link as yourazuresqlsvr.privatelink.database.windows.net. Now your clients, be they in Azure or on-premises, can connect to this DNS name/IP address to connect to this Azure SQL instance. What you get (with a sketch after the list):

  • Private access to PaaS services from your on-premises or Azure networks.
  • Access to individual instances of a service, such as an Azure SQL server.
  • (PREVIEW LIMITATIONS) A limited number of platform services in limited US-only regions.
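
For the curious, a private endpoint is its own ARM resource. A rough sketch from the preview timeframe, with made-up names, connecting an Azure SQL server to a subnet:

        {
            "apiVersion": "2019-04-01",
            "type": "Microsoft.Network/privateEndpoints",
            "name": "MySqlPrivateEndpoint",
            "location": "eastus",
            "properties": {
                "subnet": {
                    "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'MyVNet', 'MySubnet')]"
                },
                "privateLinkServiceConnections": [
                    {
                        "name": "SqlConnection",
                        "properties": {
                            "privateLinkServiceId": "[resourceId('Microsoft.Sql/servers', 'yourazuresqlsvr')]",
                            // The sub-resource of the service to connect to
                            "groupIds": [ "sqlServer" ]
                        }
                    }
                ]
            }
        }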

Migrating Azure Firewall To Availability Zones

Microsoft recently added support for availability zones to Azure Firewall in regions that offer this higher level of SLA. In this post, I will explain how you can convert an existing Azure Firewall to availability zones.

Before We Proceed

There are two things you need to understand:

  1. If you have already deployed and configured Azure Firewall then there is no easy switch to turn on availability zones. What I will be showing is actually a re-creation.
  2. You should do a “dress rehearsal” – test this process and validate the results before you do the actual migration.

The Process

The process goes as follows:

  1. Plan a maintenance window when the Azure Firewall (and dependent communications) will be unavailable for 1 or 2 hours. Really, this should be very quick but, as Scotty told Geordi La Forge, a good engineer overestimates the effort, leaves room for the unexpected, and hopefully looks like a hero if all goes to the unspoken plan.
  2. Freeze configuration changes to the Azure Firewall.
  3. Perform a backup of the Azure Firewall.
  4. Create a test environment in Azure – ideally a dedicated subscription/virtual network(s) minus the Azure Firewall (see the next step).
  5. Modify the JSON file to include support for availability zones.
  6. Restore the Azure Firewall backup as a new firewall in the test environment.
  7. Validate that the new firewall has availability zones and that the rules configuration matches that of the original.
  8. Confirm & wait for the maintenance window.
  9. Delete the Azure Firewall – yes, delete it.
  10. Restore the Azure Firewall from your modified JSON file.
  11. Validate the restore.
  12. Celebrate – you have an Azure Firewall that supports multiple zones in the region.

Some of the Technical Bits

The processes of backing up and restoring the Azure Firewall are covered in my post here.

The backup is a JSON export of the original Azure Firewall, describing how to rebuild and re-configure it exactly as is – without support for availability zones. Open that JSON and make 2 changes.

The first change is to make sure that the API for deploying the Azure Firewall is up to date:

        {
            "apiVersion": "2019-04-01",
            "type": "Microsoft.Network/azureFirewalls",

The next change is to instruct Azure which of the numbered availability zones (1, 2, and 3) you want to use in the region:

        {
            "apiVersion": "2019-04-01",
            "type": "Microsoft.Network/azureFirewalls",
            "name": "[variables('FirewallName')]",
            "location": "[variables('RegionName')]",
            "zones": [
                "1",
                "2",
                "3"
            ],
            "properties": {
                "ipConfigurations": [
                    {

And that’s that. When you deploy the modified JSON, the new Azure Firewall will exist in all three zones.

Note that you can use this method to place an Azure Firewall into a single specific zone.
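
For example, to pin the firewall to zone 2 only (an arbitrary choice for illustration), the zones array would contain a single entry:

            "zones": [
                "2"
            ],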

Costs Versus SLAs

A single-zone Azure Firewall has a 99.95% SLA. Using 2 or 3 zones will increase the SLA to 99.99% – in a 30-day month, that’s the difference between roughly 21.6 minutes and 4.3 minutes of permitted downtime. You might argue “what’s the point?”. I’ve witnessed a data centre (actually, it was a single storage cluster) in an Azure region go down. That can have catastrophic results on a service. It’s rare but it’s bad. If you’re building a network where the Azure Firewall is the centre of security, then it becomes mission critical and should, in my opinion, span availability zones – not for the contractual financial protections in an SLA but for protecting mission-critical services. That protection comes at a cost – you’ll now incur the micro-costs of data flows between zones in a region. From what I’ve seen so far, that’s a tiny number, and a company that can afford a firewall will easily absorb that relatively low extra cost.

Why Choose the Azure Firewall over a Virtual Firewall Appliance?

In this post, I will explain why you should choose Azure Firewall over third-party firewall network virtual appliances (NVAs) from the likes of Cisco, Palo Alto, Check Point, and so on.

Microsoft’s Opinion

Microsoft has a partner-friendly line on Azure Firewall versus third parties. Microsoft says that third-party solutions offer more than Azure Firewall, and if you want, you can use them side by side.

Now that’s out of the way, let me be blunt … like I’d be anything else! 😊

The NVA Promise

At their base, firewalls block or allow TCP/UDP/etc. and do NAT. Some firewalls offer a “security bundle” of extra features such as:

  • Malware scanning based on network patterns
  • Download scanning, including zero-days (detonation chamber)
  • Browser URL logging & filtering

But those cool things either make no sense in Azure or are just not available from the NVA vendors in their cloud appliances. So what you are left with is central logging and filtering.

Documentation

With the exception of Palo Alto (their whitepaper for Azure is very good – not perfect) and maybe Check Point, the vendors have pretty awful documentation. I’ve been reading a certain data centre mainstay’s documents this week and they are incomplete and rubbish.

Understanding of Azure

It’s quite clear that some of the vendors are clueless about The Cloud and/or Azure. Every single vendor has written docs about deploying everything into a single VNet – if you can afford NVAs then you are not putting all your VMs into a single VNet (see hub & spoke VNet peering). Some have never heard of availability zones – if you can afford NVAs then you want as high an SLA as you can get. Most do not offer scale-out (active/active clusters) – so a single VM becomes your bottleneck on VM performance (3000 Mbps in a D3_v2). Some don’t even support highly available firewall clusters – so a single VM becomes the single point of failure in your entire cloud network! And their lack of documentation or understanding of VNet peering or route tables in a large cloud deployment is laughable.

The Comparison

So, what I’m getting at is that the third-party NVAs suck. Azure Firewall isn’t perfect either, but it’s a true cloud platform service and it is improving fast – just last night Microsoft announced Threat Intelligence-Based Filtering, and Service Tags Filtering appeared recently. I know more things are on the way too 😊

Here is my breakdown of how Azure Firewall stacks up against firewall NVAs:

|            | Azure Firewall             | NVA                          |
| Deployment | Platform                   | Linux VM + Software          |
| Licensing  | Consumption: instance + GB | Linux VM + Software          |
| Scaling    | Automatic                  | Add VMs + Software           |
| Ownership  | Set & monitor              | Manage VM / OS / Software    |
| Layer 7    | Logging & filtering        | Potentially* deep inspection |
| Networking | 1 subnet & PIP             | 1+ subnets & 1 PIP           |
| Complexity | Simple                     | Difficult                    |

I know: you laugh when you hear “Microsoft” and “Firewall” in the same sentence. You think of ISA Server. Azure Firewall is different. This is baked into the fabric of Azure, the strategic future of Microsoft. It is already rapidly improving, and it does more than the third parties.

Heck, what does the third party offer compared to NSGs? NSGs filter TCP/UDP, they can log to a storage account, you can centrally log using Event Hubs, and you can do advanced reporting/analysis using NSG Flow Logs with Azure Monitor Logs (Log Analytics). Azure Firewall takes that another step with a hub deployment, an understanding of HTTP/S, and is now using machine learning for dynamic threat prevention!

My Opinion

Some people will always prefer a non-Microsoft firewall. But my counter would be, what are you getting that is superior – really? With Azure Firewall, I create a firewall, set my rules, configure my logging, and I’m done. Azure Firewall scales and it is highly available. Logging can be done to storage accounts, event hubs (SIEM), and Azure Monitor Logs. And here’s the best bit … it is SIMPLE to deploy and there is almost no cost of ownership. Compare that to some of the HACK solutions from the NVA vendors and you’d laugh.

The Azure Firewall was designed for The Cloud. It was designed for the way that Azure works. And it was designed for how we should use The Cloud … at scale. And that scale isn’t just about Mbps, but in terms of backend services and networks. From what I have seen so far, the same cannot be said for firewall NVAs. For me, the decision is easy: Azure Firewall. Every time.

Microsoft Ignite–Building Enterprise Grade Applications With Azure Networking’s Delivery Suite

Speakers: Daniel Grickholm & Amit Srivastava

I arrived late to this session after talking to some product group people in the expo hall.

Application Gateway Demo

We see the number of instances dynamically increase and cool down – I think there was an app on Kubernetes in the background.

Application Gateway

Application Gateway (v2) ingress controller for AKS.

  • Attach WAG to AKS clusters.
  • Load balance from the Internet to pods
  • Supports features of the K8s ingress resource – TLS, multi-site, and path-based routing

Demo: we see a K8s containers app published via the WAG. The backend pool is shown – IPs of containers. Deleting the app in K8s removes the backend pool registration from the WAG (this fails in the demo).

Web Application Firewall

Demo – WAF

App behind a firewall with no exclusion parameters. The backend pool is a simple PHP application. A second firewall uses the same backend VM as a backend pool – a scan exclusion is set up to ignore any field which matches a “comments” string. The second one allows a comment post; the first one does not.

Azure Front Door

Get performance closer to the customer. Front Door runs in edge sites, not the Azure data centers.

Once you hit an edge site via front door, you are on the Azure WAN.

ADN = application delivery network

Big focus on SLA, HA, and performance. Built for Office.

5 years old and mature.

Can work in conjunction with WAG, even if there is some overlap, e.g. SSL termination.

What will be in the next demo:

He has an app for the USA in Central US and another for the UK deployed in UK South. He shows the Front Door creation – name/resource group; the configuration screen during creation is a bit different from other Azure resources. You create a global CNAME and session affinity in front end hosts. You create backends – app services, gateways, etc. You can set up host headers for custom domains, priority, port translation, priority for failover, and weight for load balancing. You can add health probes to the backend pools, with a URL path, HTTP/S, and a configurable interval. Finally, you create a routing rule; this maps frontend hosts to backend pools. You can set if it should be HTTP and/or HTTPS.

He skips to one he created earlier. When he browses the two apps that are in it, he is sent to the closest instance – in Central US. You can set up rules to block certain countries.

You can implement rate limiting and policies for fairness.

You can implement URL rewrites to map to a different path on the web servers.

This is like Traffic Manager + WAG combined at the edges of the Azure WAN.

Front Door load balances between regions. WAG load balances inside the region – that’s why they work together.

Adding Address Spaces To An Azure Virtual Network

Have you ever run out of addresses in an Azure virtual network? Have you ever needed to add a different scope or address space to an existing Azure virtual network? If so, this post is for you.

Quite honestly, I did not know that this was possible until recently – it’s a setting in an Azure virtual network that I have never used or even looked at:

When you create a virtual network, you give it an address space. Typically that will be a 10.x.x.x range, because that’s what the Azure Portal steers you towards and it offers a lot of address space to carve up. As an example, I created a virtual network with an address space of 192.168.1.0/24, one that should be very familiar to you. And the blades for setting up the virtual network created a single subnet consuming all of that space. What if I wanted to add another subnet? I used to think that it wasn’t possible, but I was wrong.

You can click Address Space in the Settings of the virtual network and add extra address spaces. For example, I added 10.0.0.0/16 and 172.16.0.0/16 (extreme but vivid examples) to my virtual network. If that was an on-premises network, based on VLANs and routing, then life would get complicated. But this is software-defined networking. These addresses are more for our comfort than for the “machine” that runs the network. In the end, NVGRE, which powers the Azure network, copies packets from a source NIC to a destination NIC and abstracts the underlying physical complexity through encapsulation (dig up Damian Flynn’s old NVGRE presentations on VMM logical software defined networks). In short … you add these address spaces, then create subnets, and the subnets will route automatically across those spaces.

If you go into Subnets, you can now create subnets within the address spaces of the virtual network and they just route.
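
If you prefer to do this as code, here is a minimal sketch of the whole thing in ARM JSON – one virtual network with the three address spaces from my example and a subnet in two of them (names are made up):

        {
            "apiVersion": "2019-04-01",
            "type": "Microsoft.Network/virtualNetworks",
            "name": "MyVNet",
            "location": "westeurope",
            "properties": {
                "addressSpace": {
                    "addressPrefixes": [
                        "192.168.1.0/24",
                        "10.0.0.0/16",
                        "172.16.0.0/16"
                    ]
                },
                "subnets": [
                    { "name": "Subnet1", "properties": { "addressPrefix": "192.168.1.0/24" } },
                    { "name": "Subnet2", "properties": { "addressPrefix": "172.16.1.0/24" } }
                ]
            }
        }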

To prove this simplicity, I deployed a VM in 192.168.1.0/24 and another in 172.16.1.0/24. I modified Windows Firewall to allow ICMP in (ping) and then ran some ping and tracert tests between the two machines in different address spaces. In a normal VLAN world, the results would illustrate the underlying complexity. In Azure’s software defined network, these are just 2 subnets in the same virtual network.

Pretty cool, right?

Azure Traffic Manager: Geography Versus Latency

A recent #AzureTrivia question on Twitter asked how you would configure Azure Traffic Manager to redirect clients to the closest endpoint (a place hosting a web application). That question made me go hmm – how do you define closest?

Defining Closeness

Do you measure closeness by kilometres as the crow flies or on the road? Or do you measure closeness by how packets travel across the Internet, from the client to the actual Azure data centre? Here’s a story I tell in my Azure training when talking about this topic.

I once worked for a hosting company in Dublin, Ireland. It was the end of a workday in December and we were all excited because it was the night of our Christmas party. We were going to a restaurant in the city and the MD was paying for everything. Fun times! Sales, engineering, support, etc. were on the top floor, and we piled down the stairs to the NOC to get the folks who were coming off their shift. A few of us walked into the NOC and the staff were in a bit of a tizzy. A customer, not very far away from us, claimed that we were offline. Earlier that year, we did have a catastrophic outage caused by an electrician’s mistake, so we were a bit touchy about things like this. Straight away, us engineers ran back upstairs and started doing tests. The networking guys quickly verified that we were actually online, but the customer was adamant. NOC got the customer (in Ireland, remember) to run a tracert. We quickly found that the customer’s ISP connected to the rest of the Internet in Germany, and that there was a router fault in Germany that had nothing to do with us – there was an infinite loop and packets were timing out.

So this customer, only a few kilometres from us, was connected to the rest of the world through Germany. We were geographically close to the customer, but in terms of latency, the customer could have had a “closer” hosting company in Germany. When you use a phrase such as “closest” in networking, that typically means latency, and is nothing to do with an atlas or map book.

Controlling Traffic Manager

Traffic Manager is a DNS redirection feature in Azure for services running across multiple Azure/other locations. The redirection of each Traffic Manager profile works in one of 4 ways:

  • Priority: You can think of this as a failover method. Traffic goes to endpoint 1, if that fails it goes to endpoint 2. If endpoint 2 fails, it goes to endpoint 3, and so on.
  • Weighted: This is a weight-based distribution method, i.e. load balancing. You might set one endpoint with a weight of 40 (40% in this case) and two other endpoints each with a weight of 30 (30%).
  • Performance: I’ll use Microsoft’s definition here … when you have endpoints in different geographic locations and you want end users to use the “closest” endpoint in terms of the lowest network latency.
  • Geographic: Using Microsoft’s definition again … users are directed to specific endpoints (Azure, External, or Nested) based on which geographic location their DNS query originates from.

So if you want to configure Traffic Manager to send clients to the closest Azure region, you use the Performance routing method.
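
As a sketch, the routing method is a single property on the Traffic Manager profile – the names here are made up, and the endpoints (one per region) would be added under the profile:

        {
            "apiVersion": "2018-04-01",
            "type": "Microsoft.Network/trafficManagerProfiles",
            "name": "MyWebApp",
            "location": "global",
            "properties": {
                // Choose Priority, Weighted, Performance, or Geographic
                "trafficRoutingMethod": "Performance",
                "dnsConfig": {
                    "relativeName": "mywebapp",
                    "ttl": 30
                },
                "monitorConfig": {
                    "protocol": "HTTPS",
                    "port": 443,
                    "path": "/"
                }
            }
        }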

In my above Europe example, I might have a web application running in North Europe (Dublin) and West Europe (Netherlands), unified and abstracted at the DNS level by Traffic Manager. If I set Geographic as the routing method, the customer would normally be sent to North Europe. If I set the routing method as Performance, the customer would normally be sent to West Europe because it is closer in terms of latency.

Want to Learn More Azure Stuff Like This?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in London on July 5-6, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

First Cloud Mechanix Azure Course Completed

Last week, I delivered my first ever Cloud Mechanix Azure training course, to a full room in the Lancaster Gate area of London, UK.

It was a jam-packed full 2 days of Azure storage, networking, virtual machines, backup, DR, security, and management, with lots of hands-on labs. Half the attendees were from the UK, the rest from countries such as Denmark, Netherlands, Belgium, and even Canada! I had a lot of fun teaching the class – there were lots of questions and laughs. And as often happens in these classes, the interactions led me to pick up a couple of ideas from the attendees.

In my class, everyone gets the hands-on labs a few days before the event. That allows them to get their laptops ready. On the day, they get copies of the slides so they can follow along or make notes on their laptops – the labs and slides are updated with the latest information that I have. The goal of the class isn’t to teach you where to click, but why to click. In the cloud, things move and get renamed, so detailed instructions age very quickly. But what lasts is understanding the why. Not everyone got to finish the hands-on labs, but I am available to help the attendees complete them.

If this course sounds interesting to you, then we have another class running in Amsterdam in April. Some tweaks are being made to the labs/slides (which the London class will be getting too) and, as always, the April class will be getting the latest that I can share on Azure.

Azure VMs–Block Outbound Traffic to the Internet (Updated)

In theory, it was possible to deny all outbound traffic to the Internet from an Azure VM. In theory, I can also place a loaded gun to my head, but my doctor disapproves of that.

Here’s what would happen:

  • You created an outbound rule to Deny all traffic to a service tag (location) called Internet.
  • The VM worked fine … for a while.
  • The VM was rebooted, maybe for a guest OS patch cycle.
  • The VM failed to boot.
  • Your boss screamed at you, if you were lucky.

The problem is that Azure included all Azure services under the service tag of “Internet”. And Azure VMs need to talk to Azure to boot up – to be specific, they need to talk to Azure Storage if the IaaSDiagnostics (Azure Performance Diagnostics) extension is configured. If a VM can’t talk to that storage account, the VM will fail to boot. There was a scripted workaround, but it was far from pretty.

Recently, Microsoft made Network Security Group service tags generally available. Service tags take those old locations and expand them to more than just Virtual Network, Load Balancer (probe), and Internet. Now you can specify Azure Storage (storage accounts) and Azure SQL services, globally and locally (a specific region).

So for example, I can let a VM connect to (Azure) Storage globally or in West Europe, or connect to Azure SQL in North Europe. Now we can block outbound access to the Internet, but still allow access to Azure Storage in the same region for diagnostics & metrics.
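
As a sketch, here is what that pair of outbound rules might look like inside an NSG – the names and priorities are my own, and you would tune the ports to your needs:

        "securityRules": [
            {
                "name": "AllowStorageWestEurope",
                "properties": {
                    "priority": 100,
                    "direction": "Outbound",
                    "access": "Allow",
                    "protocol": "*",
                    "sourceAddressPrefix": "VirtualNetwork",
                    "sourcePortRange": "*",
                    // Regional service tag: Azure Storage in West Europe only
                    "destinationAddressPrefix": "Storage.WestEurope",
                    "destinationPortRange": "443"
                }
            },
            {
                "name": "DenyInternet",
                "properties": {
                    "priority": 4000,
                    "direction": "Outbound",
                    "access": "Deny",
                    "protocol": "*",
                    "sourceAddressPrefix": "*",
                    "sourcePortRange": "*",
                    "destinationAddressPrefix": "Internet",
                    "destinationPortRange": "*"
                }
            }
        ]

The allow rule has the lower priority number, so it is processed before the deny rule.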

I’ve tested, and yes, my VM rebooted 😊

Was This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.

Microsoft Azure Started Patching Reboots Yesterday

Contrary to a previous email that I received, Microsoft started rebooting Azure VMs yesterday, instead of the 9th. Microsoft also confirmed that this is because of the Intel CPU security flaw. The following email was sent out to customers:

Dear Azure customer,

An industry-wide, hardware-based security vulnerability was disclosed today. Keeping customers secure is always our top priority and we are taking active steps to ensure that no Azure customer is exposed to these vulnerabilities.

The majority of Azure infrastructure has already been updated to address this vulnerability. Some aspects of Azure are still being updated and require a reboot of some customer VMs for the security update to take effect.

You previously received a notification about Azure planned maintenance. With the public disclosure of the security vulnerability today, we have accelerated the planned maintenance timing and began automatically rebooting the remaining impacted VMs starting at PST on January 3, 2018. The self-service maintenance window that was available for some customers has now ended, in order to begin this accelerated update.

You can see the status of your VMs, and if the update completed, within the Azure Service Health Planned Maintenance Section in the Azure Portal.

During this update, we will maintain our SLA commitments of Availability Sets, VM Scale Sets, and Cloud Services. This reduces the impact to availability and only reboots a subset of your VMs at any given time. This ensures that any solution that follows Azure’s high availability guidance remains available to your customers and users. Operating system and data disks on your VM will be retained during this maintenance.

You should not experience noticeable performance impact with this update. We’ve worked to optimize the CPU and disk I/O path and are not seeing noticeable performance impact after the fix has been applied. A small set of customers may experience some networking performance impact. This can be addressed by turning on Azure Accelerated Networking (Windows, Linux), which is a free capability available to all Azure customers.

This Azure infrastructure update addresses the disclosed vulnerability at the hypervisor level and does not require an update to your Windows or Linux VM images. However, as always, you should continue to apply security best practices for your VM images.

For more information, please see the Azure blog post.

That email reads like Microsoft has done quite a bit of research on the bug, the fix, and the effects of bypassing the flawed CPU performance feature. It also sounds like the only customers that might notice a problem are those with large machines with very heavy network usage.

Accelerated Networking is Azure’s implementation of Hyper-V’s SR-IOV. The virtual switch (in user mode in the host parent partition) is bypassed, and the NIC of the VM connects directly to a physical function (PF) on the host’s NIC via a virtual function (VF), using the physical NIC’s driver in the VM’s guest OS. There are fewer context switches because there is no loop from the NIC, via the VM bus, to the virtual switch, and then back to the host’s NIC drivers. Instead, with SR-IOV/Accelerated Networking, everything stays in kernel mode.
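
Enabling it is a single property on the NIC resource. A sketch with made-up names – note that the VM must be deallocated to change this, and only certain VM sizes and guest OSes are supported:

        {
            "apiVersion": "2018-01-01",
            "type": "Microsoft.Network/networkInterfaces",
            "name": "MyVmNic",
            "location": "westeurope",
            "properties": {
                // The Accelerated Networking (SR-IOV) switch
                "enableAcceleratedNetworking": true,
                "ipConfigurations": [
                    {
                        "name": "ipconfig1",
                        "properties": {
                            "subnet": {
                                "id": "[resourceId('Microsoft.Network/virtualNetworks/subnets', 'MyVNet', 'MySubnet')]"
                            },
                            "privateIPAllocationMethod": "Dynamic"
                        }
                    }
                ]
            }
        }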

If you find that your networking performance is impacted, and you want to enable Accelerated Networking, then there are a few things to note:

Thanks to Neil Bailie of P2V for spotting that I’d forgotten something in the below, stricken out, points:

Was This Post Useful?

If you found this information useful, then imagine what 2 days of training might mean to you. I’m delivering a 2-day course in Amsterdam on April 19-20, teaching newbies and experienced Azure admins about Azure Infrastructure. There’ll be lots of in-depth information, covering the foundations, best practices, troubleshooting, and advanced configurations. You can learn more here.