Ignite 2015–Stretching Failover Clusters and Using Storage Replica in Windows Server 2016

Speakers: Elden Christensen & Ned Pyle, Microsoft

A pretty full room to talk fundamentals.

Stretching clusters has been possible since Windows 2000, making use of partners. WS2016 makes it possible to do this without those partners, and it’s more than just HA, but also a DR solution. There is built-in volume replication so you don’t need to use SAN or 3rd-party replication technologies, and you can use different storage systems between sites.

Assuming: You know about clusters already – not enough time to cover this.

Goal: To use clusters for DR, not just HA.

RTO & RPO

  • RTO: Accepted amount of time that services are offline
  • RPO: Accepted amount of data loss, measured in time.
  • Automated failover: manual invocation, but automated process
  • Automatic failover: a heartbeat failure automatically triggers a failure
  • Stretch clusters can achieve low RPO and RTO
  • Can offer disaster avoidance (new term) ahead of a predicted disaster. Use clustering and Hyper-V features to move workloads.

Terminology

  • Stretch cluster. What used to be called a multi-site cluster, metro cluster, or geo cluster.

Stretch Cluster Network Considerations

Clusters are very aggressive out of the box: one heartbeat per second, and 5 missed heartbeats = failover. For stretch clusters, relax these in PowerShell: (Get-Cluster).SameSubnetThreshold = 10 and (Get-Cluster).CrossSubnetThreshold = 20.
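As a sketch, relaxing the heartbeat settings for a stretch cluster looks like this (the properties are real cluster common properties; pick thresholds that suit your inter-site latency):

```powershell
# Relax cluster heartbeat sensitivity for a stretch cluster.
# Defaults are aggressive: 1 heartbeat/second, 5 missed = node considered down.
$cluster = Get-Cluster

# Nodes in the same site/subnet: allow 10 missed heartbeats.
$cluster.SameSubnetThreshold  = 10

# Nodes in different sites/subnets: allow 20 missed heartbeats.
$cluster.CrossSubnetThreshold = 20

# Verify the new values.
$cluster | Format-List SameSubnetDelay, SameSubnetThreshold, CrossSubnetDelay, CrossSubnetThreshold
```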

Different data centers = different subnets. They are using Network Name Resources for things like file shares, which are registered in DNS depending on which site the resource is active in. The NNR has IP address A and IP address B. Note that DNS registrations need to be replicated and the TTL has to expire. If you fail over something like a file share, there will be some RTO while DNS catches up.

If you are stretching Hyper-V clusters then you can use HNV to abstract the IPs of the VMs after failover.

Another strategy is to prefer local failover: the HA scenario is to fail over locally; the DR scenario is to fail over remotely.

You can stretch VLANs across sites – your network admins will stop sending you Christmas cards.

There are network abstraction devices from the likes of Cisco, which offer the same kind of IP abstraction that HNV offers.

(Get-Cluster).SecurityLevel = 2 will encrypt cluster traffic on untrusted networks.

Quorum Considerations

When nodes cannot talk to each other then they need a way to reconcile who stays up and who “shuts down” (cluster activities). Votes are assigned to each node and a witness. When a site fails then a large block of votes disappears simultaneously. Plan for this to ensure that quorum is still possible.

In a stretch cluster you ideally want a witness in site C, reachable via a network connection independent of the Site A – Site B link. The witness is then available even if one site goes offline or the A-B link goes down. This witness is a file share witness. Objection: “we don’t have a 3rd site”.

In WS2016, you can use a cloud witness in Azure. It’s a blob over HTTP in Azure.

Demo: Created a storage account in Azure and got the key. A container holds a sequence number, just like a file share witness. He configures cluster quorum as usual, chooses Select a Witness, selects Configure a Cloud Witness, then enters the storage account name and pastes in the key. Now the cluster starts using Azure as the 3rd-site witness. A very affordable solution using a teeny bit of Azure storage. The cluster manages the permissions of the blob file. The blob stores only a sequence number – there is no sensitive private information. For an SME, a single Azure credit ($100) might last a VERY long time. In testing, they haven’t been able to rack up a charge of even $0.01 per cluster!
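The demo steps translate to a one-liner in PowerShell (a sketch; “contosowitness” is a hypothetical storage account name, and Set-ClusterQuorum gained the -CloudWitness option in WS2016):

```powershell
# Configure an Azure blob as the cluster's "third site" witness.
# "contosowitness" is a hypothetical storage account; paste the
# account access key copied from the Azure portal.
Set-ClusterQuorum -CloudWitness -AccountName "contosowitness" -AccessKey "<access-key>"

# Confirm the witness resource now in use.
Get-ClusterQuorum | Format-List QuorumResource
```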

Controlling Failover

Clustering in WS2012 R2 can survive a 50% loss of votes at once. One site is automatically elected to win. It’s random by default but you can configure it. You can also configure manual failover between sites by toggling the votes in the DR site – remove the votes from the DR site nodes. You can set preferred owners for resources too.
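Removing the DR-site votes looks roughly like this (a sketch; DRNode1/DRNode2 are hypothetical node names):

```powershell
# Strip votes from the DR site so the primary site always wins quorum;
# failover to DR becomes a deliberate, manual act.
(Get-ClusterNode -Name "DRNode1").NodeWeight = 0
(Get-ClusterNode -Name "DRNode2").NodeWeight = 0

# To fail over to DR, give the votes back (and optionally remove the
# votes from the primary-site nodes):
# (Get-ClusterNode -Name "DRNode1").NodeWeight = 1

# Review current vote assignments.
Get-ClusterNode | Format-Table Name, NodeWeight, State
```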

Storage Considerations

Elden hands over to Ned. Ned will cover Storage Replica. I have to leave at this point … but Ned is covering this topic in full length later on today.

Ignite 2015 – Spaces-Based, Software-Defined Storage–Design and Configuration Best Practices

Speakers: Joshua Adams and Jason Gerend, Microsoft.

Designing a Storage Spaces Solution

  1. Size your disks for capacity and performance
  2. Size your storage enclosures
  3. Choose how to handle disk failures
  4. Pick the number of cluster nodes
  5. Select a hardware solution
  6. Design your storage pools
  7. Design your virtual disks

Size your disks – for capacity (HDDs)

  1. Identify your workloads and resiliency type: Parity for backups and mirror for everything else.
  2. Estimate how much raw capacity you need: current capacity × % data growth × data copies (if you’re using mirrors). Add 12% initially for automatic virtual disk repairs and metadata overhead. Example: 135 TB × 1.1 growth × 3 data copies + 12% ≈ 499 TB raw capacity.
  3. Size your HDDs: pick big 7200 RPM NL-SAS HDDs. Fast HDDs are not required if using an SSD tier.
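The raw-capacity estimate in step 2 is simple arithmetic; as a sketch of the worked example (the 10% growth factor is an assumption that makes the published ~499 TB figure work out):

```powershell
# Raw capacity estimate: 135 TB current data, assumed 10% growth,
# 3-way mirror, plus 12% for automatic repairs and metadata.
$currentTB  = 135
$growth     = 1.1    # assumed 10% data growth factor
$dataCopies = 3      # 3-way mirror = 3 data copies
$overhead   = 1.12   # 12% repair + metadata overhead

$rawTB = $currentTB * $growth * $dataCopies * $overhead
"{0:N0} TB raw capacity required" -f $rawTB   # ~499 TB
```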

Software Defined Storage Calculator allows you to size and design a deployment and it generates the PowerShell. Works with WS2012 R2 and WS2016, disaggregated and hyperconverged deployments.

Size your disks – for performance (SSDs)

  1. How many SSDs to use. Sweet spot is 1 SSD for every 2-4 HDDs. Typically 4-5 SSDs per enclosure per pool. More SSDs = more absolute performance
  2. Determine the SSD size. 800 GB SSDs are typical. Larger SSD capacity = can hold more active data. Set aside around 10% of SSD capacity for automatic repairs after an SSD failure.

Example: 36 x 800 GB SSDs.

Size your Enclosures

  1. Pick the enclosure size (12, 24, 60, etc. disks)
  2. Pick the number of enclosures. If you have 3 or 4 then you have enclosure awareness/fault tolerance, depending on type of mirroring.
  3. Each enclosure should have an identical number of disks.

Example: 3 x 60-bay JBODs, each with 48 HDDs and 12 SSDs.

The column count is fixed across the 2 tiers, so the smaller tier (SSD) limits the column count. 3-4 columns is the sweet spot.

Expanding pools has an overhead. Not trivial but it works. Recommend that you fill JBODs.

Choose how to Handle Disk Failures

  1. Decide how many simultaneous disk failures to tolerate. Use 2 data copies for small deployments and disks, and/or less important data. Use 3 data copies for larger deployments and disks, and for more important data.
  2. Plan to automatically repair disks. Instead of hot spares, set aside pool capacity to automatically replace failed disks. This also affects column count … more later.

Example: 3-way mirrors.

Pick the number of Cluster Nodes

Start with 1 node per enclosure and scale up/down depending on the amount of compute required. This isn’t about performance; it’s about how much compute you can afford to lose and still retain HA.

Example: 3 x 3 = 3 SOFS nodes + 3 JBODs.

Select a hardware vendor

  1. DataON
  2. Dell
  3. HP
  4. RAID Inc
  5. Microsoft/Dell CPS

Design your Storage Pools

  1. Management domains: put your raw disks in the pool and manage them as a group. Some disk settings are applied at the pool level.
  2. More pools = more to manage. Pools = fault domains. More pools = less risk, thanks to increased resiliency, but also more resiliency overhead.

Start with 84 disks per pool.

Divide disks evenly between pools.

Design your Virtual Disks

  • Where storage tiers, write-back cache and enclosure awareness are set.
  • More VDs = more uniform load balancing, but more to manage.
  • This is where column count comes in. More columns = more throughput, but also more latency. 3-4 columns is best.
  • Load balancing is dependent on identical virtual disks.
  • To automatically repair after a disk failure, need at least one more disk per tier than columns for the smallest tier, which is usually the SSD tier.
  1. Set aside 10% of SSD and HDD capacity for repairs.
  2. Start with 2 virtual disks per node.
  3. Add more to keep virtual disk size to 10 TB or less. Divide SSD and HDD capacity evenly between virtual disks. Use 3-4 columns if possible.
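Putting those guidelines together, creating one of the identical tiered virtual disks might look like this (a sketch with hypothetical pool/tier names and sizes; real deployments should take the sizes from the design calculator):

```powershell
# Fetch the tier definitions from the pool (created earlier with New-StorageTier).
# "SSDTier", "HDDTier" and "Pool1" are hypothetical names.
$ssd = Get-StorageTier -FriendlyName "SSDTier"
$hdd = Get-StorageTier -FriendlyName "HDDTier"

# One of several identical virtual disks: 3-way mirror, 4 columns,
# SSD + HDD tiers, total size kept at or under 10 TB.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "VDisk01" `
    -ResiliencySettingName Mirror `
    -NumberOfDataCopies 3 `
    -NumberOfColumns 4 `
    -StorageTiers $ssd, $hdd `
    -StorageTierSizes 800GB, 9TB `
    -WriteCacheSize 1GB
```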

Best Practices for WS2012 R2

  • Scale by adding fully populated clusters. Get used to the concept of storage/compute/networking stamps.
  • Monitor your existing workloads for performance. The more you know about the traits of your unique workloads, the better future deployments will be.
  • Do a PoC deployment. Use DiskSpd and fault injection to stress the solution. Monitor the storage tiers performance to determine how much SSD capacity you need to fit a given scale of your workloads into SSD tiers.

WORK WITH A TRUSTED SOLUTION VENDOR. Not all hardware is good, even if it is on the HCL. Some are better than others, and some suck. In my opinion Intel and Quanta suck. DataON is excellent. Dell appears to have gone through hell during CPS development to become OK. And some disks, e.g. SanDisk, are the spawn of Satan, in my experience – note that Dell use SanDisk and Toshiba, so demand Toshiba-only SSDs from Dell. HGST SSDs are excellent.

Deployment Best Practices

  • Disable TRIM on SSDs. Some drives degrade performance with TRIM enabled.
  • Disable all disk-based caches – if enabled, they degrade performance when write-through is used (Hyper-V).
  • Use LB (least blocks) for MPIO policy. For max performance, set individual SSDs to Round Robin. This must be done on each SOFS node.
  • Optimize Storage Spaces repair settings on SOFS. Use Fast Rebuild. Change it from Auto to Always on the pool. This means that 5 minutes after a write failure, a rebuild will automatically start. Pulling a disk does not trigger an automatic rebuild – an expensive process.
  • Install the latest updates. Example: repair process got huge improvement in November 2014 update.
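The fast-rebuild setting mentioned above is a pool property; as a sketch ("Pool1" is a hypothetical pool name):

```powershell
# Set the pool to automatically retire failed/missing disks and rebuild
# into reserved pool capacity, instead of relying on hot spares.
Set-StoragePool -FriendlyName "Pool1" -RetireMissingPhysicalDisks Always

# Check the current setting.
Get-StoragePool -FriendlyName "Pool1" |
    Format-List FriendlyName, RetireMissingPhysicalDisks
```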

Deployment & Management Best Practices

  • Deploy using VMM or PowerShell. FCM is OK for small deployments.
  • VMM is great for some stuff, but in 2012 R2 it doesn’t do tiering etc. It can create the cluster well and manage shares, but for disk creation, use PowerShell.
  • Monitor it using SCOM with the new Storage Spaces management pack.
  • Also use Test-StorageHealth.PS1 to do some checks occasionally. It needs tweaking to size it for your configuration.

Design Closing Thoughts

  • Storage Spaces solutions offer: 2-4 cluster nodes and 1-4 JBODs. Store 100 to as many as 2000 VMs.
  • Storage Pool design: HDDs provide most of the capacity; SSDs offer performance. Up to 84 disks per pool.
  • Virtual Disk design: Set aside 10% of SSD and HDD capacity for repairs. Start with 2 VDs per node. Max 10 TB per virtual disk. 3-4 columns for balanced performance.

Coming in May

  • Storage Spaces Design Considerations Guide (basis of this presentation)
  • Storage Spaces Design Calculator (spreadsheet used in this presentation)

Ignite 2015–Hyper-V Storage Performance with Storage Quality of Service

I am live blogging this session so hit refresh to see more.

Speakers: Senthil Rajaram and Jose Barreto.

This session is based on what’s in TPv2. There is a year of development and FEEDBACK left, so things can change. If you don’t like something … tell Microsoft.

Storage Performance

  1. You need to measure to shape
  2. Storage control allows shaping
  3. Monitoring allows you to see the results – do you need to make changes?

Rules

  • Maximum Allowed: Easy – apply a cap.
  • Minimum Guaranteed: Not easy. It’s a comparative value to other flows. How do you do fair sharing? A centralized policy controller avoids the need for complex distributed solutions.

The Features in WS2012 R2

There are two views of performance:

  • From the VM: what the customer sees – using perfmon in the guest OS
  • From the host: What the admin sees – using the Hyper-V metrics

VM Metrics allow performance data to move with a VM: ((Get-VM -Name VM01) | Measure-VM).HardDiskMetrics … it’s Hyper-V Resource Metering – Enable-VMResourceMetering.
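A sketch of enabling and reading the metrics ("VM01" is a hypothetical VM name):

```powershell
# Turn on Hyper-V Resource Metering for the VM.
Get-VM -Name "VM01" | Enable-VMResourceMetering

# Later, collect the report; the metering data travels with the VM.
$report = Get-VM -Name "VM01" | Measure-VM
$report.HardDiskMetrics    # per-VHD disk metrics (normalized IOPS etc.)

# Reset the counters when you're done with a measurement window.
Get-VM -Name "VM01" | Reset-VMResourceMetering
```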

Normalized IOPS

  • Counted in 8 K blocks – everything is a multiple of 8 K.
  • Smaller than 8 K counts as 1.
  • More than 8 K is counted in multiples, e.g. 9 K = 2.

This is just an accounting trick. Microsoft is not splitting/aggregating IOs.
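The accounting rule is just a ceiling division by 8 K; as a sketch:

```powershell
# Normalized IOPS accounting: each IO is billed in 8 KB units,
# with anything under 8 KB rounded up to 1.
function Get-NormalizedIOCount {
    param([long]$IoSizeBytes)
    [math]::Ceiling($IoSizeBytes / 8KB)
}

Get-NormalizedIOCount 4KB    # -> 1 (smaller than 8 K counts as 1)
Get-NormalizedIOCount 8KB    # -> 1
Get-NormalizedIOCount 9KB    # -> 2 (9 K is billed as two 8 K units)
Get-NormalizedIOCount 64KB   # -> 8
```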

Used by:

  • Hyper-V Storage Performance Counters
  • Hyper-V VM Metrics (HardDiskMetrics)
  • Hyper-V Storage QoS

Storage QoS in WS2012 R2

Features:

  • Metrics – per VM and VHD
  • Maximum IOPS per VHD
  • Minimum IOPS per VHD – alerts only

Benefits:

  • Mitigate impact of noisy neighbours
  • Alerts when minimum IOPS are not achieved

Long and complicated process to diagnose storage performance issues.

Windows Server 2016 QoS Introduction

Moving from managing IOPS on the host/VM to managing IOPS on the storage system.

Simple storage QoS system that is installed in the base bits. You should be able to observe performance for the entire set of VMs. Metrics are automatically collected, and you can use them even if you are not using QoS. No need to log into every node using the storage subsystem to see performance metrics. You can create policies per VM, VHD, service or tenant, and manage it all with PoSH or VMM.

This is a SOFS solution. One of the SOFS nodes is elected as the policy manager – a HA role. All of the nodes in the cluster share performance data, and the PM is the “thinker”.

  1. Measure current capacity at the compute layer.
  2. Measure current capacity at the storage layer
  3. use algorithm to meet policies at the policy manager
  4. Adjust limits and enforce them at the compute layer

In TP2, this cycle is done every 4 seconds. Why? Storage and workloads are constantly changing. Disks are added and removed. Caching makes “total IOPS” impossible to calculate. The workloads change … a SQL DB gets a new index, or someone starts a backup. Continuous adjustment is required.

Monitoring

On by default. You can query the PM to get a summary of what’s going on right now.

Available data returned by a PoSH object:

  • VHD path
  • VM Name
  • VM Host name
  • VM IOPS
  • VM latency
  • Storage node name
  • Storage node IOPS
  • Storage node latency

Get-StorageQoSFlow – performance of all VMs using this file server/SOFS

Get-StorageQoSVolume – performance of each volume on this file server/SOFS

There are initiator metrics (the VM’s perspective) and storage metrics. Things like caching can cause differences between the two.

Get-StorageQoSFlow | Sort InitiatorIOPS | FT InitiatorName, InitiatorIOPS, InitiatorLatency

Working not with peaks/troughs but with averages over 5 minutes. The Storage QoS metrics, averaged over the last 5 minutes, are rarely going to match the live metrics in perfmon.

You can use this data: export to CSV, open in Excel pivot tables
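Exporting the flow data for that kind of analysis is a single pipeline (a sketch; the selected property names match the flow fields listed above, and the output path is hypothetical):

```powershell
# Dump the current QoS flow view to CSV for pivot-table analysis in Excel.
Get-StorageQosFlow |
    Select-Object InitiatorName, FilePath, StorageNodeName,
                  InitiatorIOPS, InitiatorLatency,
                  StorageNodeIOPS, StorageNodeLatency |
    Export-Csv -Path "C:\Temp\QoSFlows.csv" -NoTypeInformation
```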

Deploying Policies

Three elements in a policy:

  • Max: hard cap
  • Min: Guaranteed allocation if required
  • Type: Single or Multi-instance

You create policies in one place and deploy the policies.

Single instance: an allocation of IOPS shared by a group of VMs. Multi-instance: a performance tier – every VM gets the same allocation, e.g. max IOPS = 100 and each VM gets that.

Storage QoS works with Shared VHDX

Active/Active: Allocation split based on load. Active/Passive: Single VM can use full allocation.

This solution works with Live Migration.

Deployment with VMM

You can create and apply policies in VMM 2016. Create in Fabric > Storage > QoS Policies. Deploy in VM Properties > Hardware Configuration > <disk> > Advanced. You can deploy via a template.

PowerShell

New-StorageQoSPolicy –CimSession FS1 –Name sdjfdjsf –PolicyType MultiInstance –MaximumIOPS 200

Get-VM –Name VM01 | Get-VMHardDiskDrive | Set-VMHardDiskDrive –QosPolicy $Policy

Get-StorageQoSPolicy –Name sdfsdfds | Get-StorageQoSFlow … see data on those flows affected by this policy. Pulls data from the PM.

Demo

The way they enforce max IOPS is to inject latency in that VM’s storage. This reduces IOPS.

Designing Policies

  • No policy: no shaping. You’re just going to observe uncontrolled performance. Each VM gets at least 1 IOPS
  • Minimum Only: A machine will get at least 200 IOPS, IF it needs it. VM can burst. Not for hosters!!! Don’t set false expectations of maximum performance.
  • Maximum only: Price banding by hosters or limiting a noisy neighbour.
  • Minimum < Maximum, e.g. between 100-200: Minimum SLA and limited max.
  • Min = Max: VM has a set level of performance, as in Azure.

Note that VMs do not use min IOPS if they don’t have the workload for it. It’s a min SLA.

Storage Health Monitoring

If total Min of all disks/VMs exceeds the storage system then:

  • QoS does its best to fair-share based on proportion.
  • Raises an alert.

In WS2016 there is 1 place to get alerts for SOFS, called Storage Health Monitoring. It’s a new service on the SOFS cluster. You’ll get alerts on JBOD fans, disk issues, QoS, etc. The alerts are only there while the issue is there, i.e. if the problem goes away then the alert goes away. There is no history.

Get-StorageSubSystem *Cluster* | Debug-StorageSubSystem

You can register triggers to automate certain actions.

Right now we spend 10x more than we need to in order to ensure VM performance. Storage QoS reduces spend by using a needle to fix issues instead of a sledgehammer. We can use intelligence to solve performance issues instead of a bank account.

In Hyper-V converged solution, the PM and rate limiters live on the same tier. Apparently there will be support for a SAN – I’m unclear on this design.

Ignite 2015–Nano Server: The Future of Windows Server

Speaker: Jeffrey Snover

Reasons for Nano Server, the GUI-less installation of Windows Server

 

  • It’s a cloud play. For example, minimize patching. Note that Azure does not have Live Migration so patching is a big deal.
  • CPS can have up to 16 TB of RAM moving around when you patch hosts – no service interruption but there is an impact on performance.
  • They need a server optimized for the cloud. MISFT needs one, and they think cloud operators need one too.

Details:

  • Headless, there is no local interface and no RDP. You cannot do anything locally on it.
  • It is a deep re-factoring of Windows Server. You cannot switch between Nano and Core/Full UI.
  • The roles they are focused on are Hyper-V, SOFS and clustering.
  • They also are focusing on born-in-the-cloud applications.
  • There is a zero-footprint model. No roles or features are installed by default. It’s a functionless server by default.
  • 64-bit only
  • No special hardware or drivers required.
  • Anti-malware is built in (Defender) and on by default.
  • They are working on moving over the System Center and app insights agents
  • They are talking to partners to get agent support for 3rd party management.
  • The Nano installer is on the TP2 preview ISO in a special folder. Instructions here.

Demo

  • They are using 3 x NUC-style PCs as their Nano Server cluster demo lab. The switch is bigger than the cluster, and takes longer to boot than Nano Server. One machine is a GUI management machine and 2 nodes are a cluster. They use remote management only – because that’s all Nano Server supports.
  • They just do some demos, like Live Migration and PowerShell
  • When you connect to a VM, there is a black window.
  • They take out a 4th NUC that has Nano Server installed already, connect it up, boot it, and add it to the cluster.

Notes: this demo goes wrong. Might have been easier to troubleshoot with a GUI on the machine :)

Management

  • “removing the need” to sit in front of a server
  • Configuration via “Core PoSH” and DSC
  • Remote management/automation via Core PowerShell and WMI: Limited set of cmdlets initially. 628 cmdlets so far (since January).
  • Integrate it into DevOps tool chains

They want to “remove the drama and heroism from IT”. Server dies, you kill it and start over. Oh, such a dream. To be honest, I hardly ever have this issue with hosts, and I could never recommend this for actual application/data VMs.

They do a query for processes with memory more than 10 MB. There are 5.

Management Tools

Some things didn’t work well remotely: Device Manager and remote event logging. Microsoft is improving these tools to make remote management 1st class.

There will be a set of web-based tools:

  • Task manager
  • Registry editor
  • Event viewer
  • Device manager
  • sconfig
  • Control panel
  • File Explorer
  • Performance monitor
  • Disk management
  • Users/groups Manager

Also can be used with Core, MinShell, and Full UI installations.

We see a demo of web-based management, which appears to be the Azure Stack portal. This includes registry editor and task manager in a browser. And yes, they run PoSH console on the Nano server running in the browser too. Azure Stack could be a big deal.

Cloud Application Platform:

  • Hyper-V hosts
  • SOFS nodes
  • In VMs for cloud apps
  • Hyper-V containers

Stuff like PoSH management coming in later releases.

Terminology

  • At the base there is Nano Server
  • Then there is Server …. what used to be Server Core
  • Anything with a GUI is now called Client, what used to be called Full UI

Client is what MSFT reckons should only be used for RDS and Windows Server Essentials. As has happened since W2008, customers and partners will completely ignore this 70% of the time, if not more.

The Client experience will never be available in containers.

The presentation goes on to talk about development and Chef automation. I leave here.

Platform Vision & Strategy–Storage Overview

Speakers: Siddhartha Roy and Jose Barreto

This will be a very interesting session for people :)

What is Software Defined Storage?

Customers are asking for the cost and scale of Azure for their own data center, and this is what Microsoft has done. Most stuff came down from Azure, and some bits went from Server into Azure.

Traits:

  • Cloud-inspired infrastructure and design. Using industry standard h/w, integrating cloud design points in s/w. Driving cloud cost efficiencies.
  • Evolving technologies: Flash is transforming storage. Networks are delivering extreme performance. Maturity in s/w-based solutions. VMs and containers. MSFT expects 100 Gbps to make an impact; Mellanox thinks the sweet spot will be 25 Gbps.
  • Data explosion: device proliferation, modern apps, unstructured data analytics
  • Scale out with simplicity: integrated solutions, rapid time to solution, policy-based management

Customer Choice

The usual 3 clouds story. Then some new terms:

  • Private cloud with traditional storage: SAN/NAS
  • Microsoft Azure Stack Storage is private cloud with Microsoft SDS.
  • Hybrid Cloud Storage: StorSimple
  • Azure storage: public cloud

The WS2012 R2 Story

The model of shared JBOD + Windows Server = Scale-Out File Server is discussed. Microsoft has proven that it scales and performs quite cost effectively.

Storage Spaces is the storage system that replaces RAID to aggregate disks into resilient pools in the Microsoft on-premises cloud.

In terms of management, SCVMM allows bare metal deployment of an SOFS, and then do the storage provisioning, sharing and permissions from the console. There is high performance with tiered storage with SSD and HDD.

Microsoft talks about CPS – ick! – I’ll never see one of these overpriced and old h/w solutions, but the benefit of Microsoft investing in this old Dell h/w is that the software solution has been HAMMERED by Microsoft and we get the fixes via Windows Update.

Windows Server 2016

Goals:

  • Reliability: Cross-site replication, improved tolerance to transient failures.
  • Scalability: Manage noisy neighbours and demand surges of VMs
  • Manageability: Easier migration to the new OS version. Improved monitoring and reduced incident costs.
  • Reduced cost: again. More cost-effective by using volume h/w. Use SATA and NVMe in addition to SAS.

Distributed Storage QoS

Define min and max policies on the SOFS. A rate limiter (hosts) and IO scheduler communicate and coordinate to enforce your rules to apply fair distribution and price banding of IOPS.

SCVMM and OpsMgr management with PowerShell support. Do rules per VHD, VM, service or tenant.

Rolling Upgrades

Check my vNext features list for more. The goal is much easier “upgrades” of a cluster so you can adopt a newer OS more rapidly and easily. Avoid disruption of service.

VM Storage Resiliency

When you lose all paths to VM’s physical storage, even redirected IO, then there needs to be a smooth process to deal with this, especially if we’re using more affordable standardized hardware. In WS2016:

  • The VM stack is notified.
  • The VM moves into a PausedCritical state and will wait for storage to recover
  • The VM can smoothly resume when storage recovers

Storage Replica

Built-in synchronous and asynchronous replication. Can be used to replicate between different storage systems, e.g. SAN to SAN. It is volume replication. Can be used to create synchronous (stretch) clusters or asynchronous (separate) clusters across 2 sites.

Ned Pyle does a live demo of a synchronously replicated CSV that stores a VM. He makes a change in the VM. He then fails the cluster node in site 1, and the CSV/VM fail over to site 2.

Storage Spaces Direct (S2D)

No shared JBODs or SAS network. The cluster uses disks like SAS, SATA (SSD and/or HDD) or NVMe and stretches Storage Spaces across the physical nodes. NVMe offers massive performance. SATA offers really low pricing. The system is simple: 4+ servers in a cluster, with Storage Spaces aggregating all the disks. If a node fails, high-speed networking will recover the data to fault tolerant nodes.
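Standing this up is, at its simplest, one cmdlet plus the usual volume provisioning (a sketch based on the released WS2016 bits; the preview-era syntax may have differed, and the names/sizes here are hypothetical):

```powershell
# Enable Storage Spaces Direct on an existing cluster of 4+ nodes
# with local SATA/SAS/NVMe disks. This claims the eligible local
# disks and builds a clustered storage pool spanning the nodes.
Enable-ClusterStorageSpacesDirect

# Then carve volumes from the pool as usual, e.g. a 2 TB CSV:
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VDisk01" `
    -FileSystem CSVFS_ReFS -Size 2TB
```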

Use cases:

  • Hyper-V IaaS
  • Storage for backup
  • Hyper-converged
  • Converged

There are two deployment models:

  • Converged (storage cluster + Hyper-V cluster) with SMB 3.0 networking between the tiers.
  • Hyper-Converged: Hyper-V + storage on 1 tier of servers

Customers have the choice:

  • Storage Spaces with shared JBOD
  • CiB
  • S2D hyper-converged
  • S2D converged

There is a reference profile for hardware vendors to comply with for this solution, e.g. Dell PowerEdge R730XD, HP Apollo 2000, Cisco UCS C3160, Lenovo x3650 M5, and a couple more.

In the demo:

4 NVMe + a bunch of SATA disks in each of 5 nodes. S2D aggregates the disks into a single pool. A number of virtual disks are created from the pool. They have a share per virtual disk, and VM storage in the shares.

There’s a demo of an IOPS stress test. He’s added a node (a 5th added to the 4-node cluster), so IOPS run on just the old nodes at first. He starts a live rebalancing of Storage Spaces (this is where the high-speed RDMA networking is required). Now we see IOPS spike as blocks are rebalanced to consume an equal amount of space across all 5 nodes. This mechanism is how you expand a S2D cluster. It takes a few minutes to complete. Compare that to your SAN!

In summary: great networking + ordinary servers + cheap SATA disk gives you great volume at low cost, combined with SATA SSD or NVMe for peak performance for hot blocks.

Storage Health Monitoring

Finally! A consolidated subsystem for monitoring health events of all storage components (from the spindle up). Simplified problem identification and alerting.

Azure-Consistent Storage

This is coming in a future release. Coming to SDS. Delivers Azure blobs, tables and account management services for private and hosted clouds. Deployed on SOFS and Storage Spaces. Deployed as Microsoft Azure Stack cloud services. Uses Azure cmdlets with no changes. Can be used for PaaS and IaaS.

More stuff:

  • SMB Security
  • Deduplication scalability
  • ReFS performance: Create/extend fixed VHDX and merge checkpoints with ODX-like (promised) speed without any hardware dependencies.

Jose runs a test: S2D running DiskSpd against local disk: 8.3 gigabytes per second with 0.003 seconds latency. He does the same from a Hyper-V VM and gets the same performance (over a 100 Gbps ConnectX-4 card from Mellanox).

Now he adds 3 NVMe cards from Micron. Latency is down to 0.001 ms with throughput of 11 gigabytes per second. Can they do it remotely? Yup – over a single ConnectX-4 NIC they get the same rate of throughput. Incredible!

Less than 15% CPU utilization.

Microsoft News – 23 April 2015

I’ve been really busy either preparing training, delivering training, on customer sites, or prepping my two sessions for Ignite. Here’s the roundup of recent Microsoft news for infrastructure IT pros:

Hyper-V

Windows Server

Windows 10

Azure

Office 365

Intune

Miscellaneous

Reminder: Webinar on ODX for Hyper-V and VAAI for vSphere Storage Enhancement

Here’s a reminder of the webinar by StarWind that I am co-presenting with Max Kolomyeytsev. We’ll be talking about offloading storage operations to a SAN using ODX for Windows Server & Hyper-V and VAAI for vSphere. It’s a great piece of functionality and there are some things to know before using it. The session starts tomorrow at 19:00 UK/IE time, 20:00 CET, and 14:00 EST. Hopefully we’ll see you there!

Register here.


Altaro Webinar Recording and Slides – What’s New in Hyper-V vNext

I recently co-presented a webinar by Altaro with Rick Claus (Microsoft) and Andrew Syrewicze (MVP) on what’s coming in the next version of Windows Server Hyper-V. Altaro has a recording of the webinar online. That page will be updated soon with a written Q&A from the session; we had A LOT of questions and Altaro asked me to write out responses, which I did last Friday night. You can also download a PDF copy of the slides from the session.

Thank you to everyone that joined us. We had a great number of people tuned in – I was stunned when the folks at Altaro broke down the numbers. Hopefully, I’ll see some of you tomorrow night in the webinar I am co-presenting for StarWind on using ODX or VAAI to enhance storage performance for Hyper-V or vSphere respectively.

Microsoft News – 8 April 2015

There’s a lot of stuff happening now. The Windows Server vNext Preview expires on April 15th and Microsoft is promising a fix … the next preview isn’t out until May (maybe with Ignite on?). There’s rumours of Windows vNext vNext. And there’s talk of open sourcing Windows – which I would hate. Here’s the rest of what’s going on:

Hyper-V

Windows Server

Windows Client

Azure

I Am Co-Hosting A Webinar On ODX/VAAI For Optimising Storage

On April 21st at 2pm ET (USA) / 7pm UK/IE, I will be co-hosting a StarWind Software webinar with Max Kolomyeytsev. I will be talking about using ODX in a Hyper-V scenario, and Max will cover the equivalent (VAAI) from the vSphere perspective.


Register here.