Platform Vision & Strategy–Storage Overview

Speakers: Siddhartha Roy and Jose Barreto

This should be a very interesting session for a lot of people.

What is Software Defined Storage?

Customers are asking for the cost and scale of Azure in their own data centers, and this is what Microsoft has delivered. Most of it came down from Azure, and some bits went from Windows Server up into Azure.

Traits:

  • Cloud-inspired infrastructure and design. Using industry standard h/w, integrating cloud design points in s/w. Driving cloud cost efficiencies.
  • Evolving technologies: Flash is transforming storage. Networking is delivering extreme performance. Maturity in s/w based solutions. VMs and containers. Microsoft expects 100 Gbps to make an impact; Mellanox thinks the sweet spot will be 25 Gbps.
  • Data explosion: device proliferation, modern apps, unstructured data analytics
  • Scale out with simplicity: integrated solutions, rapid time to solution, policy-based management

Customer Choice

The usual 3 clouds story. Then some new terms:

  • Private cloud with traditional storage: SAN/NAS
  • Microsoft Azure Stack Storage is private cloud with Microsoft SDS.
  • Hybrid Cloud Storage: StorSimple
  • Azure storage: public cloud

The WS2012 R2 Story

The model of shared JBOD + Windows Server = Scale-Out File Server is discussed. Microsoft has proven that it scales and performs quite cost effectively.

Storage Spaces is the storage system that replaces RAID to aggregate disks into resilient pools in the Microsoft on-premises cloud.

In terms of management, SCVMM allows bare-metal deployment of a SOFS, and then you can do the storage provisioning, sharing, and permissions from the console. High performance comes from tiered storage that mixes SSD and HDD.
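
For reference, here's a rough sketch of how a tiered space gets built in PowerShell on WS2012 R2; the pool, tier, and vDisk names and the sizes are just examples, not anything from the session:

```powershell
# Pool up all the poolable disks in the shared JBOD (names/sizes are examples)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Clustered*" -PhysicalDisks $disks

# Define the SSD and HDD tiers
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# Create a mirrored, tiered virtual disk for the SOFS to share out
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" `
    -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB -ResiliencySettingName Mirror
```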

Microsoft talks about CPS – ick! – I’ll never see one of these overpriced and old h/w solutions, but the benefit of Microsoft investing in this old Dell h/w is that the software solution has been HAMMERED by Microsoft and we get the fixes via Windows Update.

Windows Server 2016

Goals:

  • Reliability: Cross-site replication, improved tolerance to transient failures.
  • Scalability: Manage noisy neighbours and demand surges of VMs.
  • Manageability: Easier migration to the new OS version. Improved monitoring and reduced incident costs.
  • Reduced cost: Once again, more cost-effective by using volume h/w; SATA and NVMe are supported in addition to SAS.

Distributed Storage QoS

Define min and max policies on the SOFS. A rate limiter (on the Hyper-V hosts) and an IO scheduler (on the SOFS) communicate and coordinate to enforce your rules, applying fair distribution of IOPS and enabling banding of IOPS by price/service tier.

Management is via SCVMM and OpsMgr, with PowerShell support. Rules can be applied per VHD, VM, service, or tenant.
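
Here's roughly what that looks like in PowerShell, assuming a WS2016 SOFS cluster; the policy name and the IOPS limits are made-up examples:

```powershell
# On the SOFS cluster: define a policy with minimum and maximum normalized IOPS
New-StorageQosPolicy -Name "Gold" -MinimumIops 500 -MaximumIops 5000

# On the Hyper-V host: attach the policy to every virtual hard disk of a VM
$policy = Get-StorageQosPolicy -Name "Gold"
Get-VM -Name "VM01" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```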

Rolling Upgrades

Check my vNext features list for more. The goal is much easier “upgrades” of a cluster so you can adopt a newer OS more rapidly and easily. Avoid disruption of service.
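
The process, as I understand it, is roughly the following; node names are examples:

```powershell
# One node at a time: drain it, remove it, rebuild it with the new OS, and add it back
Suspend-ClusterNode -Name "Node1" -Drain
Remove-ClusterNode -Name "Node1"
# ... clean-install Windows Server 2016 on Node1, then re-join it to the cluster ...
Add-ClusterNode -Name "Node1"

# Only once every node runs the new OS do you commit the cluster to the new level
Update-ClusterFunctionalLevel
```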

VM Storage Resiliency

When you lose all paths to a VM’s physical storage, including redirected IO, there needs to be a smooth process to deal with it, especially if we’re using more affordable standardized hardware. In WS2016:

  • The VM stack is notified.
  • The VM moves into a PausedCritical state and waits for the storage to recover.
  • The VM smoothly resumes when the storage recovers.
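
The pause behaviour can be tuned per VM; a minimal sketch, where the VM name and the timeout are examples (and I believe the timeout is measured in minutes):

```powershell
# Pause the VM in a critical state when its storage disappears, and give the storage
# up to 30 minutes to come back before the failure is treated as fatal
Set-VM -Name "VM01" -AutomaticCriticalErrorAction Pause -AutomaticCriticalErrorActionTimeout 30
```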

Storage Replica

Built-in synchronous and asynchronous replication. It is block-level volume replication, and it is storage-agnostic, so it can replicate between different storage systems, e.g. SAN to SAN. It can be used to create a synchronous stretch cluster across 2 sites, or asynchronous replication between separate clusters/servers.
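
A hedged sketch of server-to-server replication; all of the computer, replication group, volume, and log volume names are made-up examples:

```powershell
# Replicate volume D: from SRV01 to SRV02, with a log volume E: on each end;
# add -ReplicationMode Asynchronous for async replication over longer distances
New-SRPartnership -SourceComputerName "SRV01" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "E:" `
    -DestinationComputerName "SRV02" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "E:"
```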

Ned Pyle does a live demo of a synchronously replicated CSV that stores a VM. He makes a change in the VM. He then fails the cluster node in site 1, and the CSV/VM fail over to site 2.

Storage Spaces Direct (S2D)

No shared JBODs or SAS network. The cluster uses internal disks such as SAS, SATA (SSD and/or HDD) or NVMe, and stretches Storage Spaces across the physical nodes. NVMe offers massive performance. SATA offers really low pricing. The system is simple: 4+ servers in a cluster, with Storage Spaces aggregating all the disks. If a node fails, the data is rebuilt across the remaining fault-tolerant nodes over high-speed networking.
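
Standing one of these up is meant to be simple; a sketch assuming four nodes, using the cmdlet names from the shipping WS2016 bits (cluster and node names are examples):

```powershell
# Validate the nodes for S2D, build a cluster with no shared storage, then claim
# every eligible local disk into a single pool
Test-Cluster -Node "Node1","Node2","Node3","Node4" -Include "Storage Spaces Direct","Inventory","Network","System Configuration"
New-Cluster -Name "S2D-CLU01" -Node "Node1","Node2","Node3","Node4" -NoStorage
Enable-ClusterStorageSpacesDirect
```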

Use cases:

  • Hyper-V IaaS
  • Storage for backup
  • Hyper-converged
  • Converged

There are two deployment models:

  • Converged (storage cluster + Hyper-V cluster) with SMB 3.0 networking between the tiers.
  • Hyper-Converged: Hyper-V + storage on 1 tier of servers

Customers have the choice:

  • Storage Spaces with shared JBOD
  • CiB (Cluster-in-a-Box)
  • S2D hyper-converged
  • S2D converged

There is a reference profile for hardware vendors to comply with for this solution, e.g. Dell PowerEdge R730XD, HP Apollo 2000, Cisco UCS C3160, Lenovo x3650 M5, and a couple more.

In the demo:

4 NVMe cards + a bunch of SATA disks in each of 5 nodes. S2D aggregates the disks into a single pool. A number of virtual disks are created from the pool, there’s a share per vDisk, and the VMs are stored in the shares.
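
The vDisk-plus-share pattern from the demo maps to something like this; the volume name, size, path, share name, and account are all examples:

```powershell
# Carve a CSV-formatted volume out of the S2D pool, then publish it over SMB 3.0
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VDisk01" `
    -FileSystem CSVFS_ReFS -Size 2TB
New-SmbShare -Name "VMs01" -Path "C:\ClusterStorage\Volume1" -FullAccess "DEMO\Hyper-V-Hosts"
```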

There’s a demo of an IOPS stress test. He has added a node (a 5th node added to a 4-node cluster), and initially the IOPS are served by just the original nodes. He then starts a live rebalancing of Storage Spaces (this is where the high-speed RDMA networking is required). Now we see IOPS spike as blocks are rebalanced to consume an equal amount of space across all 5 nodes. This mechanism is how you expand an S2D cluster, and it takes just a few minutes to complete. Compare that to your SAN!!!
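
The expansion step in the demo boils down to two operations; a hedged sketch with example names:

```powershell
# Add the 5th node; its eligible local disks are claimed into the S2D pool
Add-ClusterNode -Name "Node5"

# Kick off the rebalance so existing data is spread evenly across all 5 nodes
Optimize-StoragePool -FriendlyName "S2D*"
```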

In summary: great networking + ordinary servers + cheap SATA disk gives you great volume at low cost, combined with SATA SSD or NVMe for peak performance for hot blocks.

Storage Health Monitoring

Finally! A consolidated subsystem for monitoring the health events of all storage components (from the spindle up). Simplified problem identification and alerting.
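
A peek at what this surfaces in PowerShell (a sketch, assuming a WS2016 storage cluster):

```powershell
# One rolled-up health view per clustered storage subsystem
Get-StorageSubSystem Cluster* | Get-StorageHealthReport

# Current faults (failed drive, missing cable, etc.) with recommended actions
Get-StorageSubSystem Cluster* | Debug-StorageSubSystem
```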

Azure-Consistent Storage

This is coming to Microsoft SDS in a future release. It delivers Azure blob, table, and account management services for private and hosted clouds. It is deployed on SOFS and Storage Spaces, as Microsoft Azure Stack cloud services. It uses the Azure cmdlets with no changes, and can be used for PaaS and IaaS.
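
I read "uses Azure cmdlets with no changes" as meaning you point the normal Azure storage context at your on-premises endpoint; a speculative sketch where the account name, key, and endpoint suffix are entirely made up:

```powershell
# Same Azure PowerShell cmdlets, just a different storage endpoint
$key = "<tenant storage account key>"
$ctx = New-AzureStorageContext -StorageAccountName "tenantaccount01" `
    -StorageAccountKey $key -Endpoint "azurestack.contoso.local"
Get-AzureStorageBlob -Container "images" -Context $ctx
```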

More stuff:

  • SMB Security
  • Deduplication scalability
  • ReFS performance: Create/extend fixed VHDX and merge checkpoints at ODX-like speeds (promised) without any hardware dependencies; see the sketch below.
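
An easy way to see the ReFS claim for yourself once you have an ReFS volume; the path and size are examples:

```powershell
# On an ReFS volume, creating a large fixed VHDX should be near-instant because the
# zeroing becomes a metadata operation rather than a long write
Measure-Command { New-VHD -Path "R:\Test\fixed.vhdx" -SizeBytes 100GB -Fixed }
```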

Jose runs a test: S2D running DiskSpd against local disk: 8.3 gigabytes per second with 0.003 seconds of latency. He does the same from a Hyper-V VM and gets the same performance (over a 100 Gbps ConnectX-4 card from Mellanox).
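
The exact DiskSpd parameters weren't shown; a throughput-oriented run of this general shape might look like the following, where the file path, block size, thread count, and queue depth are my guesses rather than the demo settings:

```powershell
# 100 GB test file, 512 KB sequential reads, 8 threads, 8 outstanding IOs each,
# 60 second run, with latency statistics
diskspd.exe -c100G -b512K -d60 -t8 -o8 -w0 -L C:\ClusterStorage\Volume1\testfile.dat
```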

Now he adds 3 NVMe cards from Micron. Latency is down to 0.001 ms with throughput of 11 gigabytes per second. Can they do it remotely? Yup; over a single ConnectX-4 NIC they get the same rate of throughput. Incredible!

Less than 15% CPU utilization.

2 thoughts on “Platform Vision & Strategy–Storage Overview”

  1. I really thought they’d be trying to add StorSimple in as a standard Role in Windows Server. Would be an instant win to offload old data on servers to Azure storage.

    1. StorSimple’s tiering is the Lego Duplo equivalent of tiering compared to Storage Spaces (the Lego Technic of tiering). Microsoft should dump StorSimple and add Azure storage accounts as a tier to Storage Spaces.
