{"id":14863,"date":"2013-06-06T14:50:45","date_gmt":"2013-06-06T13:50:45","guid":{"rendered":"https:\/\/aidanfinn.com\/?p=14863"},"modified":"2013-06-06T14:50:45","modified_gmt":"2013-06-06T13:50:45","slug":"teched-na-2013-software-defined-storage-in-windows-server-system-center-2012-r2","status":"publish","type":"post","link":"https:\/\/aidanfinn.com\/?p=14863","title":{"rendered":"TechEd NA 2013 &#8211; Software Defined Storage In Windows Server &#038; System Center 2012 R2"},"content":{"rendered":"<p>Speakers: Elden Christensen, Hector Linares, Jose Barreto, and Brian Matthew (last two are in the front row at least)<\/p>\n<p>4:12 SSDs in 60 drive jbod. <\/p>\n<p>Elden kicks off. He owns Failover Clustering in Windows Server.<\/p>\n<p><strong><u>New Approach To Storage<\/u><\/strong><\/p>\n<ul>\n<li>File based storage: high performance SMB protocol for Hyper-V storage over Ethernet networks.&#160; In addition: the scale-out file server to make SMB HA with transparent failover.&#160; SMB is the best way to do Hyper-V storage, even with backend SAN. 
<\/li>\n<li>Storage Spaces: Cost-effective business critical storage <\/li>\n<\/ul>\n<p><strong><u>Enterprise Storage Management Scenarios with SC 2012 R2<\/u><\/strong><\/p>\n<p>Summary: not forgotten.&#160; We can fully manage FC SAN from SysCtr via SMI-S now, including zoning.&#160; And the enhancements in WS2012 such as TRIM, UNMAP, and ODX offer great value.<\/p>\n<p>Hector, Storage PM in VMM, comes up to demo.<\/p>\n<p><strong><u>Demo: SCVMM<\/u><\/strong><\/p>\n<p>Into the Fabric view of the VMM console.&#160; Fibre Channel Fabrics is added to Providers under Storage.&#160; He browses to VMs and Services and expands an already deployed 1 tier service with 2 VMs.&#160; Opens the Service Template in the designer.&#160; Goes into the machine tier template.&#160; There we see that FC is surfaced in the VM template.&#160; It can dynamically assign or statically assign FC WWNs.&#160; There is a concept of fabric classification, e.g. production, test, etc.&#160; That way, Intelligent Placement can find hosts with the right FC fabric and put VMs there automatically for you. <\/p>\n<p>Opens a powered off VM in a service.&#160; 2 vHBAs.&#160; We can see the mapped Hyper-V virtual SAN, and the 4 WWNs (for seamless Live Migration).&#160; In Storage he clicks Add Fibre Channel Array.&#160; Opens a Create New Zone dialog.&#160; Can select storage array and FC fabric and the zoning is done.&#160; No need to open the SAN console.&#160; Can create a LUN, unmask it at the service tier \u2026 in other words provision a LUN to 64 VMs (if you want) in a service tier with just a couple of mouse clicks \u2026 in the VMM console. 
<\/p>\n<p>In the host properties, we see the physical HBAs.&#160; You can assign virtual SANs to the HBAs.&#160; Seems to offer more abstraction than the bare Hyper-V solution \u2013 but I\u2019d need a \u20ac50K SAN and rack space to test <img decoding=\"async\" class=\"wlEmoticon wlEmoticon-smile\" style=\"border-top-style: none; border-left-style: none; border-bottom-style: none; border-right-style: none\" alt=\"Smile\" src=\"https:\/\/aidanfinn.com\/wp-content\/uploads\/2013\/06\/wlEmoticon-smile6.png\" \/><\/p>\n<p>So instead of just adding vHBA support, they\u2019ve given us end-to-end deployment and configuration.<\/p>\n<p>Requirement: SMI-S provider for the FC SAN.<\/p>\n<p><strong><u>Demo: ODX<\/u><\/strong><\/p>\n<p>In 30 seconds, 3% of BITS VM template creation is done.&#160; Using the same setup but with ODX, the entire VM can be deployed and customized much more quickly.&#160; In just over 2 minutes the VM is started up.<\/p>\n<p>Back to Elden<\/p>\n<p><strong><u>The Road Ahead<\/u><\/strong><\/p>\n<p>WS2012 R2 is cloud optimized \u2026 short time frame since the last release, so they went with a focused approach to make the most of the time:<\/p>\n<ul>\n<li>Private clouds <\/li>\n<li>Hosted clouds <\/li>\n<li>Cloud Service Providers <\/li>\n<\/ul>\n<p>Focus on capex and opex costs.&#160; Storage and availability costs.<\/p>\n<p><strong><u>IaaS Vision<\/u><\/strong><\/p>\n<ul>\n<li>Dramatically lowering the costs and effort of delivering IaaS storage services <\/li>\n<li>Disaggregated compute and storage: independently manage and scale each layer. Easier maintenance and upgrade. <\/li>\n<li>Industry standard servers, networking and storage: inexpensive networks.&#160; Inexpensive shared JBOD storage.&#160; Get rid of the fear of growth and investment. 
<\/li>\n<\/ul>\n<p>SMB is the vision, not iSCSI\/FC, although those got great investments in WS2012 and SC 2012 R2.<\/p>\n<p><strong><u>Storage Management Pillars<\/u><\/strong><\/p>\n<p align=\"center\"><a href=\"https:\/\/aidanfinn.com\/wp-content\/uploads\/2013\/06\/picture053.jpg\"><img loading=\"lazy\" decoding=\"async\" title=\"picture053\" style=\"border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px\" border=\"0\" alt=\"picture053\" src=\"https:\/\/aidanfinn.com\/wp-content\/uploads\/2013\/06\/picture053_thumb.jpg\" width=\"504\" height=\"285\" \/><\/a><\/p>\n<p><strong><u>Storage Management API (SM-API)<\/u><\/strong><\/p>\n<p align=\"center\"><a href=\"https:\/\/aidanfinn.com\/wp-content\/uploads\/2013\/06\/DSCN0086.jpg\"><img loading=\"lazy\" decoding=\"async\" title=\"DSCN0086\" style=\"border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px\" border=\"0\" alt=\"DSCN0086\" src=\"https:\/\/aidanfinn.com\/wp-content\/uploads\/2013\/06\/DSCN0086_thumb.jpg\" width=\"504\" height=\"379\" \/><\/a><\/p>\n<p><strong><u>VMM + SOFS &amp; Storage Spaces<\/u><\/strong><\/p>\n<ul>\n<li>Capacity management: pool\/volume\/file share classification.&#160; File share ACL.&#160; VM workload deployment to file shares. <\/li>\n<li>SOFS deployment: bare metal deployment of file server and SOFS. <\/li>\n<li>Spaces provisioning <\/li>\n<\/ul>\n<p><strong><u>Guest Clustering With Shared VHDX<\/u><\/strong><\/p>\n<p>See yesterday\u2019s post.<\/p>\n<p><strong><u>iSCSI Target<\/u><\/strong><\/p>\n<ul>\n<li>Uses VHDX instead of VHD.&#160; Can import VHD, but not create it.&#160; Provision 64 TB and dynamically resize LUNs <\/li>\n<li>SMI-S support built in for standards based management, VMM. 
<\/li>\n<li>Can now manage an iSCSI cluster using SCVMM <\/li>\n<\/ul>\n<p>Back to Hector \u2026<\/p>\n<p><strong><u>Demo: SCVMM<\/u><\/strong><\/p>\n<p>Me: You should realise by now that System Center and Windows Server are developed as a unit and work best together.<\/p>\n<p>He creates a Physical Computer Profile.&#160; Can create a VM host (Hyper-V) or file server.&#160; The model is limited to that now, but later VMM <em>could<\/em> be extended to deploy other kinds of physical server in the data centre. <\/p>\n<p>Hector deploys a clustered file server.&#160; You can use an existing machine (enables roles and file shares on the existing OS) OR provision a bare metal machine (OS, cluster, etc, all done by VMM).&#160; He provisions the entire server, VMM provisions the storage space\/virtual disk\/CSV, and then a file share on a selected Storage Pool with a classification for the specific file share.<\/p>\n<p>Now he edits the properties of a Hyper-V cluster, selects the share, and VMM does all the ACL work.<\/p>\n<p>Basically, a few mouse clicks in VMM and an entire SOFS is built, configured, shared, and connected.&#160; No logging into the SOFS nodes at all.&#160; Only need to touch them to rack, power, network, and set BMC IP\/password.<\/p>\n<p><strong><u>SMB Direct<\/u><\/strong><\/p>\n<ul>\n<li>50% improvement for small IO workloads with SMB Direct (RDMA) in WS2012 R2. <\/li>\n<li>Increased performance for 8K IOPS <\/li>\n<\/ul>\n<p><strong><u>Optimized SOFS Rebalancing<\/u><\/strong><\/p>\n<ul>\n<li>SOFS clients are now redirected to the \u201cbest\u201d node for access <\/li>\n<li>Avoids unnecessary redirection <\/li>\n<li>Driven by ownership of CSV <\/li>\n<li>SMB connections are managed per share instead of per file server. <\/li>\n<li>Dynamically moves as CSV volume ownership changes \u2026 clustering balances CSV automatically. <\/li>\n<li>No admin action. 
<\/li>\n<\/ul>\n<p><strong><u>Hyper-V over SMB<\/u><\/strong><\/p>\n<p>Enables SMB Multichannel (more than 1 NIC) and Direct (RDMA \u2013 speed).&#160; Lots of bandwidth and low latency.&#160; Vacate a host really quickly.&#160; Don\u2019t fear those 1 TB RAM VMs <img decoding=\"async\" class=\"wlEmoticon wlEmoticon-smile\" style=\"border-top-style: none; border-left-style: none; border-bottom-style: none; border-right-style: none\" alt=\"Smile\" src=\"https:\/\/aidanfinn.com\/wp-content\/uploads\/2013\/06\/wlEmoticon-smile6.png\" \/><\/p>\n<p><strong><u>SMB Bandwidth Management<\/u><\/strong><\/p>\n<p>We now have 3 QoS categories for SMB:<\/p>\n<ul>\n<li>Default \u2013 normal host storage <\/li>\n<li>VirtualMachine \u2013 VM accessing SMB storage <\/li>\n<li>LiveMigration \u2013 Host doing LM <\/li>\n<\/ul>\n<p>Gives you granular control over converged networks\/fabrics because 1 category of SMB might be more important than others.<\/p>\n<p><strong><u>Storage QoS<\/u><\/strong><\/p>\n<p>Can set Maximum IOPS caps and Minimum IOPS alerts per VHDX.&#160; Cap IOPS per virtual hard disk, and get alerts when virtual hard disks aren\u2019t getting enough bandwidth \u2013 could lead to auto LM to another better host.<\/p>\n<p>Jose comes up \u2026<\/p>\n<p><strong><u>Demo: SOFS Rebalancing<\/u><\/strong><\/p>\n<p>Has a 2 node SOFS.&#160; 1 client: a SQL server.&#160; Monitoring via Perfmon, and both the SOFS nodes are getting balanced n\/w utilization caused by that 1 SQL server.&#160; Proof of connection balancing.&#160; Can also see that the CSVs are balanced by the cluster. <\/p>\n<p>Jose adds a 3rd file server to the SOFS cluster.&#160; It\u2019s just an Add operation of an existing server that is physically connected to the SOFS storage.&#160; VMM adds roles, etc, and adds the server.&#160; After a few minutes the cluster is extended.&#160; The CSVs are rebalanced across all 3 nodes, and the client traffic is rebalanced too. 
<\/p>\n<p><em>That demo was being done entirely with Hyper-V VMs and shared VHDX on a laptop.<\/em><\/p>\n<p>Another demo: Kicks off an 8K IO workload.&#160; Single client talking to a single server (48 SSDs in a single mirrored space) and 3 InfiniBand NICs per server.&#160; Averaging nearly 600,000 IOPS, sometimes getting over that.&#160; Now he enables RAM caching.&#160; Now he gets nearly 1,000,000 IOPS.&#160; CPU becomes his bottleneck <img decoding=\"async\" class=\"wlEmoticon wlEmoticon-smile\" style=\"border-top-style: none; border-left-style: none; border-bottom-style: none; border-right-style: none\" alt=\"Smile\" src=\"https:\/\/aidanfinn.com\/wp-content\/uploads\/2013\/06\/wlEmoticon-smile6.png\" \/>&#160;<\/p>\n<p>Nice timing: question on 32K IOs.&#160; That\u2019s the next demo <img decoding=\"async\" class=\"wlEmoticon wlEmoticon-smile\" style=\"border-top-style: none; border-left-style: none; border-bottom-style: none; border-right-style: none\" alt=\"Smile\" src=\"https:\/\/aidanfinn.com\/wp-content\/uploads\/2013\/06\/wlEmoticon-smile6.png\" \/>&#160; RDMA loves large IO.&#160; 500,000 IOPS, but now the throughput is 16.5 GIGABYTES (not Gbps) per second.&#160; That\u2019s 4 DVDs per second.&#160; No cheating: real usable data, going to a real file system, not 5Ks to raw disk as in some demo cheats.<\/p>\n<p>Back to Elden \u2026<\/p>\n<p><strong><u>Data Deduplication<\/u><\/strong><\/p>\n<p>Some enhancements:<\/p>\n<ul>\n<li>Dedup open VHD\/VHDX files.&#160; Not supported with data VHD\/VHDX.&#160; Works great for volumes that only store OS disks, e.g. VDI. <\/li>\n<li>Faster read\/write of optimized files \u2026 in fact, faster than CSV Block Cache!!!!! 
<\/li>\n<li>Support for SOFS with CSV <\/li>\n<\/ul>\n<p>The Dedup filter redirects read requests to the chunk store.&#160; Hyper-V does unbuffered IO that bypasses the cache.&#160; But Dedup does cache.&#160; So Hyper-V reads of deduped files are cached in RAM, and that\u2019s why dedupe can speed up the boot storm.<\/p>\n<p><strong><u>Demo: Dedup<\/u><\/strong><\/p>\n<p>A PM I don&#8217;t know takes the stage.&#160; This demo shows how Dedup optimizes the boot storm scenario.&#160; Starts up VMs \u2026 one collection is optimized and the other not.&#160; Has a tool to monitor boot up status.&#160; The deduped VMs start up more quickly.<\/p>\n<p><strong><u>Reduced Mean Time To Recovery<\/u><\/strong><\/p>\n<ul>\n<li>Mirrored spaces rebuild: parallelized recovery <\/li>\n<li>Increased throughput during rebuilds. <\/li>\n<\/ul>\n<p><strong><u>Storage Spaces<\/u><\/strong><\/p>\n<p>See yesterday\u2019s notes.&#160; They heat-map the data and automatically (don\u2019t listen to block storage salesman BS) promote hot data and demote cold data through the 2 tiers configured in the virtual disk (SSD and HDD in the storage space).<\/p>\n<p>Write-Back Cache: absorbs write spikes using the SSD tier.<\/p>\n<p>Brian Matthew takes the stage.<\/p>\n<p><strong><u>Demo: Storage Spaces<\/u><\/strong><\/p>\n<p>See notes from yesterday.<\/p>\n<p>Back to Elden \u2026<\/p>\n<p><strong><u>Summary<\/u><\/strong><\/p>\n<p align=\"center\"><a href=\"https:\/\/aidanfinn.com\/wp-content\/uploads\/2013\/06\/DSCN0087.jpg\"><img loading=\"lazy\" decoding=\"async\" title=\"DSCN0087\" style=\"border-top: 0px; border-right: 0px; background-image: none; border-bottom: 0px; padding-top: 0px; padding-left: 0px; border-left: 0px; display: inline; padding-right: 0px\" border=\"0\" alt=\"DSCN0087\" src=\"https:\/\/aidanfinn.com\/wp-content\/uploads\/2013\/06\/DSCN0087_thumb.jpg\" width=\"504\" height=\"379\" \/><\/a><\/p>\n<div id=\"scid:0767317B-992E-4b12-91E0-4F059A8CECA8:6520c0f8-4e4e-4a66-8284-e0595f49938d\" 
class=\"wlWriterSmartContent\" style=\"float: none; padding-bottom: 0px; padding-top: 0px; padding-left: 0px; margin: 0px; display: inline; padding-right: 0px\">Technorati Tags: <a href=\"http:\/\/technorati.com\/tags\/Event+Notes\" rel=\"tag\">Event Notes<\/a>,<a href=\"http:\/\/technorati.com\/tags\/Windows+Server+2012+R2\" rel=\"tag\">Windows Server 2012 R2<\/a>,<a href=\"http:\/\/technorati.com\/tags\/Hyper-V\" rel=\"tag\">Hyper-V<\/a>,<a href=\"http:\/\/technorati.com\/tags\/System+Center\" rel=\"tag\">System Center<\/a>,<a href=\"http:\/\/technorati.com\/tags\/VMM\" rel=\"tag\">VMM<\/a>,<a href=\"http:\/\/technorati.com\/tags\/Storage\" rel=\"tag\">Storage<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Speakers: Elden Christensen, Hector Linares, Jose Barreto, and Brian Matthew (last two are in the front row at least) 4:12 SSDs in 60 drive jbod. Elden kicks off. He owns Failover Clustering in Windows Server. New Approach To Storage File based storage: high performance SMB protocol for Hyper-V storage over Ethernet networks.&#160; In addition: the &hellip; <a href=\"https:\/\/aidanfinn.com\/?p=14863\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;TechEd NA 2013 &#8211; Software Defined Storage In Windows Server &#038; System Center 2012 
R2&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[14],"tags":[176,181,99,193,196,120],"class_list":["post-14863","post","type-post","status-publish","format-standard","hentry","category-eventnotes","tag-eventnotes","tag-hyper-v","tag-storage","tag-system-center","tag-vmm","tag-windows-server-2012-r2"],"aioseo_notices":[],"jetpack_featured_media_url":"","amp_enabled":true,"_links":{"self":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/posts\/14863","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=14863"}],"version-history":[{"count":0,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/posts\/14863\/revisions"}],"wp:attachment":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=14863"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=14863"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=14863"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}