{"id":12115,"date":"2012-03-03T20:20:28","date_gmt":"2012-03-03T20:20:28","guid":{"rendered":"https:\/\/aidanfinn.com\/?p=12115"},"modified":"2012-03-03T20:20:28","modified_gmt":"2012-03-03T20:20:28","slug":"windows-server-8-hyper-v-virtual-fibre-channel","status":"publish","type":"post","link":"https:\/\/aidanfinn.com\/?p=12115","title":{"rendered":"Windows Server 2012 Hyper-V Virtual Fibre Channel"},"content":{"rendered":"<p>You now have the ability to <a href=\"http:\/\/technet.microsoft.com\/en-us\/library\/hh831413.aspx\" target=\"_blank\">virtualise a fibre channel adapter in WS2012 Hyper-V<\/a>.\u00a0 This synthetic fibre channel adapter allows a virtual machine to connect directly to a LUN in a fibre channel SAN.<\/p>\n<p><strong><span style=\"text-decoration: underline;\">Benefits<\/span><\/strong><\/p>\n<p>It is one thing to make a virtual machine highly available.\u00a0 That protects it against hardware failure or host maintenance.\u00a0 But what about the operating system or software in the VM?\u00a0 What if they fail or require patching\/upgrades?\u00a0 With a guest cluster, you can move the application workload to another VM.\u00a0 This requires connectivity to shared storage.\u00a0 Windows 2008 R2 clusters, for example, require SAS, fibre channel, or iSCSI attached shared storage.\u00a0 SAS is not an option for connecting VMs to shared storage.\u00a0 iSCSI consumers were OK.\u00a0 But those who made the huge investment in fibre channel were left in the cold, sometimes having to implement an iSCSI gateway to their FC storage.\u00a0 Wouldn\u2019t it be nice to allow them to use their FC HBAs in the host to create guest clusters?<\/p>\n<p>Another example is where we want to provision really large LUNs to a VM.\u00a0 As I posted a little while ago, VHDX expands out to 64 TB, so you would really need LUNs beyond 64 TB to justify providing physical LUNs to a VM and limiting its mobility.\u00a0 But I guess with the expanded scalability of 
VMs, big workloads like OLTP can be virtualised on Windows Server 2012 Hyper-V and they require <em>big<\/em> disk.<\/p>\n<p><strong><span style=\"text-decoration: underline;\">What It Is<\/span><\/strong><\/p>\n<p>Virtual Fibre Channel allows you to virtualise the HBA in a Windows Server 2012 Hyper-V host, have a virtual fibre channel adapter in the VM with its own WWN (actually, 2 to be precise) and connect the VM directly to LUNs in a FC SAN.<\/p>\n<p>Windows Server 2012 Hyper-V Virtual Fibre Channel is not intended or supported for boot from SAN.<\/p>\n<p>The VM will share bandwidth on the host\u2019s HBA, unless you spend extra on additional HBAs, and cross the SAN to connect to the controllers in the FC storage solution.<\/p>\n<p>The SAN must support NPIV (<a href=\"http:\/\/en.wikipedia.org\/wiki\/NPIV\" target=\"_blank\">N_Port ID Virtualization<\/a>).\u00a0 Each VM can have up to 4 virtual HBAs.\u00a0 Each HBA has its own identification on the SAN.<\/p>\n<p><strong><span style=\"text-decoration: underline;\">How It Works<\/span><\/strong><\/p>\n<p>You create a virtual SAN on the host (parent partition) for each HBA on the host that will be virtualised for VM connectivity to the SAN.\u00a0 This is a 1-1 binding between virtual SAN and physical HBA, similar to the old model of virtual network and physical NIC.\u00a0 You then create virtual HBAs in your VMs and connect them to virtual SANs.<\/p>\n<p>And that\u2019s where things can get interesting.\u00a0 When you get into the FC world, you want fault tolerance with MPIO.\u00a0 A mistake people will make is to create two virtual HBAs and put them both on the same virtual SAN, and therefore on a single FC path on a single HBA.\u00a0 If that single cable breaks, or that physical HBA port fails, then the VM has pointless MPIO because both virtual HBAs are on the same physical connection.<\/p>\n<p>The correct approach for fault tolerance is:<\/p>\n<ol>\n<li>2 or more HBA connections in the 
host<\/li>\n<li>1 virtual SAN for each HBA connection in the host.<\/li>\n<li>2 or more virtual HBAs in each VM, each connected to a different virtual SAN.<\/li>\n<li>MPIO configured in the VM\u2019s guest OS.\u00a0 In fact, you can (and should) use your storage vendor\u2019s MPIO\/DSM software in the VM\u2019s guest OS.<\/li>\n<\/ol>\n<p>Now you have true SAN path fault tolerance at the physical, host, and virtual levels.<\/p>\n<p><strong><span style=\"text-decoration: underline;\">Live Migration<\/span><\/strong><\/p>\n<p>One of the key themes of Hyper-V is \u201cno new features that prevent Live Migration\u201d.\u00a0 So how does a VM that is connected to a FC SAN move from one host to another without breaking the IO stream from VM to storage?<\/p>\n<p>There\u2019s a little bit of trickery involved here.\u00a0 Each virtual HBA in your VM must have <em>2<\/em> WWNs (either automatically created or manually defined), not just one.\u00a0 And here\u2019s why.\u00a0 There is a very brief period where a VM exists on two hosts during live migration.\u00a0 It is running on HostA and waiting to start on HostB.\u00a0 The switchover process is that the VM is paused on A and started on B.\u00a0 With FC, we need to ensure that the VM is able to connect and process IO.<\/p>\n<p>In the example below, the VM is connecting to storage using WWN A.\u00a0 During Live Migration the new instance of the VM on the destination host is set up with WWN B.\u00a0 When the VM un-pauses on the destination host, it can instantly connect to the LUN and continue IO uninterrupted.\u00a0 Each subsequent Live Migration, either to the original host or any other host, causes the VM to alternate between WWN A and WWN B.\u00a0 That holds true for each virtual HBA in the VM.\u00a0 You can have up to 64 hosts in your Hyper-V cluster, but each virtual fibre channel adapter will alternate between just 2 WWNs.<\/p>\n<p><img decoding=\"async\" src=\"http:\/\/i.technet.microsoft.com\/dynimg\/IC564260.jpg\" 
alt=\"Alternating WWN addresses during a live migration\" \/><\/p>\n<p>What you need to take from this is that each VM\u2019s LUNs must be masked or zoned for both WWNs of every virtual HBA in that VM.<\/p>\n<p><strong><span style=\"text-decoration: underline;\">Technical Requirements and Limits<\/span><\/strong><\/p>\n<p>First and foremost, you must have a FC SAN that supports NPIV.\u00a0 Your host must run Windows Server 2012.\u00a0 The host must have a FC HBA with a driver that supports Hyper-V and NPIV.\u00a0 You cannot use virtual fibre channel adapters to boot VMs from the SAN; they are for data LUNs only.\u00a0 The only supported guest operating systems for virtual fibre channel at this point are Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012.<\/p>\n<p>This is a list of the HBAs that have support built into the Windows Server 2012 Beta:<\/p>\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"0\">\n<tbody>\n<tr>\n<td width=\"60\" valign=\"top\"><strong><span style=\"text-decoration: underline;\">Vendor<\/span><\/strong><\/td>\n<td width=\"175\" valign=\"top\"><strong><span style=\"text-decoration: underline;\">Model<\/span><\/strong><\/td>\n<\/tr>\n<tr>\n<td width=\"60\" valign=\"top\">Brocade<\/td>\n<td width=\"175\" valign=\"top\">BR415 \/ BR815<\/td>\n<\/tr>\n<tr>\n<td width=\"60\" valign=\"top\">Brocade<\/td>\n<td width=\"175\" valign=\"top\">BR425 \/ BR825<\/td>\n<\/tr>\n<tr>\n<td width=\"60\" valign=\"top\">Brocade<\/td>\n<td width=\"175\" valign=\"top\">BR804<\/td>\n<\/tr>\n<tr>\n<td width=\"60\" valign=\"top\">Brocade<\/td>\n<td width=\"175\" valign=\"top\">BR1860-1p \/ BR1860-2p<\/td>\n<\/tr>\n<tr>\n<td width=\"60\" valign=\"top\">Emulex<\/td>\n<td width=\"175\" valign=\"top\">LPe16000 \/ LPe16002<\/td>\n<\/tr>\n<tr>\n<td width=\"60\" valign=\"top\">Emulex<\/td>\n<td width=\"175\" valign=\"top\">LPe12000 \/ LPe12002 \/ LPe12004 \/ LPe1250<\/td>\n<\/tr>\n<tr>\n<td width=\"60\" valign=\"top\">Emulex<\/td>\n<td width=\"175\" 
valign=\"top\">LPe11000 \/ LPe11002 \/ LPe11004 \/ LPe1150 \/ LPe111<\/td>\n<\/tr>\n<tr>\n<td width=\"60\" valign=\"top\">QLogic<\/td>\n<td width=\"175\" valign=\"top\">Qxx25xx Fibre Channel HBAs<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p><strong><span style=\"text-decoration: underline;\">Summary<\/span><\/strong><\/p>\n<p>With supported hardware, Virtual Fibre Channel allows supported Windows Server 2012 Hyper-V guests to connect directly to fibre channel SAN LUNs for data, enabling extremely scalable storage and in-guest clustering without compromising the uptime and mobility of Live Migration.<\/p>\n<div id=\"scid:0767317B-992E-4b12-91E0-4F059A8CECA8:ad0730fa-26b4-455a-83ce-029559a3b451\" class=\"wlWriterEditableSmartContent\" style=\"margin: 0px; display: inline; float: none; padding: 0px;\">Technorati Tags: <a rel=\"tag\" href=\"http:\/\/technorati.com\/tags\/Windows+Server+2012\">Windows Server 2012<\/a>,<a rel=\"tag\" href=\"http:\/\/technorati.com\/tags\/Hyper-V\">Hyper-V<\/a>,<a rel=\"tag\" href=\"http:\/\/technorati.com\/tags\/Virtualisation\">Virtualisation<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>You now have the ability to virtualise a fibre channel adapter in WS2012 Hyper-V.\u00a0 This synthetic fibre channel adapter allows a virtual machine to directly connect to a LUN in a fibre channel SAN. 
Benefits It is one thing to make a virtual machine highly available.\u00a0 That protects it against hardware failure or host maintenance.\u00a0 &hellip; <a href=\"https:\/\/aidanfinn.com\/?p=12115\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Windows Server 2012 Hyper-V Virtual Fibre Channel&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[20],"tags":[181,195,118],"class_list":["post-12115","post","type-post","status-publish","format-standard","hentry","category-hyper-v","tag-hyper-v","tag-virtualisation","tag-windows-server-2012"],"aioseo_notices":[],"jetpack_featured_media_url":"","amp_enabled":true,"_links":{"self":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/posts\/12115","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=12115"}],"version-history":[{"count":0,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/posts\/12115\/revisions"}],"wp:attachment":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=12115"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=12115"},{"taxonomy":"post_tag","embeddable":true,"hre
f":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=12115"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}