{"id":10044,"date":"2009-11-26T12:33:11","date_gmt":"2009-11-26T12:33:11","guid":{"rendered":"https:\/\/aidanfinn.com\/index.php\/2009\/11\/w2008-r2-hyper-v-network-speed-comparisons\/"},"modified":"2009-11-26T12:33:11","modified_gmt":"2009-11-26T12:33:11","slug":"w2008-r2-hyper-v-network-speed-comparisons","status":"publish","type":"post","link":"https:\/\/aidanfinn.com\/?p=10044","title":{"rendered":"W2008 R2 Hyper-V Network Speed Comparisons"},"content":{"rendered":"<p><a href=\"http:\/\/hyper-v.nu\/blogs\/hans\/\" target=\"_blank\">Hans Vredevoort<\/a> asked what sort of network speed comparisons I was getting with Windows Server 2008 R2 Hyper-V.\u00a0 With W2008 R2 Hyper-V you get new features like Jumbo Frames and VMQ (Virtual Machine Queue) but these are reliant on hardware support.\u00a0 Hans is running HP G6 ProLiant servers so he has that support.\u00a0 Our current hardware is HP G5 ProLiant servers.\u00a0 I decided this was worth a test.<\/p>\n<p>I set up a test on our production systems.\u00a0 It\u2019s not a perfect test lab because there are VMs doing their normal workloads and things like continuous backup agents running.\u00a0 This means other factors that are beyond my control have played their part in the test.<\/p>\n<p>The hardware was a pair of HP BL460C \u201cG5\u201d blades in a C7000 enclosure with Ethernet Virtual Connects.\u00a0 The operating system was Windows Server 2008 R2.\u00a0 The 2 virtual machines were also running Windows Server 2008 R2.\u00a0 I set them up with just 512MB RAM and a single virtual CPU.\u00a0 Both VMs had 1 virtual NIC, both in the same VLAN.\u00a0 They had dynamic VHDs. 
The test task would be to copy the W2008 R2 ISO file from one machine to the other.\u00a0 The file is 2.79 GB (2,996,488,192 bytes) in size.<\/p>\n<p>There were four test scenarios.\u00a0 In each one I would copy the file 3 times to get an average time required.<\/p>\n<h4>Scenario 1: Virtual to Virtual on the Same Host<\/h4>\n<p>I copied the ISO from VM1 to VM2 while both VMs were running on host 1.\u00a0 After I ran this test I realised something.\u00a0 The first iteration took slightly longer than all other tests.\u00a0 The reason was simple enough \u2013 the dynamic VHD probably had to expand a bit.\u00a0 I took this into account and reran the test.<\/p>\n<p>With this test the data stream would never reach the physical Ethernet.\u00a0 All data would stay within the physical host.\u00a0 Traffic would route from the NIC in VM1 to the virtual switch via its VMBus and then on to the NIC in VM2 via its VMBus.<\/p>\n<p>The times (seconds) taken were 51, 55 and 50 with an average of <strong><span style=\"text-decoration: underline;\">52 seconds<\/span><\/strong>.<\/p>\n<h4>Scenario 2: Virtual to Virtual on Different Hosts<\/h4>\n<p>I used live migration to move VM2 to a second physical host in the cluster.\u00a0 This means that data from VM1 would leave the virtual NIC in VM1, traverse the VMBus, virtual switch and physical NIC in host 1, cross the Ethernet (HP C7000 backplane\/Virtual Connects), and then pass through the physical NIC and virtual switch in physical host 2 to reach the virtual NIC of VM2 via its VMBus.\u00a0<\/p>\n<p>I repeated the tests.\u00a0 The times (seconds) taken were 52, 54 and 66 with an average of <span style=\"text-decoration: underline;\"><strong>57.333 seconds<\/strong><\/span>.\u00a0 We appear to have added 5.333 seconds to the operation by introducing physical hardware transitions.<\/p>\n<h4>Scenario 3: Virtual to Virtual During Live Migration<\/h4>\n<p>With this test we would start with the scenario in the first set of tests.\u00a0 We would introduce Live 
Migration to move VM2 from physical host 1 to physical host 2 during the copy.\u00a0 This is why I used only 512MB RAM in the VMs; I wanted to be sure the live migration end-to-end task would complete during the file copy.\u00a0 The resulting scenario would have VM2 on physical host 2, matching the second test scenario.\u00a0 I wanted to see what impact Live Migration would have on getting from scenario 1 to scenario 2.<\/p>\n<p>The times (seconds) taken were 59, 59 and 61 with an average of <span style=\"text-decoration: underline;\"><strong>59.666 seconds<\/strong><\/span>.\u00a0 This is 7.666 seconds slower than scenario 1 and 2.333 seconds slower than scenario 2.<\/p>\n<p><em>Note that Live Migration is routed via a different physical NIC than the virtual switch.<\/em><\/p>\n<h4>Scenario 4: Physical to Physical<\/h4>\n<p>This time I would copy the ISO file from one parent partition to another, i.e. from host 1 to host 2 via the parent partition NIC.\u00a0 This removes the virtual NIC, virtual switch and the VMBus from the equation.<\/p>\n<p>The times (seconds) taken were 34, 28 and 27 with an average of <span style=\"text-decoration: underline;\"><strong>29.666 seconds<\/strong><\/span>.\u00a0 This makes the physical data transfer 22.334 seconds faster than the fastest of the virtual scenarios (scenario 1).<\/p>\n<h4>Comparison<\/h4>\n<table border=\"0\" cellspacing=\"0\" cellpadding=\"2\" width=\"465\">\n<tbody>\n<tr>\n<td width=\"236\" valign=\"top\"><strong>Scenario<\/strong><\/td>\n<td width=\"227\" valign=\"top\"><strong>Average Time Required (seconds)<\/strong><\/td>\n<\/tr>\n<tr>\n<td width=\"236\" valign=\"top\">Virtual to Virtual on Same Host<\/td>\n<td width=\"227\" valign=\"top\">\n<p align=\"center\">52<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td width=\"236\" valign=\"top\">Virtual to Virtual on Different Hosts<\/td>\n<td width=\"227\" valign=\"top\">\n<p align=\"center\">57.333<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td width=\"236\" 
valign=\"top\">Virtual to Virtual During Live Migration<\/td>\n<td width=\"227\" valign=\"top\">\n<p align=\"center\">59.666<\/p>\n<\/td>\n<\/tr>\n<tr>\n<td width=\"236\" valign=\"top\">Physical to Physical<\/td>\n<td width=\"227\" valign=\"top\">\n<p align=\"center\">29.666<\/p>\n<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h4>Disclaimer<\/h4>\n<p>As I mentioned, these tests were not done in lab conditions.\u00a0 The parent partition NICs had no traffic to deal with other than an OpsMgr agent.\u00a0 The Virtual Switch NICs had to deal with application, continuous backup, AV and OpsMgr agent traffic.<\/p>\n<p>It should also be noted that this should not be a comment on the new features of Windows Server 2008 R2 Hyper-V.\u00a0 Using HP G5 hardware I cannot avail of the new hardware offloading improvements such as VMQ and Jumbo Frames.\u00a0 I guess I have to wait until our next host purchase to see some of that in play!<\/p>\n<p>This is just a test of how things compare on the hardware that I have in a production situation.\u00a0 I\u2019m actually pretty happy with it and I\u2019ll be happier when we can add some G6 hardware.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hans Vredevoort asked what sort of network speed comparisons I was getting with Windows Server 2008 R2 Hyper-V.\u00a0 With W2008 R2 Hyper-V you get new features like Jumbo Frames and VMQ (Virtual Machine Queue) but these are reliant on hardware support.\u00a0 Hans is running HP G6 ProLiant servers so he has that support.\u00a0 Our current &hellip; <a href=\"https:\/\/aidanfinn.com\/?p=10044\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;W2008 R2 Hyper-V Network Speed 
Comparisons&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"_uf_show_specific_survey":0,"_uf_disable_surveys":false,"footnotes":""},"categories":[20],"tags":[64,181,117],"class_list":["post-10044","post","type-post","status-publish","format-standard","hentry","category-hyper-v","tag-hp","tag-hyper-v","tag-windows-server-2008-r2"],"aioseo_notices":[],"jetpack_featured_media_url":"","amp_enabled":true,"_links":{"self":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/posts\/10044","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=10044"}],"version-history":[{"count":0,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=\/wp\/v2\/posts\/10044\/revisions"}],"wp:attachment":[{"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=10044"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=10044"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aidanfinn.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=10044"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}