WIM2VHD has been around for quite a while now, but I don’t think many people realised what it could offer. Mikael Nystrom (server deployment MVP) has blogged a reminder. You can use WIM2VHD to quickly create a VHD from a WIM file, e.g. the install.wim file on the Vista/Windows 7/Server 2008/Server 2008 R2 installation media, and then attach that VHD to a Hyper-V virtual machine. If you don’t have a set of library images (VMM), this is a quicker way to build a set of lab machines than doing an installation via WDS, MDT, sneakernet, etc. I’ve been guilty of not doing this … reminder to self: use WIM2VHD in the future when I need to build a lab template. Mikael has the notes you’ll need to do the job in his blog post.
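For reference, a typical run looks something like this. This is a sketch from memory of the tool’s documentation, so treat the flags and paths as assumptions and check Mikael’s post (or the WIM2VHD download page) for the exact syntax:

cscript wim2vhd.wsf /wim:D:\sources\install.wim /sku:SERVERENTERPRISE /vhd:D:\VHDs\Lab-Template.vhd

The /wim parameter points at the source image and /sku picks the edition inside it; the resulting VHD can then be attached to a new Hyper-V virtual machine.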
KB2264080: Hyper-V Rollup Update
Some eagle-eyed MVPs reported that Microsoft has issued a rollup update for Windows Server 2008 R2 Hyper-V. Microsoft recommends installing the rollup update to avoid the issues described below. You’ll note that the affected CPUs in issues 1 and 3 are Intel CPUs. The update rolls up 3 updates into one installer:
Issue 1 – KB975530
When a computer has one or more Intel CPUs code-named Nehalem installed, you receive the following Stop error message:
0x00000101 (parameter1, 0000000000000000, parameter3, 000000000000000c) CLOCK_WATCHDOG_TIMEOUT
Note: The Nehalem CPU for a server is from the Intel Xeon 5500 processor series, and for a client computer it is from the Intel Core i processor series.
Issue 2 – KB974909
Consider the following scenario:
- You run a virtual machine (VM) on the computer.
- You use a network adapter on the VM to access a network.
- You establish many concurrent network connections. Or, there is heavy outgoing network traffic.
In this scenario, the network connection on the VM may be lost. Additionally, the network adapter is disabled.
Note: You have to restart the VM to recover from this issue.
Issue 3 – KB981791
When a computer has an Intel Westmere processor, you receive an error message that resembles the following:
STOP: 0x0000001a (Parameter1, Parameter2, Parameter3, Parameter4) MEMORY_MANAGEMENT
Credit: Artem Pronichkin, MVP
CAO Website Hit by DDOS Attack Yesterday
Yesterday I talked briefly about the college course application process. This is managed by a government organisation called the CAO. Students can find out about their college course offers via a website, or later via the post.
The website in question was a victim of a DDOS attack yesterday, the day the announcements were posted online.
A DDOS (distributed denial of service) attack is a coordinated attack that makes use of compromised PCs from around the world. These PCs are infected with trojan downloaders. A DDOS client is downloaded and installed. The DDOS client receives instructions from an IRC channel or a website on a regular basis. The entire architecture is referred to as a botnet. There are many such botnets in the world, some containing a few hundred machines, some a few thousand, some with hundreds of thousands of DDOS clients, and it’s rumoured that there are some with millions of machines under their control.
The owners of these botnets will sell their services, or even access to parts of the botnet. The botnets can be easy to use; there are even online videos to train you in the use of a simple GUI command console.
Together, even a few hundred bots (or DDOS clients) can fire an amazing amount of traffic at a web server or online presence. These requests can be valid, or they can be simple TCP handshakes that the client never completes (a SYN attack). The recipient server or intermediary network appliances can be overwhelmed. A TCP connection table can be filled, a CPU can be driven to 100% utilisation, or a network connection can be saturated.
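If you want to see those mechanics for yourself, here’s a minimal PowerShell sketch of my own (an illustration, not a defence) that counts the half-open connections on a Windows web server; the threshold of 200 is a made-up number you’d tune against your own baseline:

# Count connections stuck in the half-open (SYN_RECEIVED) state.
$halfOpen = (netstat -n | Select-String "SYN_RECEIVED").Count
if ($halfOpen -gt 200) {
    # A sudden spike here is the signature of a SYN attack filling the connection table.
    Write-Host "$halfOpen half-open connections - possible SYN attack in progress"
}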
The motive for an attack can be varied. Sometimes it is a practice run: an attacker will go after a small target to verify the system works before hitting a bigger target. It can be a case of blackmail: an email will be received by the victim soon after the attack starts, demanding payment to cease the attack. Sometimes it is a case of someone getting their jollies for bragging rights, e.g. “I took down XYZ!!!” on some blackhat forum. It can even be a case of corporate espionage (this does happen!). And it can be political: Al Jazeera was allegedly hit not long after the start of the George W. Bush & Tony Blair Iraq war. There is talk of Georgia being hit during their troubles with Russia a few years ago.
A past customer of mine was once hit. They were a small business. It started on a Sunday with a SYN attack. The web servers couldn’t deal with it. We configured the network appliances to deal with it by reducing the TCP handshake timeout. All was well for a few hours. Then the attacker simply increased the size of the attack. The network appliances were overwhelmed and we had to implement filters to block all attempts to reach the web servers.
This attack went after the URL of the website in question. Changing the IP address of the server would make no difference (and it didn’t – the customer demanded it was done). Changing the location of the server would make no difference. Distributing the website across servers in many locations might have worked for a while … until the DDOS attack grew in size once again. The customer thought about buying a dedicated DDOS prevention appliance. Nice idea, but:
- They are not perfect. They have false positives (blocking legitimate connections and losing online customers) and they also allow a certain amount of attack traffic through.
- The appliance will start out by handling the attack. This requires network, memory, and CPU resources. The attacker can simply grow the attack with a few mouse clicks and the spend of a few euros or rubles. This will cause one of those resources to become a bottleneck and the website is offline once again.
These _very_ expensive appliances cannot grow to match the capabilities of a DDOS attack at the same pace or even the same price.
What hope is there? Only the most serious of attacks will last more than 3 days. I know, 3 days is an eternity in the online world. There are certain *ahem* professionals out there who can trace the botnet coordinator and DDOS it. That will terminate an attack. You can pay the ransom … but that means the attacker knows you are desperate enough to pay. Pay once and you might pay again and again. You can call the authorities, but that might do little for you. If the botnet is rented or it’s a relatively small attack then it will probably end after 3 days, because that appears to be the normal period to rent a botnet. That’s what I was told by a security expert when my old customer was hit. Sure enough, the attack ended after 3 days.
The only real defence I can see is an IDS (intrusion detection system) that is hosted and maintained by your ISP. This has to be a massive system. The bad news is that gaining access to these systems is very expensive. The configuration is a pain for the admins. Some schemes will initiate the IDS for your IP addresses when you inform the ISP of an attack, taking a short while for the defence to kick in. Some are online all of the time but you risk false positives with legitimate traffic being filtered.
What about the CAO? A consultant quoted in the article said:
“This is something every website is vulnerable to. There is not really anything they can do short of spending huge sums of money on extra servers in differing places around Ireland,”
The computer says “no!”. Sorry, but if an attack is hitting a URL then it doesn’t matter where you move the site to or how you load balance it. Eventually the DNS record TTL will expire and the attack will commence on the new location. Load balancing just scales out your system, and a DDOS will scale out much quicker and more economically than you can. The attackers aren’t idiots. Even if you do successfully come up with alternative URLs, they can update their attack instructions in seconds.
He said hackers usually go after more high-profile sites such as Amazon or eBay.
The computer says “no!”. The Irish media reported that there was a spate of attacks on small Irish businesses earlier this year. They were ransom attacks, i.e. “we’ll stop the attack if you pay us”. The Irish police and an associated research unit confirmed the story. We don’t hear about these attacks because companies are embarrassed. They see them as a breach of security (they aren’t). We only hear about these attacks when they are visible, i.e. big attacks that might take down a Twitter, an Amazon, or the CAO.
Unfortunately, DDOS is a result of the fairly trusting nature of the basics of Internet technology. Firewalls, IDS appliances, and all that stuff can only do so much. You can do your bit to reduce the risk by ensuring that your computers are up to date with patches every month. This vastly reduces the risk of being infected with a trojan downloader.
Remove a Missing Host From VMM
My current Hyper-V lab includes a VMM server and a Hyper-V host. The VMM server used to manage another Hyper-V host which no longer exists. It appears as offline in the VMM console, which is a bit painful because I’ll be doing some demos with this machine in the coming months. I’d like it to be a bit cleaner. When I try to remove the missing host, I am asked for credentials and then I get this failure:
Error (406)
Access has been denied while contacting the server <servername>.
Recommended Action
1. Verify that the specified user account has administrative privileges on <servername>.
2. Verify that DCOM access, launch, and activation permissions are enabled on <servername> for the Administrators group. Use dcomcnfg.exe to modify permissions, and then try the operation again.
It seems like VMM wants to verify the credentials with the missing machine before removing it. Catch-22! You can use PowerShell to overcome this. You’ll start out by running PowerShell from the VMM program group in the Start menu (or by adding the VMM snap-in into a normal PowerShell window or script). Then you connect to the VMM server:
Get-VMMServer <Vmm_Server_Name>
You can forcibly remove the missing host server by running:
Remove-VMHost <Host_Server_Name> -Force
Wait a few seconds and you’ll have a nice and clean VMM console once again.
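For completeness, here’s the whole thing as one minimal sketch; the server names are made up, and the snap-in line is only needed if you’re in a plain PowerShell window rather than the VMM shell:

# Load the VMM cmdlets (skip this if you launched PowerShell from the VMM program group).
Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
# Connect to the VMM server, then forcibly remove the dead host.
Get-VMMServer "vmm01.demo.local"
Remove-VMHost "host02.demo.local" -Force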
Surge in Points for College Science Course Placement
It’s being reported this morning that the points requirements for entry to science courses, including computing, have risen this year. This is a reversal of a recent trend.
Kids who are finishing secondary (high) school sit a series (usually 7) of government-run and scored exams. The better their grade, the more points each exam gives. Their top 6 scores are summed and the total is used to compare them with other applicants to courses. In the spring, they submit a form to the CAO with an ordering of their preferred course/college placements. The top X students for each course in each college are sent offers in the first round (like an NFL draft). More popular subjects tend to see an increase each year. For quite some time, construction and business courses saw huge spikes. Computer science was popular and increasing in the 90s, but the IT recession of 2000-2002 put an end to that.
The construction crash that started in 2007 (before the world recession) saw huge increases in unemployment in the construction sector. The banks had overplayed their hand and given out money to construction firms, sometimes without even doing the paperwork, and those firms collapsed. The increased cost of interbank trading in 2008 made things worse. The result: construction is dead in Ireland (with an oversupply of office space and housing to keep us going to the middle of the century) and the finance business is laying off staff every day. Obviously the desired careers of kids entering the college system will follow (our colleges/universities are career-choice driven rather than some journey of self-exploration as seen on American TV).
My advice to kids doing IT courses: focus on software development and project management. Business Intelligence might not sound interesting, but it is a hot topic, even during a recession … maybe even more so in a recession. Development pays better and there is way more work in it than in IT infrastructure work. There has been a shortage of BI developers for some time and that continues. There is a niche industry in games development: Tipp IT seems to be the place to go for that, based on what I hear on the grapevine (an XNA MVP is teaching down there). If IT infrastructure is something that interests you, then find out where you can get your hands on the MSDN kit that your college has rights to distribute. Some colleges provide hands-on labs and exam centres. Read books such as MS Press exam prep guides. I promise you that the college course material will not prepare you for work as an IT infrastructure person (my networking lecturer was possibly a pot head who was stuck in the 1960s and thought it was important that we learn about token ring – in the mid 1990s).
Oracle’s Virtualization Package Goes Together Like Bourbon Creams and Baked Beans
Another story from The Register: Oracle is claiming that it has the best unified enterprise server virtualisation solution on the market. It comprises:
- Oracle VM for x64 (Oracle’s Xen)
- Oracle VM for Sparc (Sun’s LDoms)
- OpsCenter (from Sun)
- Oracle Enterprise Manager
That’s a lot of stuff that’s thrown together. If I want a unified virtualization solution that is part of a greater systems management solution (flogging a dead horse here) then I go:
- Hyper-V
- System Center (VMM, OpsMgr, and maybe DPM and/or ConfigMgr).
That’s one solution that gives me virtualization (for servers and desktops [via XenDesktop]) and enterprise management for the entire IT infrastructure and applications.
Oracle also pushed the Sun (purple) blade package. Hmm, I think not! I’ve seen how much purple hardware costs. I used to be able to buy several fully kitted servers for the price of a single 4GB stick of reconditioned purple RAM. I giggled a bit when the Oracle marketing pitch made it sound like 10Gbps networking was something that only they could do.
One big gotcha: if you run Oracle s/w then you need to know that it is not supported on a non-Oracle virtualisation platform. That means no running of Oracle software on Amazon EC2, on Hyper-V, on Citrix XenServer, or on VMware. But there are stories out there where Oracle customers have threatened to switch to the MS stack and have gotten bespoke support for running the s/w on non-Oracle virtualisation.
Hyper-V Cluster with Different Capacity Hosts
Last week I was asked how you would introduce new, bigger Hyper-V hosts to a cluster. For example, there was a time (not long ago) when the sweet spot for RAM in a host was 32GB. You might have a number of these hosts in a cluster. For example, a cluster with 8 or fewer hosts would have 1 redundant node with 32GB RAM. If one host fails, then the redundant host can take up the slack.
In reality, virtual machines will be running across all of the hosts in a load balanced environment. You will have the equivalent of 1 host in redundant capacity.
A cluster with 9-16 nodes will probably have 2 redundant hosts (or equivalent capacity).
Say I have 5 hosts with 32GB RAM, with 1 of those being redundant. Now I can purchase hosts with 64GB of RAM at a decent price, because servers have many more memory slots and I don’t need to buy the exponentially more expensive 8GB or 16GB memory boards. Can I buy just one of those servers and add it to the existing cluster of 32GB RAM hosts? Sure you can. But you will have trouble when you add more VMs to it than you could add to a 32GB RAM host.
Let’s put it this way: Say I have 2 * 1 gallon buckets. 1 is full of water. I can put .5 gallon in each bucket. I can pour .5 gallon from one bucket to another to wash it or repair the original bucket. I always have 1 gallon of water … clustered between my 2 * 1 gallon buckets. Now I want to carry much more water and I buy a 2 gallon bucket to add to my collection of buckets. I have 1 * 1 gallon bucket that is full. I have 1 * 2 gallon bucket that is full. I have 1 empty 1 gallon bucket as a spare. But it can only be a spare for the other 1 gallon bucket. I will have to throw away 1 gallon if I need to wash or repair the 2 gallon bucket and pour its contents into the spare 1 gallon bucket.
The same goes for Hyper-V hosts (purely on RAM capacity). A 32GB RAM host cannot offer full redundancy for a 64GB RAM host. Half of the virtual machines can migrate and stay running/reboot, but the other half will not be able to start.
Here’s what you can do in a growth scenario:
- Add a single 64GB RAM host. You don’t need to add more hosts while it hosts no more than the capacity of a 32GB RAM host – probably around 28GB of committed virtual machine RAM (I say committed because Dynamic Memory makes things a little more complicated).
- Once the 64GB RAM host exceeds the capacity of a 32GB RAM host, you will need to add a second 64GB RAM host. This will provide you with the capacity for providing redundancy for all running virtual machines on the original 64GB RAM host.
I haven’t got the h/w to test this out but I suspect that VMM will scream at you about loss of redundancy in the cluster if you don’t do this right.
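If you want to sanity check a design, the maths is simple enough to sketch in PowerShell. This is my own back-of-the-envelope illustration with made-up host names and numbers, and it deliberately ignores the fact that each individual VM must also fit within a single surviving host’s free RAM:

# Physical RAM (GB) and committed VM RAM (GB) per host; all figures invented.
$ram       = @{ host1 = 32; host2 = 32; host3 = 32; host4 = 32; host5 = 64 }
$committed = @{ host1 = 28; host2 = 28; host3 = 28; host4 = 0;  host5 = 48 }
foreach ($h in $ram.Keys) {
    # Spare capacity on every other host combined.
    $spare = ($ram.Keys | Where-Object { $_ -ne $h } |
        ForEach-Object { $ram[$_] - $committed[$_] } |
        Measure-Object -Sum).Sum
    $verdict = if ($committed[$h] -le $spare) { "can fail over" } else { "CANNOT fully fail over" }
    "{0}: {1}GB committed vs {2}GB spare elsewhere - {3}" -f $h, $committed[$h], $spare, $verdict
}
# host5 is the problem child: 48GB committed but only 44GB spare elsewhere,
# so some of its VMs would not restart - exactly the bucket problem above.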
Google Exec Questions The Cost of Utility Cloud Computing
I just read a story on The Register where a Google executive says that it makes no financial sense to use utility-based cloud computing services such as Amazon or Azure. I’ve been saying this for 2 years. I’m out of the cloud business and have no vested interest any more, and I still say this.
Unpredictable Costs
Let’s forget the certain people out there who claim that heaven resides in an Amazon data centre and that the costs are predictable. The truth is that the only thing predictable about your end-of-month bill is that you’ll have no idea what it will be.
Here’s Amazon EC2 pricing (Ireland data centre):
Standard On-Demand Instances | Linux/UNIX Usage | Windows Usage
Small (Default) | $0.095 per hour | $0.12 per hour
Large | $0.38 per hour | $0.48 per hour
Extra Large | $0.76 per hour | $0.96 per hour
High-Memory On-Demand Instances | |
Extra Large | $0.57 per hour | $0.62 per hour
Double Extra Large | $1.34 per hour | $1.44 per hour
Quadruple Extra Large | $2.68 per hour | $2.88 per hour
High-CPU On-Demand Instances | |
Medium | $0.19 per hour | $0.29 per hour
Extra Large | $0.76 per hour | $1.16 per hour
OK … not so bad so far. You can tell how many hours there are in a month and can budget for that. Don’t be fooled. That’s just the start of it.
Here’s the bandwidth-out costs (starting in November 2010):
Data Transfer Out | US & EU Regions
First 1 GB per Month | $0.00 per GB
Up to 10 TB per Month | $0.15 per GB
Next 40 TB per Month | $0.11 per GB
Next 100 TB per Month | $0.09 per GB
Over 150 TB per Month | $0.08 per GB
Do you know how much bandwidth will be going out from your web/application servers to the net? We’re getting into finger-in-the-air territory now.
More charges exist for licensing, storage, etc.
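To make it concrete, here’s a back-of-the-envelope sketch using the rates above. The usage figures (an always-on Small Windows instance and 2.5TB of outbound traffic) are my own invented assumptions, and licensing and storage charges are ignored entirely:

# Compute: predictable. Bandwidth: whatever your visitors decide it is.
$compute   = 0.12 * 730                  # Small Windows instance, ~730 hours/month
$gbOut     = 2560                        # 2.5TB out this month (a guess)
$bandwidth = ($gbOut - 1) * 0.15         # first GB free, rest falls in the 10TB tier
"Compute:   {0:C2}" -f $compute          # roughly $87.60
"Bandwidth: {0:C2}" -f $bandwidth        # roughly $383.85
"Total:     {0:C2}" -f ($compute + $bandwidth)

The compute line is the easy part of the budget; the bandwidth line moves with every visitor, and that’s where the end-of-month surprise lives.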
Azure pricing is more nuts. It makes choosing a mobile phone plan look easy.
I dare you to give me a guaranteed price for a web server and database server on either platform on a monthly basis, that will be good for a year, and then be able to stand over that prediction.
Management Isn’t Reduced That Much
One of the ideas of cloud computing is to get rid of all the management. I’ve got some news for you: you might not have physical machines to look after any more, but on Amazon EC2 you still have virtual machines running Linux or Windows to look after. Amazon do not look after your operating systems or applications. You have to do it. You have to lock them down. You have to patch them. You have to maintain them. I’m betting there’s no OS upgrade option either!
Supplier Lock-In
I’m looking squarely at Platform-as-a-Service (PaaS) here, including Azure. Say you get ticked off with the unpredictable pricing of MS Azure. Or maybe you want to move to another hoster. How do you do that right now? You can’t just take your application and data and put them on a virtual machine in another hosting company because the thing is completely tied into Azure. That’s only going to increase with the rumoured Orleans cloud development model that is on the way.
At least with Amazon, you develop on a familiar Windows/LAMP stack and can port your application and data to another Windows/LAMP machine with another hoster. OVF might even open up the possibility of being able to just move a VM.
The answer from MS might be the Azure appliance. I’m not buying into that. That’s going to be a very limited product and will limit your purchasing options.
What To Do?
My advice: if you are putting together a production system that will be used 100% of the time, go with a platform that is available from many vendors and that has a fixed monthly cost based on hard and predictable sizing. Services such as Amazon EC2 and MS Azure are great for temporary solutions, e.g. you need temporary web or computing capacity, but their unpredictable charges make them a financial nightmare.
Project Kensho OVF
One of the reasons I love virtual machines is that they are mobile. Most of them (except RDM and passthrough disks) are just files, making them easy to migrate, copy, export, and import. But this is limited to the same virtualisation platform. Changing virtualisation platforms requires a tricky V2V process that vendors have made one-way.
Citrix has unveiled a solution with the codename of Project Kensho. It leverages the Open Virtualisation Format (OVF) standard (developed by Citrix, VMware, Dell, HP, IBM and Microsoft) to allow the movement of virtual machines from one virtualisation platform to another. You can think of OVF as playing the same role as XML in business integration solutions: it’s a stepping stone.
Citrix expects to ship the solution in September.
What does this mean to you? OVF gives us a standard way to V2V virtual machines between many virtualisation platforms, depending on the support offered by those platforms for OVF.
According to Wikipedia:
“VirtualBox supports OVF since version 2.2.0 (April 2009). AbiCloud Cloud Computing Platform Appliance Manager component now supports OVF since version 0.7.0 (June 2009). Red Hat Enterprise Virtualization supports OVF since version 2.2. VMware supports OVF in ESX 3.5, vSphere 4, Workstation 6.5, and other products using their OVF tool. OVF version 1.1 is supported in Workstation 7.1 and VMware Player 3.1 (May 2010). IBM supports OVF format VM images on the POWER server platform for the AIX operating system and Linux, and for Linux on z/VM on the mainframe using IBM Systems Director (via the VMControl Enterprise Edition plug-in, a cross-platform VM manager)”.
MS executives have confirmed in the past that VMM v.Next will include added support for XenServer management. We know MS and Citrix are very tight. MS staff recommend XenDesktop as a VDI solution, and Citrix are currently recommending Hyper-V for virtualisation. It won’t surprise me to see OVF turning up in Hyper-V v.Next and VMM v.Next. This would offer huge flexibility:
- Private cloud made up of many platforms (as found in medium/large organisations)
- Switching seamlessly between public cloud (would require some form of broker application – there’s a startup opportunity!)
- Migrating VMs seamlessly between any virtualisation platform in public/private clouds, e.g. develop in house on Hyper-V with VS 2010 Lab Management and upload the final VM via OVF to the cloud service provider of choice, no matter what virtualisation solution they use.
It sounds like Nirvana! I’m sure that there will be niggling things that will cause problems:
- Licensing: moving a VM with an MSDN license key up to a cloud environment that requires SPLA provided by the hoster will be a mess.
- Technical: Build a VM with 8 vCPUs on VMware and migrate it to Hyper-V and you’ll lose 4 vCPUs.
- Technical: VM additions or integration components are virtualisation-platform specific. Something will need to be done to be able to add/remove them seamlessly.
It’s going to take a while, and it might even be impossible for business reasons, to get to an automated, seamless solution. But OVF will give us something where, with a tiny amount of admin work (product key and addition removal), we will have a format to make virtual machines even more mobile.
Visual Studio 2010 Lab Management Update
You won’t find me talking about Visual Studio 2010 very often. The days of me developing ended 9 months into my career, with Visual C++ 4.0 or something like that. But this is worth breaking the habit for.
Visual Studio Lab Management is a killer feature. But not for VS 2010; for Hyper-V! Let’s face it: we IT pros wouldn’t mind not seeing or dealing with our colleagues who sit in basements and code into the wee hours of the night. They annoy us with their frequent requests for new development and test machines. It never seems to end! They feel the same way about us. We hold them back by taking too long to give them the machines they want, and they never seem to get quite what they need. They’d like to take control and build their own stuff. But there’s no way in hell we’ll let them near a computer room.
The answer: virtualisation and self-service provisioning. The VMM 2008 R2 self service portal is a good start. The private cloud solution, SCVMM SSP 2.0 *takes a coffee break to rest my fingers … and I’m back*, is also a good start. But you know what? Devs like to work in a familiar environment such as VS 2010. Plus they tend to want sets of machines at once, rather than one at a time.
VSLM gives them what they want: self service provisioning of a lab environment from within Visual Studio. Microsoft recently released an update to bring this feature to RTM.
You can follow Amit Chatterjee’s blog to learn more.