TechEd EMEA 2008 Wrapup

That’s the last of my blog entries on the week.  My battery is at 3% and I don’t have an adapter.  Thanks to Enda and Dave from MS Ireland and to Nathan for the loan of a power adapter.  And thank you too to my employer, who made it possible for me to take another week for this sort of thing.  I learned loads and hopefully you did too from my posts.

I will be returning to normal service again next week once I’ve gotten home, had a chance to recharge the batteries and I’ve caught up on work.

Day 5: How IT Will Change In The Next 10 Years And Why Should You Care.

Speaker: Miha Kralj, Senior Architect at Microsoft in Redmond.  Let’s just say he won’t have gotten that title by accident.  It’s a repeat session and the room is full.  I think everyone heard about the first run of the session.

Facts:

  • The new data centre can host 12 times more servers than it did before.
  • More than 500,000 English-speaking IT graduates in China every year.
  • 74% of email is spam
  • 119 million spam mails delivered to mailboxes every day.  That’s over 50% of mail.

This session will not be technical.  It’s about trends and where we think they’re going, i.e. the future isn’t what it used to be: where’s my hover car and personal jet pack that the BBC promised me when I was a child?  I’m quite irate!  Seriously, some of this session will work out, some of it won’t.

Change Is Inevitable

Story: IBM was the IT giant and nearly disappeared because they didn’t evolve through research and development to match future requirements.  From what I saw in a documentary, it was Lotus Notes that saved them.  That’s sad (just joking Declan).  IBM did not have stupid people – they did eventually get out of the hole.  They did have good technology (mainframes) – a "sacred cash cow", i.e. once you achieve it you don’t mess with it.  Everything was built around the mainframe.  MS nearly killed the mainframe (and IBM) with DOS and Windows and client/server computing.

Once things change, you can’t go back.  IT changes more quickly than anything else.  Trends are unstoppable.  You must be prepared: research-wise and in terms of agility and flexibility.  The key to success is identifying the right things and preempting them; once the trend arrives, it’s too late to change.  You probably need to get on the next trend.  For example, the dot-com boom: I was employed by a data centre firm that started too late and was competing with every other IT company … everyone who had a modem started a hosting company back in 2000/2001.  Those who survived either started before the data centre boom or just after the dot-bomb crash with a new market of web service hosting.

People

The current generation of graduates expects unrestricted Internet access, large cheap storage, etc.  Restricted computing, e.g. GPOs, mail controls, browser controls, etc, might send them to other companies, leaving you with lesser quality staff.

Peripherals

Look at input devices.  They’re all changing.  The Wii is a good example.  Input and output devices will change.  This could have the biggest impact in terms of universal accessibility.

Communications

We’re in the communication era.  What method of communication do you use to talk to someone?  Phone, email, IM, video, etc?  Upgrade the technology and then upgrade the people is the advice … sounds like license sales for unified messaging 😉

Vendors

They’re all disappearing.  Where are Wang and Compaq and Amdahl and Olivetti and Tandem and … they’ve all been merged away.  In 1997 there were 24 major server/PC vendors.  Now there are 6 major vendors left.  It happened in IT and cars and air transport, etc.  Will this change?  No.  It will happen within the different industries inside IT.  In fact, we can see it already happening in web hosting in Ireland.  The numbers are consolidating.  This is inevitable in competitive environments.

What are we buying? A chassis with some management.  The components from all brands are pretty much the same.

Carbon Footprint

IT is as inefficient as aviation.  We’ll inevitably be targeted by taxation.  Fuel/power costs and taxes are only going one way.  Where does the power go?  Cooling, power distribution, server components and the CPU.  Applications actually use only 0.001% of the power that set out from the power plant towards your data centre; 50% of it is lost before it even gets there.  MS deliberately tries to place data centres near efficient power plants.  I posted earlier about one Swedish firm that has 3 of its own wind turbines at its plant.

I posted yesterday about how W2008 R2 will make changes.  Blade servers and SANs use less power.  Virtualisation further reduces the amount of hardware you need.

And the speaker says what I’ve been saying… how green is IT?  The components are built globally and shipped to China for assembly.  Then the built servers and disks are shipped around the world.

He reckons Iceland is the ideal place to build a data centre.  Reasons?  Lots of power coming straight out of the ground (steam), they need immigration and external investment, cheap land and it’s cold (good for cooling). 

Past Success Is Your Worst Enemy

The new player is desperate and has nothing to lose.  They also started because they have a new idea.  They’re smaller, so they are more agile and can quickly implement new innovations.

As A Service

Software as a Service, e.g. a taxi.  It’s not about selling servers.  That’s not SaaS.  Do you ask for a Ford Mondeo with a 2.5litre engine?  No, you ask to go to Leeson Street.  The customer doesn’t care about Hyper-V, VMware, VMM, Virtual Centre, etc.  They care about service, availability, access, etc.

Cloud Promotes New Relationships

Right now we sell technology and customers buy assets.  That will change to selling and buying services on a recurring unit basis, e.g. a hotel chain.  Right now we think in terms of a CRM server, a database server, an AD server, etc.  That will change to protocols.  The boxes and their locations will not be as important in the future.

Transition To Utility Computing

Technology:

  • Service Level Agreements (SLA) will be critical.  IT is moving to external service providers.
  • Standards: Whose standard do you choose to be mobile (customer and service provider decision)
  • Bandwidth
  • Virtualisation
  • New application development
  • Massive Provisioning: economies of scale

Providers:

  • Mega-computer providers: economies of scale will make it harder for the small operator to compete on cost.  You have to compete on service.
  • Many SaaS providers: lots of competition and they will consolidate just like in the air and auto industries.
  • H/W Providers will become cloud providers: HP, Dell, etc.  There will be fewer hardware consumers so they need to use what they know to provide new services.
  • Cloud providers will become customers: The clouds will consolidate so companies will become partners (or resellers).  These are customers in the chain of cloud services sales.
  • Power, power, power: This is so true.  It’s already a huge issue.

Consumers:

  • Commoditized IT will go to the cloud – email first (already started).  Web hosting is well gone.
  • Lower barrier to entry: it will be easier to get a business going (already true).
  • Dynamic multi-sourcing: you can choose from many partners and potentially work with many of them.
  • Modernization of internal IT organisations: it’s easier now to remove legacy deployments and replace with modern hosted solutions (shared costs and centralised experience).
  • No more on-site IT: therefore no cost centre and no onsite IT infrastructure engineers.

Regulation and Legislation

No one trusts an unregulated supplier.  What’s to stop anyone from setting up and providing a cheap service that is unreliable, insecure, unstable and not highly available?  Your data is moving into a cloud, so you need regulation to control this.  Eventually, someone will be sued for not meeting their SLA.

Social Web

The 2008 USA election has shown the arrival of the social web.  It’s true that it impacts hosted IT now.  Web developers blog like crazy about the tiniest of issues and make a huge deal from them.  Major issues are like atomic explosions.  Be aware of social web activity as both a positive and negative thing.

Web.Next

There will be pervasive and ubiquitous networking.  Identity, privacy and trust are the key issues.

Will people in the future care about physical presence?  20-somethings consider MySpace and SecondLife to be the same as in-person interaction.  Your digital identity will be very important.  See the previous point.

How do you target people?  How do you know what/who they are?  Marketing uses demographics.  If you’re in a digital world you can be anything you want to be, e.g. a 2 foot green dwarf.

Purchasing

A product goes through future, emerging, wide application and obsolete stages.  The best value is from the middle to the end of the emerging phase.

Sell The Sizzle, Not The Steak

Everyone has steak.  Make it sparkle.  Be different.  Find out what the others are doing.  Don’t do it better, do something else.

Globalisation

IT is available everywhere.  As a graduate, how do you start?  It’s all outsourced to low cost countries where there is more motivation to succeed.  Will China, India, etc, dominate the cloud?

Jobs Of The Future

PhD-educated staff won’t be as important anymore.  7-year-old knowledge is useless.  We live in a current knowledge and skill economy.  In-house IT jobs will be rare once cloud computing takes hold.  There will be more automation and fewer humans (already happening with optimised IT).  Those humans are split between a few architects and a few junior operators.  The IT career of today will not exist in 10 years’ time.  You will need to be a dual-identity person, e.g. IT/sales, IT/management, IT/marketing.

Advice

  • Think beyond tomorrow.  Think about what we have now, where we will be tomorrow and where we will be beyond that.
  • Identify what the trends will do to your business and what it is you want to do to compete or survive.  Be prepared.
  • What is it you want to do in 10 years?

Summary Of My Thoughts

This is not the first time I’ve heard this message.  The operator/architect thing is a common theme.  I’m in the hosting industry and I know these trends have already started.

I don’t necessarily agree with everything:

  • Some of what was said by an MS employee coincides with MS launching Azure and also trying to encourage early adopter consumption of technologies such as Vista and Windows Server 2008.
  • I think the small operator will always have a niche role.
  • Not everything will globalise.  There are privacy, data protection and espionage issues to be concerned with.
  • Not everyone will outsource like we think of it now.  Some will be big enough to internally outsource to a subsidiary.  Some will just not outsource for secrecy reasons.
  • Development of staff will require a middle tier of IT staff to be retained to bridge the gap between going from operator to architect.  Otherwise, all IT would surely end after the architect generation dies.

Day 5: Understanding Vista’s Two Least Understood Security Stars

This is the only session in the early slot this morning that interests me.  It’s the traditional post Tech-Ed party morning.  However, there was no traditional Tech-Ed party… it was an invite only affair.  I didn’t bother going because I was shattered.  A Mickie-Dee burger and an early night to catch up on sleep was enough for me.

This presentation is by Mark Minasi.  Again, just the highlights here.  Attending Mark’s sessions is highly recommended.  There’s much more content than shown here.

  • UAC: User Account Control
  • Windows Integrity Levels (WILs)

UAC

Originally intended as a security solution to protect you against accidental malware installation.  It failed.  MS "lied" about the original intention.

UAC is still good, just not as good as it should be.  You normally run as non-admin, even if you are an admin, and you’re prompted to elevate when you need it.  This avoids the other solution: the admin has two accounts, admin and non-admin.  When you log on as an administrator you get two tokens: standard and administrative.  The default token used for the admin’s new processes is the standard one.  When you try to run some programs, you’re prompted to use the admin token.  How does Vista know?  The program is coded to say it needs elevation.  Once a process starts you cannot change the associated token; the only way is to restart the process.  If a program isn’t coded for elevation, you can do "run as administrator".

Even if this isn’t a 100% secure anti-malware solution, it allows admins to have the recommended dual-ID solution with a single admin account.

Run As is still there, but when you use it you run with that user’s standard token.

You can disable this prompt but still have UAC.  Use GPO to choose: elevate without prompting, prompt for consent (the default), or prompt for credentials.

The secure desktop is where the screen grays out when the prompt comes up.  It’s a special session with your desktop as a screenshot.  You can configure this but it’s best left as default.

Tip: to configure a program to prompt automatically, edit the properties of the exe to "run as administrator".  That’s OK, but it’s not built into the program and it doesn’t travel when you copy the file, etc.  Vista also catches anything with setup, install or update in the file name and knows to prompt for elevation.  That’s a "sometimes" workaround.

Proper solution for dodgy applications is to use a manifest.  You can simply place it in the same folder as the exe.  If the file is called myexe.exe then the manifest is called myexe.exe.manifest.  There’s some caching behaviour with this so it may fail if you’ve been testing.  Create test folders when experimenting to avoid this.  There’s also a bug where you set it not to prompt but it can still prompt.  Might be fixed in SP2.
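For reference, a minimal external manifest looks roughly like this.  This is my own sketch of the standard Windows manifest schema, not something from the session, and myexe.exe is a made-up name:

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!-- Saved beside the exe as myexe.exe.manifest -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
    <security>
      <requestedPrivileges>
        <!-- requireAdministrator triggers the UAC elevation prompt;
             asInvoker and highestAvailable are the other options -->
        <requestedExecutionLevel level="requireAdministrator" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```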

The best way is to build the manifest into the exe.  You use a tool called MT (in Visual Studio, including the free Express edition): mt -manifest my.manifest -outputresource:myexe.exe;#1.

Windows 7 has a different UAC control, kind of like the IE security slider, varying from maximum to minimum/off.

Windows Integrity Levels

Every user token, object and process has an integrity level (IL).  A process cannot change an object unless the process’s IL is greater than or equal to the object’s IL.  This is also known as "mandatory integrity control" or "Windows integrity control".

MS didn’t really use it in RTM but they left the mechanisms in place.  It is possible for a user/attacker to create a file that even an administrator cannot delete.

3 types of WIL policy:

  • no read up
  • no write up
  • no execute up
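Not Windows code, but here’s a toy Python model of the "no write up" rule just to make the IL comparison concrete.  The level names follow Vista’s conceptual ordering; the function name is mine, not a Windows API:

```python
# Toy model of Windows Integrity Levels' "no write up" policy.
# Integrity levels from low to high, as Vista orders them conceptually.
LEVELS = {"Low": 1, "Medium": 2, "High": 3, "System": 4}

def can_write(process_il: str, object_il: str) -> bool:
    """Under "no write up", a process may modify an object only if
    the process IL is greater than or equal to the object's IL."""
    return LEVELS[process_il] >= LEVELS[object_il]

# A Medium (standard user) process cannot modify a System-labelled file.
# This is how a file can be made un-deletable even by an administrator,
# whose elevated processes run at High.
print(can_write("Medium", "System"))  # False
print(can_write("High", "System"))    # False
print(can_write("System", "System"))  # True
```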

Two tools you can use to see these:

  • icacls
  • chml – a tool that Mark wrote.

Use "icacls <file> /setintegritylevel <level>" to set a WIL label on a file.

As an admin, you cannot set a WIL label of System; icacls won’t do it.  Use chml instead, running as System via one of:

  • WinPE
  • Use PSExec as system

If you cannot delete a file you have permission for:

  • Use these tools to look for "System Mandatory Level".
  • Boot to WinPE to reduce the WIL label.
  • Delete it.

Use whoami /groups /fo list to see your session’s WIL label.

Those are the highlights.  Much more in Mark’s Vista security book.

Day 4: Inside Windows 2008 R2 Virtualisation Improvements And Native VHD Support

The speaker was Mark Russinovich, Microsoft Fellow and former owner of Sysinternals and Winternals.

This was a very exciting session with lots of news.  Anyone attending was left quite geeked out by what’s happening with Microsoft Windows Server.

VM Migration

Live Migration is Microsoft’s answer to VMware VMotion.  At a high level it seems quite similar.  It requires a shared cluster file system between hosts and gradual memory transfer before switching VM’s over.

How it works:

  • A connection is established between the source and target hosts.
  • Transfer the VM configuration.
  • Transfer RAM.
  • Suspend the VM on the source and transfer any remaining state (CPU and maybe some RAM).
  • Resume the VM on the target.

The goal is that the final suspend-and-resume switchover completes in less than 20 milliseconds.

The RAM transfer works as follows:

  • The source host creates a "dirty" bitmap of the memory pages used by the VM.
  • The pages are copied to the target host one by one.  As this happens, they are marked as "clean".
  • If the VM changes a memory page then it is marked as dirty, thus requiring that it be copied again.
  • This copying process is repeated up to 10 times over the remaining dirty pages, attempting to leave all pages clean.
  • The process stops either when (1) all pages are clean (copied) or (2) all 10 passes have been completed.

At this point the VM is suspended (frozen).  The remaining state (CPU and the few dirty pages) is copied to the target host, where the VM can be resumed.
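The pre-copy loop above can be sketched in a few lines of Python.  This is my own simulation, not Microsoft code; the page counts and the churn model are invented:

```python
import random

def live_migrate(num_pages: int, max_passes: int = 10, churn: int = 5, seed: int = 42):
    """Simulate iterative pre-copy: copy dirty pages, let the running VM
    re-dirty a few, repeat up to max_passes times or until all are clean."""
    rng = random.Random(seed)
    dirty = set(range(num_pages))        # initially every page is dirty
    copied = 0
    for _ in range(max_passes):
        if not dirty:
            break
        copied += len(dirty)             # transfer this pass's dirty pages
        dirty.clear()
        # The VM keeps running during the copy and dirties a few pages again.
        dirty = {rng.randrange(num_pages) for _ in range(churn)}
    # Suspend the VM and copy whatever is still dirty along with CPU state.
    copied += len(dirty)
    return copied

total = live_migrate(num_pages=1000)
print(total)  # a little over 1000: every page once, plus the re-copied churn
```

The point of the 10-pass limit is visible here: a VM that dirties pages faster than they can be copied would never converge, so the process eventually gives up and pays the cost during the brief suspension.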

Clustered Shared Volumes (CSV)

  • This was needed for live migration, just like VMFS.  It allows multiple hosts to access a single volume to facilitate near instant migration of VM’s from one host to another.
  • It simplifies storage configuration.  We will no longer need 1 LUN for every VM.
  • Allowing large CSV LUN’s with many VM’s will make the self service portal much more acceptable for end user usage (with quota).

How it works:

  • One host owns the namespace (LUN), e.g. the directory structure and metadata.
  • Any host can read/write(lock) a file.
  • Relatively rare operations such as create file, delete file and resize file are sent to the LUN owner.
  • The host that owns a VM opens the VHD file for exclusive use.
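A toy model of that split, with my own names for everything; it just shows which node handles which kind of operation:

```python
class CsvVolume:
    """Toy model of a Clustered Shared Volume: one node owns the
    namespace/metadata; every node does data I/O directly to the LUN."""
    def __init__(self, owner: str):
        self.owner = owner
        self.files = {}                     # filename -> size (metadata)

    def create_file(self, requesting_node: str, name: str) -> str:
        # Rare metadata operations are forwarded to the owner node.
        self.files[name] = 0
        return self.owner                   # who handled it

    def write(self, requesting_node: str, name: str, nbytes: int) -> str:
        # Data I/O goes straight to the LUN from whichever node asks.
        self.files[name] += nbytes
        return requesting_node              # who handled it

csv = CsvVolume(owner="host1")
print(csv.create_file("host2", "vm1.vhd"))  # host1: metadata routed to owner
print(csv.write("host2", "vm1.vhd", 4096))  # host2: direct data I/O
```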

Network Virtual Machine Queues

Right now, processing network packets for VM’s requires:

  • VLAN lookup to determine the destination VLAN.
  • MAC lookup to determine the destination machine.
  • Copying the packet (via the VM Bus) to/from the VM to/from another VM or the parent partition (for the physical network).

This requires 2 context switches by the host CPU.

Microsoft has worked with NIC hardware vendors to introduce Network VM Queues.  The VMQ works as follows:

  • The hardware participates in the process.
  • The parent partition is removed from the process.
  • The context switch numbers are reduced.

NIC Embedded Switch:

  • This is used in VM to VM traffic.
  • The hardware provides the virtual switching.
  • If VM’s are on the same virtual switch then they communicate via the hardware only.

Hyper-V Power Management

Windows 7 (and 2008 R2) brings:

  • Core Parking: Each core is put to sleep when it is idle for a predefined time frame, calculated against the cost of bringing the core back into operation.  If all cores in a socket go to sleep then the whole socket goes to sleep.  The sleep time might be milliseconds, but this saves power throughout the day.
  • Timer Coalescing: Today, Hyper-V wastes small amounts of CPU cycles servicing timers in every VM, one VM at a time on different schedules.  In Windows Server 2008 R2, the timers for all VM’s are serviced at the same time.  This provides more opportunities for Core Parking.
  • Hyper-V uses these technologies.  However, VM CPU rules are guaranteed.

VM Memory Management

  • This uses new technology from both Intel and AMD.  This allows the CPU to maintain 2 levels of memory mapping.
  • In W2008, Hyper-V maintains a shadow table to map both the VM RAM and the host’s physical RAM.  It is estimated that this causes 10% of CPU activity and consumes roughly 1MB RAM/VM.
  • Second Level Address Translation (SLAT) means that there is no need for a shadow table.  The hypervisor has less activity.  CPU utilisation is reduced to 2%.  We also remove the consumption of roughly 1MB RAM/VM.
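The two levels are guest-virtual → guest-physical → host-physical.  A toy lookup, with invented page numbers, shows what the CPU does in hardware under SLAT (and what the hypervisor’s shadow table had to pre-compute without it):

```python
# Toy model of two-level address translation (SLAT).
# Without SLAT the hypervisor maintains a software "shadow table" mapping
# guest-virtual pages straight to host-physical pages; with SLAT the CPU
# walks both tables itself.
guest_page_table = {0: 7, 1: 3}    # guest-virtual page -> guest-physical page
host_page_table = {7: 42, 3: 19}   # guest-physical page -> host-physical page

def translate(gva_page: int) -> int:
    gpa_page = guest_page_table[gva_page]   # first level: the guest's own table
    return host_page_table[gpa_page]        # second level: the hypervisor's table

print(translate(0))  # 42
```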

Native VHD

This is easily the most exciting development I’ve heard this week and will change server computing in the data centre over the next few releases of Windows Server.

The aim of Microsoft is to increase the usage of the published format of VHD:

  • Reduce format explosion (eventually replace WIM).
  • Leverage existing tasks.
  • Give a consistent experience for partners and administrators.

Remember that we have 3 types of VHD:

  • Fixed: A static sized virtual disk file.
  • Dynamic: A maximum size is defined but the file only consumes the disk space that it requires for its contained data.
  • Differencing: This is an extension of a targeted fixed or dynamic disk.  The idea is that you load a differencing disk and it stores any disk data that differs from the targeted VHD file.
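The differencing behaviour is easiest to see as a read that falls through to the parent.  A toy Python model of my own (real VHDs work on blocks and bitmaps, not a dict):

```python
class Vhd:
    """Toy model of a VHD: a sparse map of block number -> data."""
    def __init__(self, parent=None):
        self.blocks = {}
        self.parent = parent        # set for a differencing disk

    def write(self, block: int, data: bytes):
        # Writes always land in this disk, never in the targeted parent.
        self.blocks[block] = data

    def read(self, block: int) -> bytes:
        # A differencing disk holds only changed blocks; anything else
        # falls through to the parent VHD it targets.
        if block in self.blocks:
            return self.blocks[block]
        if self.parent is not None:
            return self.parent.read(block)
        return b"\x00"              # an unallocated block reads as zeros

base = Vhd()
base.write(0, b"original")
diff = Vhd(parent=base)             # boot from this; base stays untouched
diff.write(0, b"changed")
print(diff.read(0))                 # from the differencing disk
print(diff.read(1))                 # falls through to the parent: zeros
```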

Notes on VHD:

  • VHD’s have a maximum size of 2TB.
  • MS aims that VHD performance should be within 10% of raw physical disk.  They’ve gotten within 2% of raw physical disk performance in large-scale lab tests.
  • The term "surface" means mounting the VHD file as an accessible volume on the physical server.

The purpose of native VHD is that you can mount a VHD file from a physical server.  You can use it as an ordinary volume or you can use BCDEDIT to boot the physical machine from the VHD file.  In this scenario there are two volumes.  A small volume has the minimum boot files and the paging file for the server.  A storage volume contains the VHD file that contains the operating system.
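From what I can tell, the BCDEDIT side looks something like the following.  Treat it as a rough sketch rather than gospel: the description, GUID placeholder and VHD path are mine, so check the R2 documentation before relying on it.

```
rem Create a new boot entry by copying the current one.
bcdedit /copy {current} /d "Boot from VHD"

rem Point the new entry (use the GUID that /copy printed) at the VHD file.
bcdedit /set {guid} device vhd=[C:]\vhds\server.vhd
bcdedit /set {guid} osdevice vhd=[C:]\vhds\server.vhd
```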

We get a demonstration showing the disk management in operation.  Mark is actually running his demonstration operating system from a differencing disk.  The clean demonstration OS is in VHD file 1.  VHD file 2 is a differencing disk that points to VHD1 as its source.  The machine boots from VHD2.  Anything that stores data to disk stores that data in VHD2.

Requirements:

There is a requirement that Hyper-V is installed before you "surface" a VHD.  You can only boot from VHD if you surface it.  The paging file must exist on the physical boot disk.

Here’s the strategy:

  • This is the long term data centre strategy.
  • There will be one image format from Microsoft.  They want to figure out how to make VHD do everything that WIM can before WIM is killed off.
  • This general image will allow easy migration, e.g. physical to virtual and virtual to physical.  Imagine not needing to worry about drivers when going from one generation of hardware to another.  Consider automatically migrating VM’s from a Hyper-V cluster to a dedicated physical server, or vice versa, when performance/resource requirements change.
  • Reduced total cost of ownership (TCO).
  • Patching will become a safer process: freeze the machine, create a differencing disk, boot from the differencing disk.  If all is well then merge the disks.  If there is a problem, remove the differencing disk and boot from the original VHD.

How Is It Deployed?

  • A boot agent is installed on the hardware onto a boot disk.  The paging file resides here.
  • A VHD is created and surfaced.
  • An operating system is deployed to the VHD.
  • You can now manage the server as you always have.

Limitations:

  • The differencing disks currently must reside on the same physical LUN as the targeted VHD.
  • Dynamic disks are pre expanded by default – not sure what that means to be honest.
  • The nesting depth for booting from VHD is limited to no more than 2 levels.
  • Non boot VHD’s are not auto mounted at the moment.

Other Hyper-V Improvements

  • Hot add storage.
  • Performance improvements "everywhere".
  • Support for 32 logical processors.

Day 4: User Authentication Using Kerberos and NTLM

The speaker is Mark Minasi.  This is the 3rd time I’ve seen this talk and I’ve heard it twice before on Mark’s security audio book.  It’s impossible to take notes … trust me; this one is complex.  This is the basis of everything we do in Windows (see my previous SharePoint post) and a refresher is very valuable.

After this, I’ll be off to the Auditorium for Mark Russinovich’s boot from VHD talk on Windows Server 2008 R2.  That’s one seriously powerful feature.

Day 4: Internet SharePoint Authentication

I had problems getting an Internet-based SharePoint server working with Internet-based clients.  I was talking to Mark Minasi about this (after his Kerberos talk) and a kind stranger came in with a fix.

  • Patch for SharePoint (KB number unknown at the moment – released this past Summer).
  • Enable forms based authentication for the application.
  • Enable delegation for the SharePoint server.  You need to create 2 SPN’s: one for the machine name and one for a domain-based server account with no user rights.

Thanks to Graeme Hill (CT, Chalmers University of Technology).

Day 4: Running and Maintaining The System Center Suite on MS Hyper-V

The speakers are Gordon McKenna (MVP, OpsMgr) and Justin Kimber, both from Inframon.  The subject is very interesting to me and something I’d consider doing.

Oh man!  This is a small world.  The guy sat beside me twisted his laptop around to show me he was reading one of the whitepapers from my blog.  That is totally cool!  If you’re reading … Thank you!

These guys have P2V’d their own System Center servers at the office.  They’re doing a live demo of it here.  Very brave!  If I was wearing a hat, I’d tip it.  They’re doing the MS-style format where one asks a question and the other answers as a consultant.

My concern is disk performance.  They’ve brought up I/O for things like ACS.  We know SQL performs well on fixed size VHD, but is ideal on pass through disks.  Fixed size VHD is more flexible but they recommend (correctly IMO) pass through.  This is not a solution for huge deployments.  Remember that virtualisation is not for everyone.  Use System Center to analyse the appropriateness of virtualisation for each candidate machine.

Self Service Portal

The self service portal is brought up as being nakedly presented to the Internet via SSL.  This allows remote console access to the VM.  Combine this with roles and you have a nice SSL-based KVM for the virtual machines.  Combine it with VLAN tagging (see my Hyper-V posts from a few months back) and you have a good combination for Hyper-V security.  What I like about the web site is the simplicity.  It’s very cleanly laid out, which makes it ideal for delegated operators to manage the machines they are responsible for.

For remote access, I’d alternatively suggest TS Web and TS Gateway.  Publish a shortcut to MSTSC.EXE and you can bounce to any internal server without VPN.  Haven’t tried it with TS but I did do it with Citrix Metaframe years ago.

Backup

Interesting point, they do bare metal backups of the VM’s using DPM 2007 and replicate the backup to a DR location.  That simplifies backup recovery.  The normal is to have alternative OpsMgr servers and sacrifice a goat for ConfigMgr.  The DPM solution allows for much simpler and rapid recovery.

Tips

  • System Centre is fully compatible (not fully supported) on Hyper-V.

OK, these guys are light on facts and there’s a 100% wrong statement on their slides for RAM requirements.  They’ve suggested dynamic disks for some production usage.  Don’t tell PSS!  Every MS document I’ve read says fixed size and pass-through are the only supported disks in production.  I’ve had enough.  Time to leave this room.  These guys are guessing.

My Advice

Be careful about what consultants you hire when you want System Center work done in the UK and watch out for MCS subcontracting to others.  Ask loads of questions that you’ve already researched.

Day 4: Steve Riley on Hyper-V Patching

I caught the end of Steve Riley talking about the urgency of patching and a solution for VM’s.  He recommended patching as soon as possible.  His thought was that the risks of not patching while waiting for testing are greater than the risks of something going wrong with a patch.  That’s a rather simplistic point of view.  If an admin follows a testing process then he won’t get fired over an attack.  If he deploys an update without testing then … ouch.

Steve’s suggested process was:

  • Snapshot your VM’s.
  • Patch
  • If something goes wrong, rollback the VM.
  • If it goes well, remove the snapshot.

There are two problems here:

  • In my fun’n’games with PSS, I’ve learned that PSS do not support snapshots in a production environment in Hyper-V.  You must use bare metal backups using a Hyper-V VSS writer certified backup solution, e.g. DPM 2007.
  • You need to be careful about rollbacks to snapshots/bare metal backups.  Active Directory domain controllers should never be recovered in this manner in a network with more than one domain controller.  There is a risk of a USN rollback.

Personally, I won’t be giving up my 3 phase process: virtual lab test, pilot deployment and live deployment.

Day 4: Speaker Idol And Afterwards

Speaker Idol is kind of like Pop/American Idol except it’s for technical presentation speakers.  The final was held today and my friend, the "queen of deployment" Rhonda Layfield, won it.  Her prize is a paid-for trip to TechEd EMEA next year and a slot as a session speaker.

It was interesting.  Each speaker gets 5 minutes to talk about a subject of their choice.  They’re judged on quality, speaking style, presentation skills, accuracy and the slide deck.  The judges are very picky and the final judges included Mark Russinovich and Steve Riley.  Rhonda won with a session on Network Monitor 3.x.  Other presentations included PowerShell performance improvement, MS Desktop Optimisation Pack for Software Assurance *phew* and a dodgy session on "hacking" Win7 to get something called the Superbar.  The sessions were recorded so they could be put online.  I’m left wondering if the Win7 session will make it online – it did talk about downloading dodgy tools and the judges were not impressed.

Muggins here was asked on Sunday if I’d participate.  I got together a session on Monday morning on Hyper-V and rehearsed.  I never got called up despite hearing I was in.  As an apology, I got a voucher, which was spent on a geek shirt, and a guaranteed slot next year in Berlin if I’m a delegate.

Afterwards I went wandering around the stands in the exhibitors hall. It was cool to look at the HP storage blades (a blade that only hosts disks).  Right now they take up to 6 * SFF 146GB drives.  In January or thereabouts, they increase to 300GB drives.  There’s also some new G6 stuff on the way.  I’ve got an invite into an NDA room to see the stuff in action tomorrow.

I caught up with TS guru Alex Yushchenko.  He was unfortunately able to confirm that the thin terminals currently available don’t support XPS drivers => no TS 2008 Easy Print.  We need to wait for XPe updates from MS.  That’s rather unfortunate.  I love how Easy Print works and performs: zero configuration and LAN-like printing over latent links.

Day 3: Name Resolution 2008 Style: DNS, WINS and NetBIOS

The speaker is Mark Minasi.  I will only blog a few points on this presentation because it’s material that Mark makes a living from.  Despite the notes here, I really recommend you attend Mark’s sessions if you get a chance … there’s always much more to be learned from him in person.

  • DNS is the cause of most Active Directory issues.  True enough based on my experiences.
  • WINS is not dead.  Still used by many technologies.  Try disabling it in a lab first.  WINS is a W2008 feature.  IPv6 is not WINS aware. 
  • Computer Browser (the network neighbourhood service) is turned off by default.  Network Discovery (multicast instead of broadcast) is disabled by default.  It uses UDP 3702, TCP 5357 (HTTP) and 5358 (HTTPS), and is based on WS-Discovery.  Removing legacy (pre-Vista/W2008) machines reduces LAN traffic.
  • Background zone loading: LOTS (thousands) of AD-integrated zones could take 1 hour to boot a DC – DNS loaded and checked all zones before completing service startup.  Now, DNS fires up and loads zones in the background, allowing the DC to boot faster.  DNS is multithreaded and can do an LDAP query to another DC while an AD-I zone is unavailable.  It’s not able to accept updates until all zones are loaded.

Administration

  • Can install DNS and/or ADDS on Server Core.  Use DNSCMD to manage DNS.  Now in the OS, not in resource kit.
  • After you create your first zone on Core using DNSCMD, restart the DNS service to make it work.  There’s a weirdness there in the DNS service.  After the first zone, everything is fine.
  • Keep reverse zones to facilitate site-based GPO and to quell DNS chatter over PTR records.  All computers attempt to register PTR records even if you have no AD-I PTR zone.  In that case, the registration attempt can go out onto the Internet.  Not nice at all!  See "prisoner.iana.org".  Or use GPO to disable PTR registration.
  • Beware the dodgy DCPROMO DNS wizard trying to create a delegation of .com, etc for your root domain.  Just say "no".  And even if things are OK, you get a warning about the zone already existing.  It’s a nonsense error.
  • RODC’s cannot accept changes to AD-I zones.  That DNS traffic will want to go to a read/write copy of AD across the WAN.  Use ADSIEdit to modify the permissions of that zone to allow the group of RODC’s to write to the zone.
  • Branch office DC offline => PC’s in the branch office will hit any random DC on the WAN for logon.  We now have "rediscovery".  It’s automatic on W2008 and Vista; see KB939252 for XP and W2003.  GPO: Computer Configuration\Administrative Templates\System\Netlogon\DC Locator DNS Records\Force Rediscovery Interval.  The default value is 12 hours (measured in seconds).  Vista and W2008 operate differently – they use site links to find the next nearest site.  Another reason to put in sites and site links – DO NOT USE THE DEFAULT SITE LINK!  It’s lazy and leaves things unprepared for other services, e.g. Exchange 2007.

IPv6 and Name Resolution

  • Uses LLMNR – Link-Local Multicast Name Resolution.  The requestor multicasts to UDP 5355.  The answerer unicasts back to the requestor on UDP 5355.
  • AAAA (quad-A) records give name-to-IPv6 resolution.  Vista and 2008 automatically register AAAA records.  Link-local addresses that start with FE80 don’t register in DNS.  W2003 DNS handles AAAA.

New DNS Record Types

  • DNAME: maps nasty long DNS names to short friendly ones.  It’s similar to CNAME, just for domain names.  Handy in migration scenarios.  It’s an RFC record type.  Example: move the A or AAAA records to the new zone, then create a DNAME record in the old zone.  You cannot do this in the GUI – use DNSCMD: dnscmd /recordadd oldzone.com @ DNAME newzone.com.  The response is like "Oh sorry, that doesn’t exist.  Did you mean this instead?".  If other records remain in the old zone then DNAME stops working for them … just leave the defaults there, e.g. SOA, NS, etc.
  • Post-WINS single-label names: Use NetBIOS-style names for DNS lookups, e.g. myserver instead of myserver.myzone.com.  This requires 2008 on all DNS servers.  Use a zone called "GlobalNames" and enable global name resolution on all DNS servers with that zone.  Now add CNAME’s in this zone, e.g. myserver maps to myserver.myzone.com.  Best to use AD-integrated zones; putting it in ForestDNSZones makes sense for this – it’s a global zone.  You can use it as a WINS replacement for manageable numbers of records; they’re manually created.
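If memory serves, the GlobalNames setup is something like the following on a W2008 DNS server.  This is a hedged sketch from my notes (the zone-creation switches and the example names are mine), so verify it against the GlobalNames deployment guide:

```
rem Enable GlobalNames zone support on this DNS server.
dnscmd /config /enableglobalnamessupport 1

rem Create the AD-integrated GlobalNames zone (once, replicated forest-wide).
dnscmd /zoneadd GlobalNames /dsprimary /dp /forest

rem Add a single-label alias: "myserver" -> myserver.myzone.com
dnscmd /recordadd GlobalNames myserver CNAME myserver.myzone.com
```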