Longhorn's Mainframe Man

The making of Windows Server 2008 -- a Q&A with Bill Laing, General Manager, Microsoft Windows Server Division.

Bill Laing got his start at Microsoft as an architect in the Windows Server group, where he focused on building products that offered enterprise-class scalability and availability. Today he's general manager of the division, a role he assumed after Bob Muglia was promoted to senior vice president in charge of Microsoft's Server and Tools Business in October 2005. Laing oversees the delivery of all Windows Server products. He's also responsible for spearheading the Common Engineering Criteria for the Windows Server System; managing the development of Windows Computer Cluster Server; and overseeing the technical teams that engage with original equipment manufacturers (OEMs), as well as the central User Assistance teams for Windows Server.

A Scotland native, Laing came to Microsoft eight years ago with a lot of experience in big iron. Like many top server managers, he spent many years at minicomputer vendor Digital Equipment Corp. He also worked at Tandem Computers before joining Microsoft.

Mary Jo Foley chatted with Laing on the Redmond campus in April, just days before Microsoft released Beta 3, the first public beta of Windows Server 2008, formerly code-named "Longhorn."

How do you run Windows Server 2008 development, on a day-to-day project level?
Every day there's what we call a ship room where people evaluate the daily status of bugs and what we're going to accept and where we are. I think we're actually on two [meetings] a day, at the moment. And then weekly, we have a project status meeting, and people will come in and report to Iain McDonald [director of Windows Server Program Management] and me about app compatibility: What's the percentage of apps passing, and what issues are there?

We also get data back from people who have an Internet connection, and we know which roles were installed and which features. We have about 20 roles. We're up to 32 or 33 optional features that aren't installed by default.
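That opt-in data boils down to counting which roles and features show up across reporting machines. A minimal sketch of that kind of aggregation follows; the role names, record format, and thresholds here are purely illustrative, not Microsoft's actual telemetry pipeline:

```python
from collections import Counter

def role_install_rates(reports):
    """Given per-machine lists of installed roles, return the
    percentage of reporting machines running each role."""
    counts = Counter()
    for installed_roles in reports:
        counts.update(set(installed_roles))  # de-dupe within one machine
    total = len(reports)
    return {role: 100.0 * n / total for role, n in counts.items()}

# Hypothetical sample: three machines reporting their installed roles.
reports = [
    ["Web Server", "DNS Server"],
    ["Web Server"],
    ["File Services", "DNS Server"],
]
rates = role_install_rates(reports)
print(rates)  # e.g. "Web Server" appears on 2 of 3 machines
```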

I managed Internet Information Services [IIS] for Windows 2003, and I remember we had these incredibly heated debates because in Windows 2000, IIS was turned on by default. It was listening on port 80 when you installed it. We turned it off. Even in upgrades we turned it off, and people loved that. And then we kind of got bolder and said, 'Hey, let's not even put the bits in the directory if people aren't willing to use IIS.' So, that's kind of been the evolution of that -- that was a very powerful thing.

Bill Laing
"The Microsoft Live infrastructure uses Windows, so we want to be a good server for that. That's kind of a big influence."

Bill Laing, General Manager,
Microsoft Windows Server Division

Could you provide an example of an optional feature in Windows Server 2008?
Probably the most controversial thing we did at the time, which now in retrospect everybody says was obvious, was the Desktop Experience Pack. That's all of the things that we could make optional that we think of as client features. People said, 'Why are you even including that? It's really for developers.'

Lots of developers we talk to -- like half a million or something -- run server as their desktop box. So, they want to do other things on it, Media Player, wallpaper, all of that stuff you can kind of turn on, but by default isn't there at all on the server. Somewhere around 15 percent of people turn it on, which is kind of interesting. I just had no way of guessing. And so we see over time that we continue to get this data.

Any surprising feedback as you've been testing and rolling out Windows Server 2008?
We'd built this Read-Only Domain Controller. And we had completely thought about it for branch office. So the idea there is that historically, if you administered a machine with a domain controller, you had to have domain privileges. If you put one of these domain controllers out in a branch, the person who had admin privileges had privileges to the whole domain for the whole company. People didn't like that. There was also a theft aspect to it. If somebody had one of these in a store, they [might] not have physical security.

This big customer came in and wanted to take the Read-Only Domain Controller server core -- they weren't interested in BitLocker [encryption] -- and put it out in the DMZ, the public Internet. They wanted to authenticate their partners and employees who are on the road. So, they wanted to join the domain but over the Internet.

And that really pushed us. So they came in to the Enterprise Engineering Center and worked through that, and we had to make some little design changes. So that's something that sort of evolved.

How did the whole "modularity" concept become a focal point in the design and development of Windows Server 2008?
Bob Muglia was the key in driving this [design]. Servers are all about workloads. It's actually [a question of], what do people run on their servers? The way to succeed in the marketplace is to have very competitive workloads. I have a very simple view of competing against Linux -- you have to build a better product. With all this other noise, if you don't offer more value to the customer than Linux does, they'll beat you with that product. But when you actually then dig into that, you say, 'Well, what does it mean to build a better product?' And you get more traction if you think of it [in terms of] workload.

Would you say the idea to build around workloads was inspired by competing with Linux, or how did the idea come about?
No. I think people feel fairly empowered by workload. I mean, they're organized that way. And one of my jobs on the core server team is to enable those workloads to run well on top of the core Windows Server.

So, that made us start to think about how we would architecturally deliver the server, because that's probably one of the things that influenced the server a lot. We're delivering it in a somewhat different way from previous releases.

Did the idea of modularity evolve out of the idea of workload, or are they separate?
No. The people who had the idea first, who ended up actually working on the setup tools and the delivery tools, had actually worked on embedded XP. They had been [working on] this problem for when people put Windows in a point-of-sale terminal or some device -- what we called the server appliance kit at one time. [How to deliver] something that was serviceable and supportable, moving forward, was a key part to us.

So, they'd been kind of puzzling [over] this idea and the notion of creating what's called a manifest, which lists the various components that are needed. So, we said, 'Gee, we could think of the server in these building blocks.' And we've been somewhat more aggressive in using that than the client. They've mostly used it for SKU differentiation. We actually let customers decide what to install in a pretty aggressive way.
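The manifest idea Laing describes amounts to dependency resolution: each feature declares the components it needs, and setup computes the transitive closure so only those bits land on disk. A toy sketch of that resolution, with made-up component names that stand in for real Windows manifests:

```python
# Hypothetical component manifest: each feature lists the components
# it depends on (names are illustrative, not actual Windows manifests).
MANIFEST = {
    "WebServer": ["HttpStack", "ProcessModel"],
    "HttpStack": ["Kernel"],
    "ProcessModel": ["Kernel"],
    "Kernel": [],
}

def components_to_install(feature, manifest=MANIFEST):
    """Resolve the transitive closure of components a feature needs,
    so only those bits are installed."""
    needed, stack = set(), [feature]
    while stack:
        comp = stack.pop()
        if comp not in needed:
            needed.add(comp)
            stack.extend(manifest[comp])
    return needed

print(sorted(components_to_install("WebServer")))
```

Installing a leaf component like "Kernel" pulls in nothing extra, while a role at the top of the graph pulls in everything beneath it -- which is what lets the rest of the bits stay off the disk entirely.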

The other thing going on in our minds -- and it was influenced a lot by security as well as servicing -- is that people said, 'Gee, it's really frustrating when you issue a patch for Internet Explorer, and I have to reboot my domain controller.' So, we weren't really fixated on size; it was more servicing and footprint, and when we met with customers we talked about that.

Are there any features you think will surprise people in Windows Server 2008?
One of the things that I'm going to make a plug for -- because I come from a high-end background of big machines, and it's why I came here -- is what I call taking the mainframe to the mainstream. There are actually a surprising number of things in Longhorn Server that are really mainframe-like features that aren't super visible to most people. But people who care about that are very passionate about them.

One of these features is the Windows Hardware Error Architecture [WHEA]. On a VAX, if the machine had some errors, it wrote stuff in the log, and you could go look at those errors, and the service engineer could say, 'The memory's starting to go bad, or there's a parity error starting to show up on the cache line,' or something like that. And we've really been pushing hard to kind of get that technology out, and Intel and AMD have been great on that, and there's been a real change.

What we wanted to be able to do is log recovered errors, because if you start seeing a recovery take place, you can predict that it's going to get worse in the future.

Machines are now becoming incredibly powerful. A four-socket machine -- what we'd call a four-processor machine in the past -- with four cores [per socket] and 128GB [of memory] is bigger than mainframes were three years ago. So we said, 'Customers are going to run big apps, and they're going to consolidate more.' The machines have to be more reliable, or at least [customers should be able to] predict when they need to service them. There's been a big push in this from the high-end partners.

Has the rise in popularity of delivering Software as a Service affected your plans for Windows Server 2008?
Well, there are two separate aspects. The Microsoft Live infrastructure uses Windows, so we want to be a good server for that. That's kind of a big influence. And there are surprisingly few things [the Microsoft Live team] actually needs from us.

We've done power management by default in Longhorn Server. And we think average machines will see maybe 20 percent reduction in power use. You kind of slow the clock down when it's not busy. And it's dynamic enough that you can literally slow the clock down across a disk I/O. If you've got nothing to do while you're doing a disk I/O, it actually drops the power use for that short period of time. It's not like sleeping [for] the laptop; this is really short, what they call P-state for processor state.
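The back-of-envelope arithmetic behind that "maybe 20 percent" is that dynamic CPU power scales roughly with frequency times voltage squared, and lower P-states drop both. A rough model, with a hypothetical duty cycle and scaling factors chosen only for illustration:

```python
def relative_power(freq_frac, volt_frac):
    """Dynamic CPU power scales roughly as P ~ C * V^2 * f.
    Illustrative model, not measured data."""
    return freq_frac * volt_frac ** 2

# Hypothetical duty cycle: 60% of time at full speed, 40% in a lower
# P-state (e.g. across disk I/O waits) at 70% clock and 85% voltage.
full = relative_power(1.0, 1.0)
low = relative_power(0.70, 0.85)
avg = 0.6 * full + 0.4 * low
print(f"average power vs. always-full-speed: {avg:.0%}")
```

With those assumed numbers the average lands around 80 percent of always-full-speed power, which is the ballpark of the reduction Laing cites; the real saving depends entirely on the workload's idle fraction and the processor's P-state table.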

The other part is what we think of as Microsoft Managed Services. What does it mean to take some of those services and run them? In the short term I'd say the feedback is around management costs -- how to drive that TCO down.

That group now works for Bob Muglia. They're a big deployment customer. They've committed to 350 servers in production with beta 3: some for infrastructure, some for MSN or Microsoft.com, and then some for line of business apps.

But I think that the biggest impact of the whole service is yet to come. The way to think about it is with each workload: What does it mean for them to have an on-premise server and a service, and is there a spectrum in between those?

About the Author

Mary Jo Foley is editor of the ZDNet "All About Microsoft" blog and has been covering Microsoft for about two decades. She's the author of "Microsoft 2.0" (John Wiley & Sons, 2008), which examines what's next for Microsoft in the post-Gates era.
