Cyber Crime's Chief Investigator
Microsoft's first CSO shares his views on data security and Redmond's new trust model.
Howard A. Schmidt has used technology to thwart crime since his early career as a policeman and pioneer in computer forensics. He started working with the U.S. Air Force in the early '90s -- helping the Office of Special Investigations counter hacks of Department of Defense systems and build better processes to protect them -- and, in his words, "a switch flipped." He began to focus on information security. Schmidt has served as an information security advisor to the government for more than 30 years, working for the FBI, the U.S. Air Force and the Bush administration after Sept. 11, 2001.
Recruited by Microsoft in the mid-'90s, Schmidt served as the company's first chief security officer, and in April 2001 helped launch the Trustworthy Computing initiative. He retired from public service in 2003, and became the CSO of eBay Inc. Today, Schmidt is the president and CEO of R&H Security Consulting LLC. He sits on multiple boards and advises companies and non-profits. Senior Editor Kathleen Richards caught up with Schmidt the week after the RSA Conference to find out where information security in a Web 2.0 world is headed.
How did you become interested in security and technology?
In the mid-'70s I was a ham radio operator. I built my first computer in 1976 and was involved in bulletin board systems and that sort of thing through the '70s and '80s. When I became a policeman, one of the things we were living with at that time was the older MIS [Management Information Systems] departments that weren't really keen on moving over to a more distributed PC environment. So I wrote a couple of grants and got some federal money to put together my own sort of in-house network of PC databases for organized crime investigations. Because of that, once we started to see criminals using computers -- everything from keeping ledgers of their drug business to writing plans on how to rob banks -- I started to work in computer forensics and did some of the early development in that area.
You think of security as a business process?
Correct. I think in the early days of security, we viewed security as the necessary evil -- myself included -- the cost center, the bad guys out there. And in the past few years, we've fully recognized that we have to do the business of security.
So is the idea that if you follow the process, you'll produce more secure software?
The business looks to define a process with a desired outcome -- to generate revenue, to run an HR system, whatever the desired state would be -- and security has got to be baked into that from the very outset. So it's not just a matter of creating an application that has a really good user interface, comes in on or under budget and is easy to use. It has also got to be secure, so security has got to be part of the business plan itself.
What kind of tools should developers be using?
We have to look across the entire spectrum. We shouldn't be asking our developers to develop software and then throw it over the fence and say, 'OK, quality assurance will find the problems with it.' We should be giving the developers the tools right from the very outset to do the software scanning and the source code analysis. And that does two things. One, it helps them develop better code as they discover things through the automated scanning process on the base code itself. But also, once it gets to quality assurance, it gives them the ability to focus more on quality rather than on security issues that could have been eliminated in the first round.
The second level is looking at the compiled binaries -- generally the pen-test side of things. We can't ignore that, because once you put code into a production environment, there may be linkages elsewhere that create a security flaw in the business process even while the code itself is secure.
Then clearly the third level is in Web applications -- Web 2.0 environments, for example. Now you have the ability not just to pull information down but to interact directly. This creates a really dynamic environment, and even well-known flaws like cross-site scripting and SQL injection have to be tested for once things are out in the wild.
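Both flaws Schmidt names come down to the same mistake: mixing untrusted input into code that will be interpreted. A minimal sketch (the table, column names and injection payload here are hypothetical, chosen only for illustration) shows the difference between concatenated and parameterized SQL, and the analogous escaping step for cross-site scripting:

```python
import html
import sqlite3

def find_user_unsafe(conn, username):
    # BAD: string concatenation lets crafted input rewrite the query.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # GOOD: a parameterized query treats the input strictly as data.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "' OR '1'='1"  # classic injection string
print(len(find_user_unsafe(conn, payload)))  # returns every row: 2
print(len(find_user_safe(conn, payload)))    # matches nothing: 0

# Cross-site scripting is countered the same way: escape untrusted
# input before it reaches the browser, rather than trusting it.
print(html.escape("<script>alert(1)</script>"))
```

The point of the sketch is Schmidt's: these are exactly the bugs that are cheap to test for before release and expensive to discover "out in the wild."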
How can developers reconcile the tradeoffs between data accessibility and security?
One, they're not mutually exclusive. And I think that's one thing we've seen with the efforts at Microsoft, Oracle -- all the big companies. They have really focused on the whole software development lifecycle, building security in from the very beginning. Because as I think you'd agree, data is the gold, the silver and the diamonds of the world we live in today. It's where the value is, and protecting it through better security in apps is critical.
One of the things that I'm concerned about is the small and midsize developers that are developing a lot of the things we see on our laptops and desktops. Things to burn DVDs, things to play music -- which often aren't recognized as critical applications, but nonetheless still interact with the Internet, still have vulnerabilities and still give a bad guy a way to get into your system or server.
Another aspect of this: even companies that buy the majority of their software from large software houses -- houses that are doing better, doing the code analysis, looking at this from a 360-degree perspective -- are writing their own local applications to interface between applications. I worry because they can inadvertently introduce vulnerabilities in the software they're developing if they're not using the same rigor in software code analysis that the bigger guys are using now.
What's your take on identity-management protocols like OpenID and Windows CardSpace?
Those are things that I've been looking for, for years. I think many of us agree that the more complex it is to do something, the more security will suffer. We tell people all the time: 'Use strong passwords, change your passwords frequently.' With more accessibility -- particularly in a Web 2.0 world -- we have to manage more user IDs and more passwords to do it right, and by pure human nature people will not be as rigorous about their security. So using OpenID or some sort of identity-management schema gives us strong, generally multifactor identity management that can be recognized across multiple environments, whether it's e-commerce, online banking or interacting with the government. That's a big plus. But as we move forward, we shouldn't let the software applications become our Achilles' heel -- really good identity systems undermined by vulnerabilities in the applications we use with them.
You worked at Microsoft for five years and were one of the founders of their Trustworthy Computing Strategies Group. Craig Mundie outlined an "end-to-end trust" model at the recent RSA conference. What's your take -- is there something new there?
I don't know that there's something new. I think it's just a continuation of the fact that there's no single-point solution in any of these things in any environment. It's not a hardware solution. It's not a software solution. It's not a business process solution. It's not an identity-management solution.
All of these pieces -- and the way I like to explain it is, have you ever been to the Taj Mahal? The Taj Mahal is not just a building; it's actually made up of literally millions and millions of little inlaid tiles -- it's just this huge mosaic ... When you put it all together, you have this fantastic building. I relate that to where we are today -- the hardware guys, the firmware people, the big software houses, the small software houses, the ISPs. Individually they all work, but when you put them together you get something really fantastic.
So the direction that Craig is talking about is clearly, I think, that sort of a concept. The problem is that when someone is building red tiles and someone else is building blue tiles, somebody has got to reach an agreement somewhere on how those things are going to link up -- and once again, I had no role in creating Craig's speech; this is my interpretation of it. We need to spend some time looking for the way that the tiles we're all building will fit together. But the end result is that we're all going to secure our own part of cyberspace and make it better for all of us overall.
Does Microsoft's recent interoperability pledge change the security equation?
It does. One of the complaints people had over the years was the inability to write security-related code against the APIs because they didn't know how it would interact with the other ones. So having access to the APIs, knowing what function calls are out there and knowing how the security you implement is going to affect them will once again take us a step further.
In what other ways can developers address security that have yet to catch on?
It's really hard to do these things when you're told you've got two weeks to get this done and here's the budget you've got to do it with, so going back to spend focused time on security is a challenge in some environments. And that's one of the things we need to figure out. The development community has to be able to say, 'Sure, we can pump it out -- but do you want something where six months from now we're spending three times the resources to fix it, with the reputation hit and the impact on the business and everything else?'
And then the second piece of that is the use of automated tools. When you look at some of the development we've had over the years, with people dealing with 25,000 or 75,000 lines of code in certain applications, it's not feasible by any stretch of the imagination to have somebody manually going over all of it. In the past few years there's been real development of automated processes to do software analysis at all the different levels. Tools are now available that make that analysis more reliable and a lot quicker, and we've got to get those tools into developers' hands as well.
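The automated analysis Schmidt describes works by scanning the code itself rather than running it. As a toy illustration only -- real analyzers are vastly more sophisticated, and the rule and sample code here are invented for the example -- a checker can walk a program's parse tree and flag SQL queries built by string concatenation:

```python
import ast

SQL_KEYWORDS = ("SELECT", "INSERT", "UPDATE", "DELETE")

def find_concat_sql(source):
    """Return line numbers where a SQL string literal is concatenated
    with another expression -- a common injection-prone pattern."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Look for "<SQL literal>" + <expression> in the parse tree.
        if (isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add)
                and isinstance(node.left, ast.Constant)
                and isinstance(node.left.value, str)
                and node.left.value.lstrip().upper().startswith(SQL_KEYWORDS)):
            findings.append(node.lineno)
    return findings

sample = '''
query = "SELECT * FROM users WHERE name = '" + name + "'"
safe = cursor.execute("SELECT * FROM users WHERE name = ?", (name,))
'''
print(find_concat_sql(sample))  # flags line 2, the concatenated query
```

Because the check is mechanical, it scales to the 25,000- or 75,000-line codebases Schmidt mentions in a way manual review cannot, which is exactly the argument for putting such tools in developers' hands from the outset.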
What did you find noteworthy at the recent RSA Security Conference?
As we develop greater dependency on mobile devices, the bad guys will start using unsigned applications on those devices to commit the next generation of cyber crimes. We need to look at that now and build protection into the phones we'll be using in the near future.
About the Author
Kathleen Richards is the editor of RedDevNews.com and executive editor of Visual Studio Magazine.