Q&A

Security Sentry

DShield inventor sits in the eye of the storm.

Johannes Ullrich is chief research officer of the SANS Institute's Internet Storm Center, where he's responsible for keeping tabs on Internet-borne security threats and helping run many of the Institute's security-related training and certification programs. Ullrich came to the SANS Institute six years ago, after developing the DShield application to poll log activity from firewalls over the Internet so companies could better assess threat trends. He remains concerned about software development security issues today.

How did you get involved in covering security-related issues?
I started out by training as a physicist. I got my Ph.D. in physics. While working as a physicist I ended up using computers a lot and writing a lot of code. I started the DShield thing with the idea of collecting firewall logs from firewalls worldwide ... collecting data about security. I started the site incidents.org and it became the Internet Storm Center. That's kind of how I ended up in the security field.

What was the genesis for your DShield software?
One was me just experimenting with security and firewall logs: seeing the logs and seeing how people were trying to attack me more or less constantly. The other was reading about ISACs [Information Sharing and Analysis Centers] at the time. I saw all these logs and was evaluating them, but I also wanted to correlate them with other users' logs, kind of like an ISAC for regular users.

What did you use to create DShield?
I used the standard LAMP stack: Linux, Apache, MySQL and PHP, with shell scripts and Perl scripts as glue in the background. We added support for various Windows firewalls and so forth. It's a very simple, lean system; there isn't that much code behind it. It's still something I maintain, the entire stack. The back-end database is open source.
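The kind of glue script Ullrich describes can be sketched quickly. Here is a minimal, illustrative example in Python (standing in for the Perl and shell scripts he mentions) that parses iptables-style firewall log lines into records suitable for a central database; the field names and log format are assumptions for illustration, not DShield's actual schema:

```python
import re

# Illustrative pattern for iptables-style log lines. DShield's real parsers
# support many firewall formats; this captures only the idea.
LINE_RE = re.compile(
    r"SRC=(?P<src>\S+) DST=(?P<dst>\S+).*?PROTO=(?P<proto>\S+).*?DPT=(?P<dport>\d+)"
)

def parse_line(line):
    """Extract source, destination, protocol and target port from one log line."""
    m = LINE_RE.search(line)
    if not m:
        return None  # not a firewall hit; skip it
    record = m.groupdict()
    record["dport"] = int(record["dport"])
    return record

sample = ("Jan  1 00:00:01 fw kernel: IN=eth0 OUT= "
          "SRC=203.0.113.7 DST=198.51.100.2 PROTO=TCP SPT=40123 DPT=22")
print(parse_line(sample))
```

Records like these, aggregated across many submitters, are what make the trend analysis he describes possible.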

When creating DShield, did you ever worry that the software itself might become a target of a hack?
There was definitely a concern with that. First of all, we don't want the software to be installed on firewalls and cause security vulnerabilities. One way to prevent that is to keep it as simple as possible. It can only run with very limited privileges, which is kind of how we protected ourselves.

One problem for software ... is that security software typically tries to do too much. And by doing too much it introduces vulnerabilities.

Security through simplicity seems like a good approach. Do you see Microsoft following this path?
It's not a Microsoft-specific issue; it's industry-wide. People buy software based on the features on the box. The problem, from a marketing standpoint, is that it's hard to prove what's more secure and what's less secure.

That's the tradeoff. You kind of need your features to sell your software, so you assume it's vulnerable, but you try to limit the chances of an exploit succeeding against the software.

Is complexity a problem for internal development?
Oh yes, definitely. Internal development projects definitely have the very same issue as commercial projects. The root cause is in the entire sales aspect of software. And of course, internal software also has to sell itself to internal customers.

Are attacks shifting from the operating system to the application stack?
On the defensive side, what it comes down to is that the knowledge about secure coding practices really has to move from the people who write the OS to the application people. Operating system people were the first ones to be concerned about security, because they were the first ones to be hit by the problem in the past. Next were the server [application] people.

Is Microsoft doing a better job of enabling application security?
I do see it recently with developer tools that really seem to take security more into account than they used to. If you look at the .NET platform, which of course is a big target out there, with all of the applications being written in .NET and network-accessible in nature, these days a package like .NET includes a lot of tools that make it easy for developers to write secure code.

In the past, the packages that made the language work really left it up to the developers: here's a language, use it. They didn't give you a lot of tools to write secure code.

One thing .NET does a good job with is validators, to validate your input. That's where 90 percent of the problems happen. Now with modern languages, these input validators come already written.
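The .NET validator controls Ullrich mentions aren't shown here, but the principle carries to any language. A minimal whitelist-style validator sketch in Python (field names and patterns are illustrative, not from any particular framework):

```python
import re

# Whitelist validation: accept only what you expect, reject everything else.
PATTERNS = {
    "username": re.compile(r"[A-Za-z0-9_]{3,20}"),
    "zip_code": re.compile(r"\d{5}"),
}

def validate(field, value):
    """Return True only when the whole value matches the field's whitelist pattern."""
    pattern = PATTERNS.get(field)
    return bool(pattern and pattern.fullmatch(value))

print(validate("username", "jullrich"))    # True
print(validate("username", "j'; DROP --")) # False: injection-style input rejected
```

The whitelist approach, rejecting anything that doesn't match a known-good shape, is what pre-built validators encode so developers don't each write it from scratch.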

Code is going to get messed up. At least you want to give [developers] the tools and provide things like best practices that document some of the issues, how they crop up and how to solve them. That has improved a lot over the last couple of years.

Does writing secure software require a fundamental change in how shops go about the task?
Yes, it does. You really need to include that security component in every piece of the software lifecycle, from original requirements and definitions to testing, in particular.

The sad part is a lot of computer science education doesn't really take that into account at this point. The companies that write code, like Microsoft, have to do the education internally.

So are any schools doing a good job with this yet?
Not really. There are computer security programs, but they really focus on the network and not secure coding.

Those tools are really a little bit lacking right now. A couple of the more modern software packages do include those switches. There's a lot more to do in that realm, particularly in the area of software testing, like fuzzing, which is often used on the offensive side. A lot of coders don't know how to use fuzzing, though they should apply it before release, because attackers will use it against their software.
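Fuzzing, as Ullrich uses the term, means throwing malformed or random input at a program to shake out crashes before attackers do. A toy sketch in Python (the buggy parser and all names are invented for illustration; real fuzzers such as coverage-guided tools are far more sophisticated):

```python
import random

def toy_parser(data: bytes):
    """A deliberately buggy parser: trusts the first byte as an index into the rest."""
    if len(data) < 2:
        raise ValueError("too short")  # clean, expected rejection
    offset = data[0]
    return data[1:][offset]            # IndexError when offset is out of range

def fuzz(target, rounds=1000, seed=1):
    """Throw random byte strings at the target and collect unexpected exceptions."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 20)))
        try:
            target(data)
        except ValueError:
            pass  # the parser rejected the input cleanly, as designed
        except Exception as exc:  # anything else is a bug worth investigating
            crashes.append((data, repr(exc)))
    return crashes

found = fuzz(toy_parser)
print(len(found), "inputs triggered unexpected exceptions")
```

Even this crude random-input loop surfaces the out-of-range bug; the point is that developers can run exactly this kind of test against their own code before shipping it.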

How real is the threat to private business?
It's a very real threat and it happens all the time, particularly to Web-based applications. High-value targets like banks and e-commerce sites that run their own software are pretty regularly exploited [when they] have bugs in in-house software. And they're very rarely publicly reported. They only report if they have to, like with the law in California. It's a very big problem.

What tools do you recommend to defend yourself?
On the Web-development end, most of the common vulnerability scanners: Nikto is an open source one, and there's a commercial one from SPI Dynamics. Watchfire is another company. They make fairly easy-to-use commercial software that you can test your own site with. Some, like SPI Dynamics, also look at the source code.

Some others do source code analysis, like Fortify SCA. Those are pretty expensive and require some skill to work with; you need to spend a couple of days, maybe a week, to get familiar with them. Sometimes you might bring in outside help, a consultant or someone else, to work with you.

What about in the course of writing software? Any thoughts on making sure you don't introduce vulnerabilities?
For security-critical parts like session tracking in Web applications and input validation, you should try to find an already written library. You shouldn't try to reinvent the wheel in these security-exposed areas. There's a little bit of developer psychology here: most developers don't like to use code written by others.

Web developers change jobs very frequently, and every new developer you hire wants to rewrite code that was done before. You should try to reuse code and not write everything from scratch.
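The reuse advice above can be made concrete. A minimal sketch in Python (the standard library here stands in for whatever vetted library your own platform provides): generating session tokens from a well-reviewed cryptographic primitive instead of a hand-rolled scheme.

```python
import secrets

# "Don't reinvent the wheel": a home-grown token scheme based on time or
# counters would be guessable; library code like this has already had its
# mistakes found and fixed by many eyes.
def new_session_token() -> str:
    """Generate an unpredictable, URL-safe session identifier."""
    return secrets.token_urlsafe(32)  # 32 random bytes, base64url-encoded

token = new_session_token()
print(token)
```

One line of vetted library code replaces exactly the kind of security-critical custom code that a departing developer's successor would otherwise want to rewrite.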

That's a good point about code issues introduced by developers leaving the company.
It's a huge problem in the industry. The first thing is retaining good developers, if you can.

The other is having well-developed policies on how code is written, [and also] having some form of formal training program. With our Web applications security course, we have companies that go there every couple years to train their developers in secure coding practices. Having something like that either internally or externally really helps developers get up to speed.

Six months ago or so, a particularly nasty JavaScript vulnerability emerged around Adobe's PDF Helper applet, which sits in the taskbar. What do you think those guys were thinking?
The developer is thinking, 'It's a really cool taskbar applet I've got there.' And again, the niftier the features are the better. And it's like the conversion to Web 2.0 and AJAX. In the original model you submitted one page at a time and with a fairly well-designed application interface. But now, because it's cool and you want to have the latest keyword in your resume, you're exposing the entire API via AJAX.

For me, whenever I think about a feature, I'm always thinking about vulnerabilities. But traditionally if you think about features, you think about selling points. That's just a mindset you have to change there.

Microsoft's ActiveX seems to fit that description: the end of the wired-up feature gravy train, yet many enterprises are still married to what is essentially a very risky technology. What are they doing about it?
They live with it and cross their fingers. That's the other problem: it's so hard to take back a feature. It just takes a couple of users relying on an ActiveX control and then you can't [afford to] turn it off.

Last year SAP's accounting software wouldn't work anymore because some ActiveX feature got disabled with a patch. That's really very difficult, to take a feature back to some extent. It's almost impossible.

The true cost of a feature in software isn't well understood. People always think about how much coding time it takes to complete a feature. But by adding a feature you add risk, [and] you add maintenance costs down the road.

Is managed code in the form of .NET and Java a step forward in terms of application security?
It has been a good step forward. [When] you talk about management, the one thing that's really still lacking is code management. Who's writing code? I think that's a big problem and becomes an even bigger problem as more and more people write code.

Those more complete frameworks have to be a step in the right direction. But you still need to know how to use them, particularly as you move to more GUIs with drag-and-drop code writing.

Looking down the road a bit, what kinds of threats do you see emerging?
Further out, I really think it's more distributed, network-centric computing. As you receive more and more input to your software from untrusted network sources, that becomes a huge issue. When every application essentially becomes a network application, that's the big challenge.

Are there specific concerns around rich Internet applications based on Silverlight, JavaFX and Adobe AIR?
It always comes down to validating input, and user input in particular; that's where 99 percent of the problems come from. If you don't have good coding practices and good standards for building well-structured applications, it gets harder to keep up with these network-computing approaches. If you think about the Java runtime and things like this, all of a sudden you have a lot of untrusted code running on your system that you may not even know is running there.

So what can developers do to prevent getting swamped by these threats?
I think the main issue is to really think like an attacker more and play with attack tools. Every developer at some point should really attack their own software and use tools from attackers, so they can recognize the threat and know how to defend against it. You should have a playpen there, a network you can use to run insecure tools like hacking tools and such.
