Practical .NET
The Sociology of Application Security: Less Can Be More
The critical part of security isn't your code; it's the people using your applications. Because of that, piling on security procedures often makes your applications less, not more, secure.
First rule of security: If a user is regularly violating your security procedures, it's not the user's fault; it's yours. You've made security too expensive.
For example, I was at a client's site where a development team was setting up an environment to try out a new application. As they worked, they described to me which of their organization's security procedures for setting up these environments they were violating. These people weren't stupid or lazy, and they weren't ignorant of the rules -- in fact, they gave me a very detailed account of exactly which procedures they were happily violating.
So, why was the team busy violating security? Because, like most human beings, the team was constantly doing cost/benefit analyses: They weighed the costs of following the security procedures against the benefits of following them. For setting up a test environment, those procedures were difficult, awkward and hard to implement. As a result, the costs of following those procedures vastly exceeded any benefits the team could see in doing so. Those procedures had actually made the organization less, not more, secure because they all but ensured that people would violate them.
But that doesn't really answer the question -- it just moves the question around to: "Why were the security procedures difficult, awkward and hard to implement? And why didn't the team see the benefits of following them?"
Trying to Manage the Cost Horizon
Part of the blame for those procedures lies with the people who designed them: From their point of view, no cost was too high for someone else to pay. But part of the blame also lies with the team that wasn't following the procedures: They were ignoring the costs of violating them.
The benefits of following security procedures are, unfortunately, all about "cost avoidance" -- specifically, avoiding the costs of a security breach. The problem is that people have a "cost horizon," which describes how close a cost has to be for someone to be aware of it. When it comes to security, most people's cost horizon is very close: Any cost that isn't immediately in front of them is invisible. From the team's point of view, there were no visible costs to avoid and, as a result, no benefits to be gained by following the security procedures.
You might think the solution to this problem is to move people's cost horizon out far enough that they can see the costs of a security breach. But you have only three tools for moving that horizon: scolding people, educating them and punishing them. I think we can all agree that scolding doesn't work. Personally, I don't think education can move the horizon out very far, either: Soon after the education process is over, the horizon drifts back to its original position.
That means your only real option for moving the cost horizon is to insert your own costs inside it: To punish people for not following procedures, even if nothing bad actually happens. For punishment to work without being severe, it must be certain -- people have to believe they'll be caught -- and if the punishment isn't certain, you have to compensate by making it more severe. So either you implement an intrusive surveillance system that ensures you catch everyone, or you ratchet up the punishment to the point where you're firing the odd person you do catch for relatively trivial offenses. Neither of these is good for morale.
Trying to move the cost horizon is a waste of your time. Give up now.
The Right Answer
In fact, the only real solution for getting people to follow security procedures is to lower the costs of following those procedures to the point where the resulting cost/benefit analysis makes sense to your users. The team at my client's site would have been perfectly willing to follow their company's security procedures if the costs of doing so had seemed anywhere close to what the team would call reasonable. I can't discuss the team's situation (at least not without violating security), so I'll use an example we're all familiar with to demonstrate this: passwords.
First rule of passwords: Three people can keep a secret if two of them are dead. Your password is secure only if there's exactly one record of it and that record is in your head. Therefore, any procedure that causes people to record their passwords makes an application less, not more, secure. In addition, if people use the same password for multiple applications, the security of all of those applications is reduced.
Unfortunately, the current security procedure for passwords is to ask people to do something very expensive: Use longer, more complicated passwords and never recycle them. This procedure pretty much guarantees that people will both record their passwords somewhere and recycle them. Pretending otherwise is, at best, a polite fiction and, at worst, a lie. The demand for ever more complicated passwords makes applications less, not more, secure.
But think of your ATM card's PIN: It's only four characters long, all of them digits. That's far less secure than any password an application currently demands, as the quick calculation below shows. Yet the four-digit PIN is the standard for some of the most security-conscious organizations in the world: banks. Why is this?
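As a rough illustration (the arithmetic here is mine -- a back-of-the-envelope sketch, not anything from a banking standard), compare the two search spaces:

using System;

class KeyspaceComparison
{
    static void Main()
    {
        // A four-digit PIN: each of four positions is one of 10 digits.
        double pinKeyspace = Math.Pow(10, 4);       // 10,000 possibilities

        // A modest eight-character password drawn from a-z, A-Z and 0-9.
        double passwordKeyspace = Math.Pow(62, 8);  // roughly 218 trillion possibilities

        Console.WriteLine($"PIN keyspace:      {pinKeyspace:N0}");
        Console.WriteLine($"Password keyspace: {passwordKeyspace:N0}");
        Console.WriteLine($"The password space is {passwordKeyspace / pinKeyspace:N0} times larger.");
    }
}

By raw arithmetic, the PIN loses by a factor of more than 20 billion. The rest of the banks' approach is what makes that acceptable.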
The answer is that banks take a four-pronged approach to security that reduces the costs to the user. First, your PIN is just one part of a two-factor authentication scheme (your card is the other factor). When you have two-factor authentication, each factor can be relatively insecure. Further, the other factor (the card) is a physical device that's less open to automated attacks than a password is. On top of that, banks obsessively monitor activity on your accounts and are ruthless about shutting down accounts when they detect "unusual" behavior; banks are willing to raise a lot of false positives. Those false positives would drive users crazy except that banks also staff their help desks appropriately to ensure that, if your account is shut down by a false positive, you can get it reopened relatively quickly.
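To make the first two prongs concrete, here's a minimal sketch of a two-factor check in C#. Everything in it -- the class, the member names, the plain string comparisons -- is hypothetical and for illustration only, not a real banking API:

// Hypothetical sketch: access requires both possession of the card (factor 1)
// and knowledge of the PIN (factor 2). Neither factor is strong on its own.
class AtmAuthenticator
{
    // A real system would read the token from the card's chip and verify
    // the PIN against a salted hash on a secure server, never in plain text.
    public bool Authenticate(string presentedCardToken, string storedCardToken,
                             string enteredPin, string storedPin)
    {
        bool hasCard = presentedCardToken == storedCardToken;  // something you have
        bool knowsPin = enteredPin == storedPin;               // something you know

        // The security comes from requiring both weak factors together.
        return hasCard && knowsPin;
    }
}

The point of the design is in that final line: It's the conjunction of two weak factors, not the strength of either one, that carries the security load.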
This four-pronged approach (two-factor authentication, a physical device, constant monitoring that raises many false positives and staffing up to resolve those false positives quickly) keeps the cost to the user of managing PINs low. And users, with their costs kept low, actually follow procedures: They're less likely to write their PINs down and less likely to use the same PIN for multiple applications. The relatively weak PIN creates a more, not less, secure application.
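The monitoring prong can be sketched the same way. The three-times-average threshold and everything else below are invented for illustration (real banks obviously use far more sophisticated models), but the tradeoff is the same one:

using System.Collections.Generic;
using System.Linq;

// Hypothetical monitor: flag anything outside the account's usual pattern,
// deliberately tolerating false positives that a well-staffed help desk
// can clear quickly.
class ActivityMonitor
{
    private readonly Queue<decimal> _recentAmounts = new Queue<decimal>();

    public bool IsSuspicious(decimal amount)
    {
        // Let the first few transactions establish a baseline.
        if (_recentAmounts.Count < 5)
        {
            _recentAmounts.Enqueue(amount);
            return false;
        }

        // A deliberately aggressive rule: anything over three times the
        // recent average gets flagged, guaranteeing plenty of false positives.
        decimal average = _recentAmounts.Average();
        bool suspicious = amount > average * 3;

        if (!suspicious)
        {
            _recentAmounts.Enqueue(amount);
            if (_recentAmounts.Count > 20) _recentAmounts.Dequeue();  // sliding window
        }
        return suspicious;
    }
}

Set the threshold low and you freeze accounts over grocery runs; set it high and breaches slip through. Banks choose the low threshold and pay for the help desk staff to clean up after it, which is exactly the point: They spend their own money to keep the user's costs down.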
I know, for example, that I don't use the same PIN for any two applications and that I've never written any of my PINs down. I can't say the same thing about any of my passwords. Can you?
About the Author
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter tweets about his VSM columns with the hashtag #vogelarticles. His blog posts on user experience design can be found at http://blog.learningtree.com/tag/ui/.