Human Error: Living With the Weakest Link

Opinion

“We have met the enemy, and he is us.”
– Walt Kelly’s Pogo

Computer security breaches have become so common as to seem like a force of nature we can’t stop or control, like hurricanes or epidemics. After each one, experts scramble to plug holes, rewrite security plans, and explain at length why that particular problem will never happen again. We want to believe that with just a few more bug fixes, our systems will be truly secure.

Unfortunately, perfect security will elude us for as long as human beings are involved, because humans—for all our greatness—are imperfect. From the data entry clerk to the Director of Security to the CEO, everyone makes mistakes, and sometimes those mistakes result in security breaches, either immediately or long after the original mistakes were made.

The truth is, our security systems are indeed improving steadily, and tend to get better after each breach. But even if we believe we might create perfect security technology, human nature doesn’t fundamentally change. Sometimes we’re fooled by social engineering attacks (like spear phishing), sometimes we misunderstand what we’re supposed to do when confronted with a cyber threat, and sometimes we just plain mess up. Software can be more or less perfected, but not human behavior. In fact, the more complex, stressful, or fast-paced the environments we work in, the more likely we are to make mistakes; this “human factor” is well documented in industries that rely on human performance for safety, such as aviation.

Given that human error is inevitable, we need to start supplementing our important efforts to educate users about security with more explicit plans for handling the next security-threatening human error. Just as we can prepare for the next tsunami by building higher sea walls and zoning to discourage construction in flood-prone areas, we can anticipate the ways some users will inevitably err, and plan around them.

Pre-Empting Human Behavior

One thing we can do is deflect and redirect errors to where they’ll do the least harm. Workers in nuclear power plants have often fitted generic-looking but potentially hazardous switches with beer tap handles or other distinctive grips that make it obvious a particular switch is especially dangerous. This may not decrease the likelihood of a worker throwing the wrong switch, but it may decrease the likelihood of throwing the worst possible switch, and it certainly helps outwit our own tired, stressed, or panicked subconscious, which might otherwise throw the switch without thinking.

Paradoxically, an overall system is often safer if we give users fewer warnings, not more. A warning that appears too frequently is like the boy who cried wolf: users become so accustomed to ignoring it that they barely notice it when it truly matters.

As our software systems grow more complex, it is high time they developed a richer model of the user. For a brand-new user, there might be many risky actions worth warning about. But by the time a user has become adept at a system, warnings that were initially useful become nuisances to be ignored, which can eventually cause the user to overlook the occasional truly important warning scattered among the rest.

For example, if I tell a system to delete a whole directory full of files, and I’ve never done anything like that before, a warning is probably in order. If I’ve been using the system for years and have deleted directories often in the past with no bad consequences, the warning may do more harm—by desensitizing me to warnings—than good.
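To make the idea concrete, here is a minimal sketch of what such history-aware warning logic might look like; the action names, threshold, and bookkeeping are illustrative assumptions, not a description of any existing product.

```python
# Hypothetical sketch of history-aware warnings: suppress a confirmation
# prompt once a user has repeatedly performed the same risky action without
# incident. The threshold and bookkeeping are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class ActionHistory:
    successes: int = 0   # completions with no rollback or complaint afterward
    incidents: int = 0   # completions later undone, reported, or flagged


@dataclass
class UserModel:
    history: dict[str, ActionHistory] = field(default_factory=dict)

    def record(self, action: str, ok: bool) -> None:
        h = self.history.setdefault(action, ActionHistory())
        if ok:
            h.successes += 1
        else:
            h.incidents += 1

    def should_warn(self, action: str, min_successes: int = 5) -> bool:
        """Warn unless the user has a clean track record with this action."""
        h = self.history.get(action, ActionHistory())
        return h.incidents > 0 or h.successes < min_successes


# A brand-new user gets the prompt; a veteran who has deleted directories
# cleanly many times does not.
user = UserModel()
if user.should_warn("delete_directory"):
    print("Really delete this directory and all of its contents? [y/N]")
```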

For this kind of user modeling to work, it is probably important that it be made completely automatic. To the extent that a human being decides who is a sophisticated user and who is not, the decision becomes a political one, with possible career-affecting consequences, and is subject to all the interpersonal drama of a performance review. Such a system could become an administrative nightmare, dragging down team morale and performance as everyone defaults to the lowest common denominator in an effort to avoid risk and doubt.

It seems much more likely to be effective if the system itself decides who is a sophisticated user, and matches its warnings and other constraints to the capabilities it ascribes to that user. Counterintuitively, people are more likely to accept being labeled and categorized by an “impartial” computer than by a human being with whom they are already enmeshed in a complex web of relationships and dependencies.
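One way to picture such automatic classification is a simple score computed from signals the system already observes; the signals, weights, and tier names below are purely illustrative assumptions, not anyone's actual heuristics.

```python
# Hypothetical sketch: derive a coarse sophistication tier from observed usage
# and map it to how many warnings the user sees. Signals, weights, and cutoffs
# are illustrative assumptions.

def sophistication_tier(days_active: int,
                        distinct_features_used: int,
                        warnings_overridden_safely: int,
                        incidents_caused: int) -> str:
    score = (
        0.3 * min(days_active, 365) / 365
        + 0.3 * min(distinct_features_used, 50) / 50
        + 0.4 * min(warnings_overridden_safely, 20) / 20
        - 0.2 * incidents_caused
    )
    if score < 0.3:
        return "novice"        # warn about most risky actions
    if score < 0.7:
        return "intermediate"  # warn only about hard-to-undo actions
    return "expert"            # warn only about the rarest, most dangerous actions


# Example: a long-tenured user with a clean record is treated as an expert.
print(sophistication_tier(days_active=400,
                          distinct_features_used=40,
                          warnings_overridden_safely=15,
                          incidents_caused=0))
```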

Striking a Balance

The future of managing risks associated with human behavior will be a balancing act: we need technology that can pre-empt human error, and in some cases even replace human labor wholesale with automated processes; at the same time, we must avoid scenarios where users feel they are living and working in a robotic “nanny state” in which they have little control.

At the end of the day, while human error is the weakest link in an organization’s security, human creativity and ingenuity are still the critical factors in overall security—and we can’t risk disrupting that. If we can provide technology that empowers users to do their best work while helping them to avoid the small errors that can have large consequences, then we will strike a balance that offers the best of both worlds—security and freedom—in the workplace.

Nathaniel Borenstein is chief scientist at e-mail management firm Mimecast. Based in Michigan, he is the co-creator of the MIME e-mail standard and previously co-founded First Virtual Holdings and NetPOS. Follow @drmime
