Data Security Is a Risk-Management Problem
It’s unproductive to think of security as a series of threats to be overcome, a computer scientist argues.
Computer security is an unsolvable problem. So instead of trying to solve it, companies should think of network security as a set of risks that are inherent in doing business online. Viewing security from that perspective will lead to better decisions and superior technological design.
Obviously, security gives rise to some straightforward problems, and businesses should examine whether they have solved them. The recent revelation that the payment protocols in some widely used e-commerce sites allowed customers to purchase even physical goods without paying is an example of a security problem that is quantifiable and solvable.
But more often, computer security is better tackled with a risk-management approach, one that does not require exact quantification. It’s a personnel problem—much like office conflict, minor theft, misrepresentation of employee credentials, and employee health. Consider that employees who take risks to get their jobs done are both assets to the organization and threats to computer security. For example, an employee who manages to tunnel around the corporate firewall to log in remotely sees only the benefits of access from home; the employee’s supervisor sees only increased productivity. Security risks are part of getting the job done: networking and connectivity carry inherent risk, just as hiring a human being does.
Security as risk management is not a new idea, but recognizing that the risks can never be entirely eliminated requires a different way of thinking. “Threats” must be countered or neutralized. In contrast, risks are mitigated or shifted. In the 1970s environmental regulators observed that the greatest risks may come from the pursuit of zero risks, and much the same can apply when critical business functionality is limited by security.
Because effective security management requires managing the human element, risk communication needs to be part of any mitigation strategy. Individuals will work around security constraints that prevent them from working effectively: “The computer wouldn’t let me” is not an acceptable excuse for failure. If the choice is between security compliance and getting the job done, compliance will lose every time. An employee who takes files home in order to work on the weekend experiences only increased output, and intends only the best for the organization. Conversely, if employees understand that they are indeed taking risks, and putting the organization at risk, then they can be persuaded to protect it. Computer security can then be transformed from a set of seemingly arbitrary requirements, created by distant technical staff who do not understand the work to be done, into a routine of daily life, like locking the car, that everyone accepts.
Consider the difference between role-based access control (in which the data an employee is allowed to access is determined by his or her place in the organization) and a newer model called risk-based or incentive-based access control. Role-based access control assumes a perfect understanding of an individual’s role and his or her need for access to resources. Access may be difficult to gain, but it is typically long-lived and inflexibly defined by IT or HR departments. Employees are therefore motivated to acquire as much access as possible, so as not to be blocked from seeing a file at what appears to them an arbitrarily determined moment. That often leads to workarounds, such as password sharing among employees who must access multiple systems to do their jobs. Yet sharing passwords structurally and systematically subverts access control.
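The static model described above can be sketched in a few lines. This is a minimal illustration, not a real product’s API; the role names, resources, and permission table are all hypothetical. The key property is that access depends only on the role assignment, never on current need:

```python
# Minimal sketch of role-based access control. Roles and resources are
# illustrative; real RBAC systems add sessions, role hierarchies, and audit.
ROLE_PERMISSIONS = {
    "analyst": {"sales_reports"},
    "manager": {"sales_reports", "payroll"},
}

def can_access(role: str, resource: str) -> bool:
    # Access is static: it is determined entirely by the role table,
    # regardless of whether the employee needs the resource right now.
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("analyst", "payroll"))  # False: blocked no matter how urgent
```

Because the table is fixed and slow to change, an employee blocked by it has no legitimate recourse in the moment—which is exactly what drives the password-sharing workarounds described above.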
In contrast, incentive-based access control provides a rough estimate of the risk inherent in accessing information. It grants an employee access to company data for the time period requested—for a price. Each employee has a risk budget, which is spent at a rate corresponding to particular access rights and actions. An employee who keeps access rights after they are no longer needed will thus have less freedom to work with sensitive data later, while an employee who urgently needs access can obtain it quickly and efficiently. Certain tasks can, of course, remain prohibited at all times, such as approving a bidder or issuing a check. But overall, giving employees a risk budget rather than a static role gives them an incentive to minimize the employer’s risks.
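The budget mechanics above can be sketched as follows. The class name, costs, and per-period accounting are assumptions made for illustration—the article describes the incentive structure, not a concrete implementation. Access is granted immediately while budget remains, and held rights drain the budget each period, so releasing unneeded access is in the employee’s own interest:

```python
class RiskBudgetAccess:
    """Sketch of incentive-based access control: each employee holds a
    risk budget that held access rights consume over time (illustrative)."""

    def __init__(self, budget: float):
        self.budget = budget
        self.held = {}  # resource -> risk cost charged per period

    def request(self, resource: str, cost: float) -> bool:
        # Grant immediately while budget remains: no role table to consult.
        if self.budget > 0:
            self.held[resource] = cost
            return True
        return False

    def release(self, resource: str) -> None:
        # Giving up unneeded access stops the drain on the budget.
        self.held.pop(resource, None)

    def tick(self) -> None:
        # Each period, every held right consumes budget, so hoarding
        # access leaves less freedom to work with sensitive data later.
        self.budget -= sum(self.held.values())


emp = RiskBudgetAccess(budget=10.0)
emp.request("customer_db", cost=2.0)  # granted at once
emp.tick(); emp.tick()                # two periods of held access
print(emp.budget)                     # 6.0: holding the right cost 2.0/period
```

The design choice worth noting is that the system never asks *why* access is needed—it simply prices the risk and lets the employee decide whether holding it is worth the cost.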
Some researchers have proposed broadening the adoption of security standards by changing default settings in software and hardware so that security is difficult to avoid, or by striving to make overall systems easier to use. Inarguably, it is sometimes appropriate to force or persuade people to use security technologies. But each of these steps fails as a model for security as a whole because it approaches security as a goal that can be reached rather than a risk that must be managed.
Viewing security as a series of threats that must be overcome is a vestige of the military model of computer security. Computer security is not a war to be won against the malicious. Rather, computer security is a continuing interaction with the networked environment. Just as the natural environment has inherent risks and benefits, so does the networked environment.
L. Jean Camp is professor of informatics and computing at Indiana University and the author of Trust and Risk in Internet Commerce.