In early August, Black Hat celebrated 25 years of business at its annual event in Las Vegas. The primary show, held on Aug. 10-11, featured trainings, briefings and keynotes. Industrial Cybersecurity Pulse attended several of the briefings, including Harm Reduction: A Framework for Effective and Compassionate Security Guidance, by Kyle Tobener, vice president and head of security and information technology (IT) for Copado.
Harm reduction is a concept borrowed from health care that Tobener suggested could be useful in dealing with cybersecurity problems. It has been used for decades in other fields and might be a more effective way of curbing risky online behaviors that are harming our companies and making our nations less secure.
What is harm reduction?
Cybersecurity practitioners in defensive roles are regularly confronted with high-risk behaviors from the populations they protect. As evidenced by high-profile industrial breaches like Oldsmar and SolarWinds, it can take just one instance of poor security hygiene to let an intruder in. In theory, the security response should be simple: Inform the user of the risks and get them to stop. Phishing email? Don’t click those links. Dangerous software on the internet? Don’t download it. Unfortunately, all-or-nothing guidance like this rarely fits all members of a population and can lead to unintended consequences and increased harm. Tobener’s talk set out to explore how cybersecurity defenders can help those who can’t or won’t stop engaging in risky behaviors.
For several decades, health care practitioners have been exploring an alternative to all-or-nothing guidance called harm reduction. Originally designed in response to the spread of HIV amongst intravenous drug users in the 1980s, harm reduction focuses on decreasing the negative consequences of high-risk behaviors without requiring abstinence. In doing so, harm reduction recognizes that people engaging in high-risk behaviors can still make positive changes to protect themselves and others. At its core, harm reduction is a pragmatic response to inherently complicated humans: If a high-risk behavior with harmful consequences is going to happen regardless, the focus should be on reducing risks for the individual and the community around that individual.
Tobener’s Black Hat presentation explored the core principles of harm reduction, reviewed the research that informs its strategies and proposed a framework for applying harm reduction to cybersecurity risks.
Harm reduction vs. abstinence
According to medical research, fully eradicating risk-taking behaviors is simply impossible. In fact, abstinence-based guidance may actually increase harm for individuals and populations. Implementing harm reduction strategies and taking a pragmatic, compassionate approach to security may actually be more effective, cost less and even reduce burnout among cybersecurity practitioners, Tobener said.
For years, various programs used fear to curb bad behaviors such as drinking and driving or drug use. These programs didn’t work and instead had a tendency to make the behaviors worse.
Fear is also a common tactic in trying to curb harmful security behaviors. There are lots of risky behaviors we’re trying to curb, Tobener said, but are we making the problem worse through these fear-based tactics?
The goal is always to give better security guidance for everyone, and Tobener suggested harm reduction is the way to accomplish this. He provided three major takeaways about harm reduction:
- It accepts that risk-taking behaviors are here to stay.
- It prioritizes the reduction of negative consequences.
- It embraces compassion when giving guidance.
The typical guidance in cybersecurity is straightforward, direct and fear-based. “Don’t click that link. It’s unsafe!” “Your passwords must be complicated and ever-changing!” “Don’t ever use social media like TikTok. Your information is at risk!”
This advice all falls under the category of use reduction. While use reduction does attempt to get people not to do bad things, it does nothing about the original incentives that drove people to that behavior in the first place. For example, “It’s hard to remember a ton of complicated passwords,” or, “TikTok is fun.”
Using fear-based messaging that preaches abstinence can attach social stigma to bad behaviors and make people feel inferior to their peers. Tobener used a medical term to describe the effect: iatrogenic, meaning a treatment that causes more harm than good.
Why harm reduction?
Instead of shaming people into reducing risk, Tobener suggested a set of practical strategies and ideas aimed at reducing the negative consequences associated with human behaviors.
The concept of harm reduction began in Liverpool, England, where needle exchanges were introduced to prevent the spread of HIV. Needle exchange programs have been proven to help prevent the spread of diseases by providing clean needles to drug users and disposing of used ones.
Cybersecurity personnel must accept that risk-taking behaviors are here to stay. We take risks for reasons, and the incentives often outweigh the risks. Use reduction is not efficient here. It has diminishing returns and can have unintended consequences.
Two things Tobener pointed out were the iron law of prohibition and the abstinence violation effect. The iron law of prohibition posits that when you outlaw something, it will increase in potency and become harder to detect. The abstinence violation effect states that when people are faced with impractical use reduction goals, it can increase risk taking because people know they can’t meet those overly high goals. According to Tobener, this is backed by extensive medical research into topics like the war on drugs, DARE, alcohol prohibition and teenage pregnancy prevention.
Cybersecurity professionals should accept that eradication is not the goal and that it’s a waste of time to pursue the impossible. While use reduction is relevant, it can’t be the only treatment. It needs support from other, more pragmatic measures like harm reduction.
People are given password guidance and understand it, but they still make poor decisions because of the incentives. Risky behaviors like choosing simple passwords, reusing passwords or keeping password journals save time and mental energy. Because it’s likely humans will always reuse passwords, we instead need to prioritize the reduction of negative consequences.
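To make that idea concrete, here is a minimal, hypothetical sketch (not from Tobener’s talk) of what harm-reduction-style password handling could look like in code: rather than rejecting a weak or reused password outright (use reduction), the system accepts it and attaches compensating controls that shrink the blast radius. All function and control names here are illustrative assumptions.

```python
# Hypothetical harm-reduction approach to password policy: accept the
# risky behavior, but layer on controls that reduce its consequences.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}


def assess_password(password: str, seen_before: bool) -> dict:
    """Return compensating controls instead of a hard 'no'."""
    controls = []
    if password.lower() in COMMON_PASSWORDS:
        controls.append("require_mfa")          # a second factor limits damage
        controls.append("alert_on_new_device")  # flags credential stuffing
    if seen_before:
        controls.append("require_mfa")          # reuse raises stuffing risk
        controls.append("shorten_session_ttl")  # limits a stolen session
    return {
        "accepted": True,  # harm reduction: meet the user where they are
        "controls": sorted(set(controls)),
    }


result = assess_password("letmein", seen_before=True)
```

The design choice mirrors the talk’s argument: the user’s incentive (an easy-to-remember password) is left intact, while the negative consequences are actively reduced.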
Risk is not binary; it exists on a spectrum, Tobener said. Any steps on the risk spectrum toward lower harm are valuable. The big problem in both health care and cybersecurity is that people who cannot abstain create risk not only for themselves but for the population around them. Giving feasible, practical, effective guidance is how harm reduction and use reduction can work together. Tobener cited three examples:
- E-cigarettes in the U.S. vs. the U.K. The U.S. banned them, while the U.K. regulated them. The result in the U.S. was higher-potency products, a black market and more deaths.
- Sex education vs. virginity pledges. Teens who pledged were found to be equally as likely to have sex, but those who received sex education were more likely to use contraception, get tested, etc.
- The war on drugs. The “war” only increased use, while needle exchanges have reduced HIV infections by 70%, and safe injection sites have decreased overdoses by 35%.
This is similar to defense-in-depth in cybersecurity. No individual control is enough, but you can layer controls. The basic idea is to embrace compassion while providing guidance. It’s not, “Don’t do that!” It should be, “Try not to, but if you do, here are some ways to be safer.” If you can educate people with your expertise, you will give better guidance and build trust. Instead of instilling fear, encourage change through collaboration, accessibility and kindness.
Tobener suggested hitching your wagon to developer productivity before concluding his presentation with several reasons to use harm reduction when trying to curb risky cyber behaviors:
- There is stigma associated with high-risk behaviors.
- It improves quality of life. If you can’t stop them, support them.
- People make better choices when they have access to support and education.
- Compassion makes you more effective; shaming makes you less effective.
- Caring for people is more fun, reduces burnout and improves practitioner efficacy.
- Shaming and stigmatizing reduce efficacy and increase harm.
- Compassionate care improves trust and reduces harm.
- Knowledge shaming is making us less secure.