Show Me the Incentives, and I'll Show You the Outcomes

Don't blame people for being the weakest link in cybersecurity; blame their incentives.
Here's a twist on a recent article titled "It's a knowledge problem! Or is it?"
A Convenient Myth

Every time there's a data breach in the news, the cybersecurity industry performs the same ritual. Executives issue solemn statements about taking security seriously. Someone sells more security awareness training. And everyone nods along when they inevitably hear "humans are the weakest link in security."

This is a convenient myth.

The most secure organizations don't have better training—they have better-aligned incentives.

(Mostly) Rational People

When we blame Bob (sorry, Bob) in accounting for clicking a phishing link and sharing his credentials with a credential harvesting page, we're avoiding an underlying truth: Bob made a rational choice given his incentives. His performance review depends on quickly processing invoices, not maintaining perfect security vigilance through 200 daily emails.

The fact that he clicked the link isn't a training failure; it's the predictable outcome of the system. Security and compliance teams demand "zero clicks" on suspicious links while requiring employees to process hundreds of emails daily, virtually all containing legitimate links, and while ignoring the technical controls that would protect users when they do click.

Admit Failure

Security practitioners are fond of saying that users will "always choose convenience over security." That statement isn't an insight; it's an admission of failure. If your security strategy depends on people consistently making inconvenient choices, it is designed to fail.

System Design Failures

Consider passwords. We've spent decades crafting increasingly byzantine password policies, and then we act surprised when people circumvent them. The standard response is more training and stricter policies. But if a system requires users to remember 16-character passwords with special characters that change every 90 days, the rational response is to write them down or increment the last digit. That's not user failure; it's system design failure.

Managing dozens of complex, unique passwords without a password manager is cognitively expensive. If we force this burden onto employees without making it easy, password reuse isn't a failure of compliance—it's an efficient adaptation to a poorly designed system. People aren't trying to be insecure; they're trying to get their work done with the least friction possible.
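
To make that concrete, here is a minimal sketch of what a saner policy could look like, roughly in the spirit of NIST SP 800-63B: check length and known-compromised passwords, and drop the arbitrary complexity and rotation rules. The word list and function below are illustrative stand-ins, not a production implementation.

```python
# Illustrative sketch of a simpler, NIST SP 800-63B-style password policy:
# length plus a breached-password check, no special-character rules,
# no 90-day rotation. COMMON_PASSWORDS stands in for a real breach corpus.

COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "p@ssw0rd"}

def is_acceptable(password: str) -> tuple[bool, str]:
    """Return (ok, reason) using length and breach checks only."""
    if len(password) < 8:
        return False, "too short (minimum 8 characters)"
    if password.lower() in COMMON_PASSWORDS:
        return False, "appears in a known-breached password list"
    return True, "ok"

print(is_acceptable("correct horse battery staple"))  # (True, 'ok')
print(is_acceptable("P@ssw0rd"))  # complexity theater still fails the breach check
```

A policy like this asks less of human memory, so people have less reason to route around it.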

The same pattern repeats everywhere in security. We expect perfect compliance with policies that make work harder. We demand zero clicks on suspicious links in a world where clicking links is the primary way people do their jobs. We insist on pristine system hardening while providing neither the time nor the tools to accomplish it. We build security to protect idealized environments rather than the messy realities where people actually work.

Or consider compliance requirements that force documentation and box-checking around security controls that have no discernible impact. We've built entire industries around checking boxes rather than reducing risk, and then we wonder why breaches happen anyway.

Developers Under Pressure

Consider developers rushing to meet a deadline, or any team under time pressure. When leadership is breathing down their necks about a delayed release, security becomes an inconvenient checkbox rather than a priority.

We shouldn't be surprised when developers create "temporary" workarounds to pass security scans or when they push code directly to production to meet deadlines. That's not negligence—it's a rational response to incentives that reward shipping over security.
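
What does that workaround look like in practice? Here is a hypothetical example: instead of fixing a command-injection finding, the developer silences the scanner. The # nosec comment is Bandit's real suppression marker; the script and function names are illustrative.

```python
import subprocess

def deploy(build_id: str) -> None:
    # "Temporary" workaround added the night before release: Bandit flagged
    # shell=True as a command-injection risk (B602), so the finding was
    # suppressed rather than fixed. The deadline was met; the risk shipped.
    subprocess.call(f"./deploy.sh {build_id}", shell=True)  # nosec
```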

In organizations where security delays are career-limiting while shipping buggy code has minimal consequences, developers make the obvious choice.

Ship It vs Security

Security teams are incentivized to add more controls, while product teams are incentivized to ship faster. This fundamental tension explains most security dysfunction in organizations. Security teams are rewarded for implementing controls and avoiding breaches—the more comprehensive, the better. Meanwhile, product teams are measured on velocity and customer satisfaction.

Neither side is wrong; they're simply optimizing for what their organization actually values. The predictable result is an adversarial relationship where security becomes the "department of no" and product teams view security as an obstacle to be overcome rather than a partner in delivering value.

Consider popular authentication libraries and software development kits that gate access to a website or its resources. Few teams are incentivized to protect their users from choosing common or leaked passwords. If the library doesn't support that feature, of course it's not getting prioritized.
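
As a sketch of how little code the feature takes, here is a leaked-password check against the Have I Been Pwned "Pwned Passwords" range API, which uses k-anonymity: only the first five characters of the password's SHA-1 hash ever leave the machine. Error handling and caching are omitted.

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Return how often a password appears in known breaches (0 = not found)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

if times_pwned("hunter2") > 0:
    print("Pick a different password: this one has appeared in breaches.")
```

If libraries shipped a check like this as a default, product teams would never have to choose whether to prioritize it.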

The Core Problem

The fundamental error here lies in treating security as something separate from work rather than part of it. When security and productivity goals conflict, productivity will win nearly every time.

Not because people are careless, but because that's what they're actually measured on and rewarded for.

Align Incentives

Ask teams one question: "What security measure slows you down the most?" Then work to make their lives easier without significantly reducing security.

  1. Align security with business goals. Ask: "How do our security efforts support our business priorities?" How could they facilitate better business outcomes?
  2. Measure secure outcomes. Google's Project Zero team measures security success by vulnerability remediation times, not by controls implemented. Pick just one metric that matters (like time to fix critical vulnerabilities or percent of systems with EDR software running). Track it manually if needed; no fancy dashboard required (see the sketch after this list).
  3. Reward secure behavior. Recognize small security wins and give kudos to team members who spot and share phishing emails with the team. Send a personal thank-you when someone reports a security issue.
  4. Make security the easy path. Netflix has pioneered paved roads within application security. Engineers don't need to choose between security and productivity because security is built into their workflow. Find one repeated security task and create a reusable template. Identify one annoying security approval and try eliminating it with a paved road approach.
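
For item 2, here is a minimal sketch of what manual tracking can look like: a hand-maintained CSV and a few lines of Python to compute median days to fix critical vulnerabilities. The vulns.csv file name and column layout are assumptions for illustration.

```python
import csv
from datetime import date
from statistics import median

def median_days_to_fix(path: str = "vulns.csv") -> float:
    """Expects columns: id, severity, opened, fixed (ISO dates; fixed may be blank)."""
    days = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["severity"] == "critical" and row["fixed"]:
                opened = date.fromisoformat(row["opened"])
                fixed = date.fromisoformat(row["fixed"])
                days.append((fixed - opened).days)
    return median(days) if days else float("nan")

print(f"Median days to fix criticals: {median_days_to_fix():.1f}")
```

One number, updated weekly, beats a dashboard nobody reads.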

The best security strategies don't rely on users making the right choice every time—they make it hard to make the wrong choice. Until we start demanding that security systems acknowledge how humans actually behave under pressure, with limited attention and competing priorities, we'll continue to build systems that fail predictably.

So the next time you're tempted to blame the human element, remember: if you show me the incentives, I'll show you the outcomes. And if the outcomes aren't what you want, it's the incentives—not the humans—that need fixing.
