Cybersecurity and compliance have always been good at one thing: taking uncertainty and forcing it into structure.

We take something as abstract as threat and make it governable. We assign likelihood and impact, define ownership, implement controls, and monitor continuously. Over time, that discipline has turned cyber risk from something intangible into something organisations can actively manage.

Burnout hasn’t made it onto most risk registers, and yet it is widespread, measurable, and increasingly well understood. It affects performance in roles where small lapses carry outsized consequences. That isn’t an oversight. It reflects a gap in how organisations define risk in the first place.

The Assumption We Don’t Talk About

Most security and compliance models are built on a quiet assumption: that the people operating them are functioning at full cognitive capacity.

Consistently.

In the UK alone, poor mental health now costs employers £51 billion annually, with more than 22 million workdays lost each year to stress, anxiety, and depression. Nearly half of HR leaders now identify burnout as the biggest business risk for 2026. In technology roles, as many as 82% of employees report feeling close to burnout.

Set against that backdrop, the assumption of consistently high human performance starts to look less like a baseline and more like a vulnerability.

As mental health specialist Maria Drakoula put it: “Organisations assume perfect performance by employees, which, as a principle, is a security risk in itself.”

Most security frameworks are built for a workforce operating at full capacity. That workforce doesn’t exist.

Burnout, Reframed as Risk

If organisations treated burnout with the same discipline as other operational risks, I think the conversation would look very different. And it would begin, as all risk conversations do, with likelihood.

Across tech and security functions, burnout is not a fringe issue or a periodic spike; it is persistent. ISACA research found that 73% of European IT professionals have experienced work-related stress or burnout, while separate research found that 91% of CISOs report moderate or high levels of stress. In the UK, roughly four in five employees say they are close to burnout, with higher concentrations in technical roles.

This is not a tail risk. It is part of the operating environment.

The impact is more concrete than its usual treatment suggests. Burnout affects three things security depends on: attention, judgement, and motivation. When those degrade, so does the effectiveness of controls. Not dramatically at first, but incrementally: missed alerts, slower decisions, small errors, inconsistent process adherence.

The cumulative effect is measurable. Productivity can fall by up to 35% under poor mental health conditions. Employees lose nearly five hours a week to stress-related inefficiency. Presenteeism (people working while unwell) costs organisations two to three times more than absence.

None of this sits comfortably in a “wellbeing” category. It is, in effect, a degradation of the human layer within the control environment.

Burnout doesn’t arrive as a single event. Short bursts of pressure, audit cycles, and major incidents are manageable. But when those conditions persist, they stop being exceptional and become structural. Security teams, in particular, operate under sustained load: constant alerting, periodic audit peaks, and reactive response models. Over time, that produces familiar patterns: alert fatigue, cognitive overload, and process shortcuts. The system may still pass audits. It may still look compliant. But it becomes less reliable.

What It Would Mean to Take This Seriously

Recognising burnout as a first-class risk wouldn’t require a new framework. It would require using the one that already exists.

A burnout risk register would not look out of place alongside other operational risks. It would describe, in plain terms, the conditions that create exposure: sustained workload intensity, fragmented tooling, audit-driven spikes, and under-resourced teams. It would track leading indicators, not just absence or attrition, but signals already familiar to operational leaders:

  • rising backlogs and unresolved alerts
  • increasing error rates or rework
  • extended working hours beyond the contract
  • declining engagement in critical processes
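To make the shape of such an entry concrete, here is a minimal sketch of what a burnout line on an existing operational risk register could look like, using the conventional likelihood-times-impact scoring the article opens with. The field names, the 1–5 scales, the risk ID, and the owner title are all illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of an operational risk register (illustrative schema)."""
    risk_id: str
    description: str
    owner: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)
    leading_indicators: list = field(default_factory=list)
    controls: list = field(default_factory=list)

    @property
    def score(self) -> int:
        # Conventional likelihood x impact scoring, as used for the
        # other risks on the same register.
        return self.likelihood * self.impact

burnout = RiskEntry(
    risk_id="OPS-017",  # hypothetical identifier
    description="Sustained workload intensity degrading attention, "
                "judgement, and motivation across the security team",
    owner="Head of Security Operations",
    likelihood=4,  # persistent across the sector, not a tail risk
    impact=4,      # incremental degradation of control effectiveness
    leading_indicators=[
        "rising backlogs and unresolved alerts",
        "increasing error rates or rework",
        "extended working hours beyond the contract",
        "declining engagement in critical processes",
    ],
    controls=[
        "workload and capacity modelling aligned to known risk cycles",
        "alert tuning and prioritisation to reduce noise",
    ],
)

print(burnout.score)  # 16: sits alongside other high operational risks
```

Nothing about this structure is novel; that is the point. A burnout entry fits the same schema, the same scoring, and the same review cadence as every other row.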

The controls that follow look remarkably familiar too:

  • workload and capacity modelling aligned to known risk cycles
  • alert tuning and prioritisation to reduce noise
  • automation of repetitive, low-value tasks
  • clearer ownership across fragmented systems
  • a shift from peak-based audits to more continuous approaches

The goal is not to medicalise burnout or reduce it to a metric. It is to make it visible, owned, and governable, which is exactly what organisations already do with every other risk that carries this level of prevalence and consequence.

Burnout as a Risk Multiplier

I believe one of the reasons burnout remains under-governed is that it rarely appears as a standalone failure. It amplifies other risks.

As Maria Drakoula highlights, “burnout degrades attention, judgement, and motivation, opening the door not just to human error but, in extreme cases, to malicious behaviour.” In security terms, that translates into well-understood failure modes:

  • susceptibility to social engineering, where fatigue and urgency collide
  • misconfigurations caused by cognitive overload or rushed decisions
  • alert fatigue leading to missed or dismissed threats
  • access control shortcuts taken under pressure
  • breakdowns in process adherence

Taken individually, each of these is familiar. Together, they form a pattern. Burnout doesn’t introduce new risks. It increases the probability of the ones you already have.

Systems, Not Individuals

Treating burnout as an individual issue about personal resilience, coping, and wellbeing moves the problem out of the system and its governance and into the person. That’s a design choice, and it’s the wrong one. The drivers sit firmly within governance: audit models that create unsustainable peaks, tooling that fragments workflows and increases manual effort, operating models that assume constant availability, and compliance approaches that remain reactive rather than continuous.

These are design decisions. And design decisions are governable.

Remote and distributed work has also changed how burnout manifests rather than whether it does. As Maria Drakoula points out, “isolation combined with cognitive strain creates conditions where errors can occur, propagate, and remain undetected for longer.” From a security perspective, this is less a culture issue and more a detection and resilience one. The risk is not that people work remotely. It is that the system has fewer natural points of interruption and correction.

The Wider Design Problem

This connects to a broader argument about how organisations manage risk overall. The same organisations that wouldn’t dream of running a single annual security audit and calling it continuous risk management are running their human capacity the same way: an annual engagement survey, a wellbeing week, and a one-off training cycle.

The logic of continuous monitoring, which applies to information security, data privacy, and AI governance, applies here too. Human performance is a control. It degrades. It needs modelling, not just noting. When information security, privacy obligations, and AI governance are managed as a connected, continuous system rather than separate point-in-time exercises, organisations become genuinely harder to disrupt. Extending that same logic to the human layer of the control environment is not a significant leap. It is the same principle, applied consistently.
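Modelling rather than noting can be very simple in practice. The sketch below applies the same trending logic used for technical control metrics to a human-capacity indicator, here weekly unresolved-alert backlog counts: a rolling average that absorbs short spikes but flags sustained degradation. The window size, threshold, and sample data are illustrative assumptions.

```python
from statistics import mean

def breaches(readings, window=4, threshold=120):
    """Return indices of rolling-average windows exceeding threshold.

    A moving view rather than an annual snapshot: a one-off spike is
    absorbed, a sustained rise is flagged for the risk owner.
    """
    flagged = []
    for i in range(len(readings) - window + 1):
        if mean(readings[i:i + window]) > threshold:
            flagged.append(i)
    return flagged

# Weekly backlog counts: one spike (150 in week 4) passes unflagged;
# the sustained climb from week 6 onwards does not.
weekly_backlog = [90, 95, 100, 150, 98, 102, 130, 140, 145, 150]

print(breaches(weekly_backlog))  # [5, 6]
```

The same pattern generalises to any of the leading indicators above: error rates, rework, or hours beyond contract, each with its own threshold and owner.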

The Question for Leadership

Security leaders already carry accountability for control effectiveness, regulatory compliance, and operational resilience. Burnout intersects with all three. But it rarely appears within the same governance structures, leaving a critical dependency on human performance largely assumed rather than actively managed.

The more useful question is not: how do we reduce burnout? It is: where in our control environment are we relying on sustained human performance that we have no structured way to model or manage?

Answering that question honestly is what continuous risk management actually looks like. Not an annual review. Not a wellbeing initiative. A structured, owned, monitored dependency, like every other one in the system.

Cybersecurity has evolved by learning to design systems that remain resilient in the face of failure. Except in one area: we still design as if the human layer is stable, consistent, and endlessly adaptable. It isn’t.

If burnout carried the same properties as a traditional cyber risk (high prevalence, measurable impact, increasing over time), it would already be modelled, monitored, and owned. The fact that it isn’t doesn’t make it less real.

It just means the decision to leave it unmanaged is, itself, a risk decision. One that most organisations haven’t consciously made.

Expand Your Knowledge

Blog: Cybersecurity Is Battling A Mental Health Crisis – Here’s How To Solve It

Blog: Leadership Strategies for Balancing Security Workload and Compliance Success