Why Does A.5.26 Matter So Much for MSP Incident Response?
A.5.26 matters for MSP incident response because it replaces ad‑hoc reactions with consistent, auditable handling of security incidents across every client, so you reduce downtime, disputes and uncertainty when something serious happens. When your response is governed by clear procedures instead of whoever happens to be on call, you protect customer relationships, strengthen your position with auditors and insurers, and give engineers a calmer framework to operate in when pressure spikes.
Information here is general guidance only; it does not replace legal, regulatory or specialist advice for your organisation.
Most MSPs already know the feeling of a messy incident: conflicting advice in a chat thread, unclear authority to isolate systems, and days spent rebuilding trust with an angry customer. Control A.5.26 asks whether you still rely on that kind of heroism, or whether you can show that incidents are handled according to documented procedures which reflect your services, risks and clients.
Done well, this is not a “paper exercise.” It is a way to capture hard‑won experience so engineers stop solving the same problem from scratch every time. It strengthens commercial positions in bids and renewals, because you can show exactly how you respond to ransomware, business email compromise or a compromised remote management account, instead of giving vague assurances.
Clear playbooks turn midnight incident chaos into calm, predictable action for your engineers and your clients.
In our 2025 ISMS.online State of Information Security report, most organisations say they have already been hit by at least one third‑party or vendor‑related incident in the past year.
Over time, formal response also protects margins. Multi‑tenant incidents can easily hit several clients at once; without standardised playbooks, you risk inconsistent actions, extended outages and confusion about liability. Public guidance on supply‑chain and service‑provider attacks, such as CISA insights on supply‑chain attacks, underlines how quickly a single weakness can cascade across many customers in exactly this way. A.5.26 gives you the lever to redesign that experience so that your teams, your customers and your auditors are all working from the same script.
The hidden cost of ad‑hoc incident response for MSPs
Ad‑hoc incident handling creates hidden technical, commercial and compliance debt that undermines MSPs later, even when individual engineers appear to “save the day” in the moment. It may feel faster to improvise under pressure, but decisions are rarely captured in a repeatable form, evidence is scattered across tools, and nobody can explain clearly why one client received a different response from another facing the same threat.
Engineers often make sound judgements under pressure, yet those decisions are rarely captured in a way that others can repeat, challenge or improve. Outcomes become highly dependent on a few senior people, creating fragility if they are unavailable or leave the business, and making onboarding for new staff slow and risky.
A structured response capability forces you to define what counts as an information security incident, how severity is assessed, which actions are allowed at each level, and how communication flows. The investment pays back quickly. Mean time to contain typically falls, communication becomes more predictable, and management no longer wonder whether the MSP is quietly improvising every time an alert fires. Incident‑handling best‑practice guides, including resources such as the SANS incident handler’s handbook, echo this by stressing that rehearsed, documented procedures shorten containment times and support clearer communication.
From a compliance point of view, inconsistent handling is also risky. When incidents become part of an ISO 27001 audit sample, you need more than ticket numbers; you need to show that steps followed match documented procedures and that lessons learned were fed back into the information security management system. Independent Annex A.5.26 summaries, such as this ISO 27001 control overview, emphasise exactly this combination of documented procedures, records and learning.
What Does ISO 27001:2022 Annex A.5.26 Actually Require?
Annex A.5.26 requires you to respond to security incidents using documented procedures that fit your risks, services and relationships, so you can show that incidents are handled consistently and improved over time. In plain terms, the standard asks whether you know what to do when an incident hits, who does it, how quickly they act, who you tell, and how you prove afterwards that you followed an agreed process. ISO’s own description of the 27001:2022 control set, available via the official standard overview, frames A.5.26 in terms of documented, risk‑appropriate incident response that is embedded in the management system.
Because ISO text is copyright‑protected, you will not see the exact words reproduced in public sources. However, common guidance converges on several expectations. You should have a defined process for responding to information security incidents, with clear responsibilities and authorities, and you should retain records that show the process is followed. Certification bodies and national standards organisations, including BSI’s ISO 27001 guidance, routinely summarise this expectation as a defined incident process with named responsibilities and retained records as evidence of operation. For an MSP, that process must explicitly cover services delivered to clients, not only your internal corporate systems.
In our 2025 ISMS.online State of Information Security survey, almost all respondents list achieving or maintaining certifications such as ISO 27001 or SOC 2 as a top organisational priority.
A.5.26 sits among the organisational controls in Annex A, alongside areas such as incident reporting, learning from incidents and supplier relationships. The emphasis is on response: what happens from the moment an event is classed as an incident, through containment and recovery, to lessons learned. It interacts with other controls on logging, communication and continual improvement, and with any regulatory or contractual obligations that govern how you handle breaches. Published mappings of the 2022 Annex A structure, such as this ISO 27001:2022 control summary, show it located alongside controls for reporting, learning from incidents and managing suppliers.
For MSPs, there is an extra dimension. Your incident procedures need to recognise shared responsibilities with customers and upstream providers. They must show how you coordinate with client incident owners, data protection officers, cloud vendors and, where relevant, regulators or law enforcement. Simply pointing to a generic vendor runbook is not enough; the standard cares about how your organisation handles your risks and services.
Turning control language into a practical checklist
Turning A.5.26 into practice starts with a simple self‑assessment checklist that converts abstract control language into concrete questions about your current capability. If you can answer these questions confidently, you are on the right track, and your incident response is likely to survive both certification audits and serious client scrutiny.
- Documented procedures – cover incidents in your own and client environments.
- Clear roles – state who declares, leads, approves actions and speaks externally.
- Time expectations – define technical response and legal or contractual deadlines.
- Traceable records – show incidents logged, handled, closed and reviewed.
Taken together, these questions give you a straightforward way to test whether your current approach would stand up to an audit or a serious client review. If the answer is “no” or “not confidently” to any of them, A.5.26 gives you a structured reason to fix the gap. The simplest place to start is often a policy‑level procedure that sets out the lifecycle at a high level, supported by more detailed playbooks for common scenarios.
Evidence A.5.26 expects you to have
A control is only as strong as the evidence that shows it actually operates, and auditors will usually ask for enough material to reconstruct what really happened. For A.5.26, that typically means being able to show how an incident moved from declaration through response to closure, who was involved, how decisions were made and what you changed afterwards.
- Procedure – incident management lifecycle, roles and communication rules.
- Playbooks – scenario‑specific runbooks referenced from the main procedure.
- Incident records – classification, actions, approvals, communications and closure.
- Reviews – post‑incident analysis with corrective actions linked to risks and controls.
If you are an MSP, those records should show incidents involving client systems as well as internal ones. They should demonstrate that you respected contractual terms and shared responsibilities, and that you involved the right client roles at the right time. The easiest way to assemble this evidence is to treat your incident tools and your information security management system as a single ecosystem rather than as separate silos.
How Does A.5.26 Fit Into Your Wider Incident Management Picture?
A.5.26 fits into your wider incident management picture by governing how you respond, not how you detect incidents in the first place, and by linking operational handling to both customer expectations and your information security management system. To make sense of it, you need to see how it sits alongside detection, reporting, learning and supplier management, and how it connects with your clients’ processes as well as your own.
Most incident frameworks break the journey into phases: preparation; detection and analysis; containment; eradication; recovery; and post‑incident activity. That phased view mirrors public guidance such as NIST SP 800‑61 on computer security incident handling, which breaks incident management into preparation, detection and analysis, containment, eradication and recovery, followed by post‑incident activity. A.5.26 sits in the heart of that lifecycle, from the moment an event is judged to be an information security incident through to recovery and improvement. It assumes that you already have ways to detect and report events, and that you have mechanisms to learn from them; its focus is whether the response itself is structured.
In practice, MSPs often run several overlapping processes: ITIL incident management for service outages, security monitoring and alert response in a security operations centre, and client‑specific escalation paths for major incidents. A.5.26 does not replace these; it asks whether, taken together, they amount to a coherent, documented way of responding to security incidents, and whether responsibilities and hand‑offs are clear.
Your clients’ information security management systems add another layer. Many of them will have their own incident procedures, especially if they are certified or heavily regulated. Your playbooks need to dovetail with those, so that when a serious incident occurs, everyone knows whether the MSP leads, the client leads, or you coordinate jointly, and how decisions are escalated.
Scoping “security incident” in an MSP context
Scoping “security incident” clearly helps your teams pick the right process and set of playbooks when something breaks. Without a shared definition, people rely on instinct, which leads to inconsistent handling, missed legal obligations and confusing conversations with clients after the event.
For MSPs, confusion often arises at the boundary between a general service incident and a security incident, especially when symptoms look similar but root causes and obligations are very different. That boundary frequently cuts across multiple services and clients, and if you do not define it carefully, engineers will fall back on individual judgement rather than a shared understanding when choosing which process to follow.
In our 2025 ISMS.online State of Information Security survey, around 41% of respondents named managing third‑party risk and tracking supplier compliance as a top information‑security challenge.
It pays to agree definitions with customers up front. A service incident might be any unplanned disruption to an IT service, regardless of cause. An information security incident is one or more events with a significant likelihood of affecting confidentiality, integrity or availability of information, or breaching policy or law. Definitions used by European cyber‑security agencies, for example ENISA’s incident‑management guidance, similarly focus on events that are likely to affect confidentiality, integrity or availability or to breach law or policy. A major incident is a severe subset of either category, based on impact and urgency.
Once you have these definitions, you can map which process and playbook applies. An outage caused by a misconfiguration might follow a service incident process with some security checks, while a suspected credential theft that has not yet caused visible disruption would still trigger a security incident playbook. Clear scoping prevents engineers guessing which procedure to follow when something breaks.
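To make that mapping concrete, here is a minimal sketch of how agreed definitions could be encoded as a shared triage helper. The event fields, scoring scale and major‑incident threshold are illustrative assumptions to agree with each client, not terms taken from the standard.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A reported event, before any incident classification."""
    affects_confidentiality: bool
    affects_integrity: bool
    affects_availability: bool
    suspected_policy_or_legal_breach: bool
    service_disrupted: bool
    impact_score: int   # 1 (negligible) .. 5 (severe), per your severity scheme
    urgency_score: int  # 1 .. 5

MAJOR_THRESHOLD = 8  # hypothetical: impact + urgency at which either category becomes "major"

def classify(event: Event) -> str:
    """Map an event to the process and playbook family that should handle it."""
    security_relevant = (
        event.affects_confidentiality
        or event.affects_integrity
        or event.affects_availability
        or event.suspected_policy_or_legal_breach
    )
    if security_relevant:
        category = "security incident"   # security incident playbook applies
    elif event.service_disrupted:
        category = "service incident"    # ITIL-style process, with security checks
    else:
        return "event (monitor only)"
    if event.impact_score + event.urgency_score >= MAJOR_THRESHOLD:
        return f"major {category}"
    return category
```

Even if you never automate triage, writing the logic down this way exposes ambiguities in your definitions before a live incident does.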
Connecting operational handling to strategic risk
A.5.26 is also a bridge between day‑to‑day operations and strategic risk management, because it forces you to treat incidents as data about your controls and services rather than as isolated firefights. Significant incidents should not just be resolved and forgotten; they should inform your risk register, your control priorities and your service design in a disciplined way.
This means designing your playbooks and post‑incident reviews so that they capture more than technical detail. You should record which risks materialised, whether likelihood or impact assessments were accurate, which controls failed or were missing, and where contractual or communication gaps created avoidable harm. Feeding this back into your information security management system is part of showing that you use incidents to improve.
For MSPs, this feedback loop can also support product decisions. If the same pattern of security weakness appears across multiple clients, you may decide to enhance your standard service packages or adjust your baseline controls. When you do so, you can refer back to incidents as the evidence base that justified the change. To make this real for engineers, you then need playbooks that reflect those lessons and fit how MSPs actually work.
How Do You Turn A.5.26 Into Practical MSP Incident Playbooks?
You turn A.5.26 into something your engineers can use by building playbooks that match how your organisation and your clients work, and by ensuring those playbooks are the first thing people reach for when an incident hits. A good playbook is short enough to use under pressure, specific enough to remove guesswork, and structured enough to generate the evidence A.5.26 expects without asking engineers to become amateur auditors.
At a minimum, each playbook should state its scope and triggers, define severity levels, identify the roles involved, and lay out step‑by‑step actions for each phase of the incident lifecycle. It should show when to escalate, when to involve the client, when to consider legal or regulatory notification, and how to capture evidence such as logs, screenshots and approvals.
For MSPs, playbooks must also recognise multi‑tenant realities. A single compromised remote monitoring account can affect dozens of customers; a cloud provider outage may trigger both security and service incidents. Your playbooks should describe how to handle simultaneous impact on several clients without losing track of responsibilities or over‑committing scarce resources.
Treat playbooks as living documents rather than static PDFs. Store them where engineers will use them – referenced from ticket templates, linked from monitoring alerts, and surfaced in collaboration tools – but maintain a single authoritative version in your information security management system, where updates are reviewed, approved and traced.
Designing a reusable playbook template
A reusable template keeps your playbooks consistent, reduces writing effort and makes auditing simpler because every scenario follows the same basic structure. Once engineers are familiar with that structure, they can find what they need quickly during an incident instead of hunting through unstructured documents that vary from client to client.
- Metadata – playbook name, identifier, version, owner role, last review date.
- Scope and triggers – services covered and events that activate the playbook.
- Definitions and severity – how you classify incidents of this type, including thresholds.
- Roles and responsibilities – who leads, investigates, communicates and approves actions.
- Procedure – ordered steps for investigation, containment, recovery and closure.
- Communication plan – who is informed, by whom, over which channels and how often.
- Evidence and records – what to record, where, and who is accountable.
For each section, note how it links back to your high‑level incident procedure and to A.5.26. For example, the communication plan supports the requirement to notify interested parties, while the evidence section supports the requirement to retain records of response.
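As an illustration of how the template can live as structured data rather than free‑form prose, the sketch below models the sections above as a single record. Field names and the example values are hypothetical; adapt them to your own document standard.

```python
from dataclasses import dataclass, field

@dataclass
class Playbook:
    """One record per scenario, mirroring the template sections above."""
    playbook_id: str                        # metadata
    name: str
    version: str
    owner_role: str
    last_review: str                        # ISO date, e.g. "2025-06-30"
    scope_and_triggers: list[str]           # services covered and activating events
    severity_definitions: dict[str, str]    # level -> threshold description
    raci: dict[str, str]                    # activity -> role assignments
    procedure: list[str]                    # ordered response steps
    communication_plan: dict[str, str]      # audience -> channel and cadence
    evidence_requirements: list[str] = field(default_factory=list)

ransomware_playbook = Playbook(
    playbook_id="IR-RW-01",
    name="Ransomware in a managed client environment",
    version="1.2",
    owner_role="Incident Manager",
    last_review="2025-06-30",
    scope_and_triggers=["EDR mass-encryption alert", "client report of ransom note"],
    severity_definitions={"critical": "shared platform or two or more tenants affected"},
    raci={"isolate affected hosts": "R: SOC analyst, A: incident manager"},
    procedure=["validate and scope", "contain and preserve evidence", "eradicate",
               "recover and verify", "review and improve"],
    communication_plan={"client incident owner": "phone then email, hourly at critical"},
    evidence_requirements=["isolation approvals", "timeline of actions", "client comms log"],
)
```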
A playbook that lives only on a shared drive will not help much during a real incident, especially when teams are tired and spread across time‑zones. You need to weave it into the tools where people work, so that following it feels like part of doing the job rather than an extra task, and so that evidence collection happens automatically as people work through the steps.
For example, you can configure your ticketing system so that when a ticket is flagged as a particular incident type, the relevant playbook link and key fields appear automatically. You can align automation rules so that required data, such as impact assessments, approvals or containment actions, is captured as part of the workflow instead of as after‑the‑fact notes.
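Here is a minimal sketch of that kind of automation hook, assuming a generic ticketing tool that calls a function whenever a ticket's incident‑type field changes. The playbook URLs, field keys and incident‑type labels are invented for illustration and do not reference any specific vendor's API.

```python
# Hypothetical index of playbooks maintained in your ISMS; URLs are placeholders.
PLAYBOOK_INDEX = {
    "ransomware": "https://isms.example.com/playbooks/IR-RW-01",
    "compromised_rmm": "https://isms.example.com/playbooks/IR-RM-01",
    "bec": "https://isms.example.com/playbooks/IR-BEC-01",
}

REQUIRED_FIELDS = {
    # evidence fields the workflow must capture before the ticket can be closed
    "ransomware": ["impact_assessment", "isolation_approval", "client_notified_at"],
    "compromised_rmm": ["credential_rotation_done", "tenant_blast_radius"],
    "bec": ["mailbox_rules_reviewed", "client_notified_at"],
}

def on_incident_type_set(ticket: dict) -> dict:
    """Enrich a ticket so the playbook link and evidence fields appear automatically."""
    incident_type = ticket.get("incident_type")
    if incident_type not in PLAYBOOK_INDEX:
        return ticket  # not a recognised security incident type; leave untouched
    ticket["playbook_url"] = PLAYBOOK_INDEX[incident_type]
    # Pre-create empty mandatory fields so evidence capture is part of the workflow,
    # not an after-the-fact chore.
    for field_name in REQUIRED_FIELDS[incident_type]:
        ticket.setdefault(field_name, None)
    ticket["close_blocked_until_fields_complete"] = True
    return ticket
```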
Where you use security orchestration and automation, you can mirror playbook steps in automated workflows while still requiring human confirmation for high‑risk actions. The key is to ensure that, whether actions are manual or automated, they are traceable back to the documented procedure, and that your information security management system holds the context, audit trail and review history. Platforms such as ISMS.online can help you tie these records back to Annex A.5.26 so the evidence is always ready when clients or auditors ask, and, as outlined in ISMS.online’s Annex A.5.26 implementation guidance, that linkage makes it easier to present audit‑ready packs directly from day‑to‑day records.
How Can You Standardise Playbooks Yet Keep Them Client-Specific at Scale?
You standardise MSP incident playbooks and still keep them client‑specific by combining a common core with lightweight overlays that capture each customer’s context. Standardisation is essential if you support dozens or hundreds of clients; nobody can maintain a completely bespoke playbook library, and engineers will not remember how each variant works when the pressure is on.
In the core, you define the incident type, lifecycle, generic technical steps and internal roles. This is largely the same for every client: your internal incident manager, your security analysts, your service desk, your infrastructure teams. You standardise definitions, severity schemes, escalation patterns and evidence requirements so that every engineer knows what “high” means and which steps are non‑negotiable.
On top of that, you add per‑client parameters. These typically include named contact roles, out‑of‑hours coverage, service‑level commitments, regulatory obligations, preferred communication channels and any client‑approved deviations from your default approach. The overlay can also capture client‑owned steps, such as engaging their legal team or notifying their own customers when certain thresholds are met.
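One simple way to implement this is a precedence rule: overlay values win, and anything the client has not customised falls back to the shared core. A minimal sketch, with hypothetical keys and values:

```python
# Shared core playbook parameters for one scenario.
CORE_RANSOMWARE = {
    "playbook_id": "IR-RW-01",
    "severity_scheme": "standard-msp-v2",
    "escalation_path": ["soc_analyst", "incident_manager", "exec_sponsor"],
    "notify_within_hours": 24,        # default commitment
    "comms_channel": "email",
}

# Thin per-client overlay: only the differences are recorded.
ACME_OVERLAY = {
    "client": "Acme Ltd",
    "incident_owner": "Head of IT, Acme",
    "notify_within_hours": 4,         # tighter contractual SLA
    "comms_channel": "phone_bridge",  # client-preferred channel
    "extra_steps": ["notify Acme legal team at severity high or above"],
}

def resolve_playbook(core: dict, overlay: dict) -> dict:
    """Overlay values win; anything not customised falls back to the core."""
    return {**core, **{k: v for k, v in overlay.items() if v is not None}}

acme_ransomware = resolve_playbook(CORE_RANSOMWARE, ACME_OVERLAY)
# acme_ransomware["notify_within_hours"] == 4; escalation_path comes from the core
```

Because the overlay only stores differences, reviewing a client's incident arrangements means reading a handful of lines rather than a bespoke twenty‑page runbook.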
Handled well, this approach keeps your documentation manageable while still satisfying auditors that your response takes account of context. It also invites clients into the design, giving them a chance to challenge assumptions before a live incident forces the issue and everyone is arguing about roles while the clock is ticking.
Standardised playbooks with light client overlays are easier to maintain and easier to trust.
Comparing response models
A simple comparison between ad‑hoc and standardised response models makes the trade‑offs clear and helps leadership understand why you are investing time in playbook design and maintenance. It also gives you accessible language to use in proposals and renewals when you explain how your approach reduces risk for customers.
| Scenario | How incidents are handled today | What changes with standardised playbooks and client overlays |
|---|---|---|
| Ad‑hoc, engineer‑led | Individuals improvise based on experience and tools | Same steps captured once, shared by everyone and improved after each use |
| Generic policy, no client nuance | Policy exists but ignores real services and clients | Playbooks reference live services, roles, SLAs and client responsibilities |
A side‑by‑side view like this highlights how structure reduces risk without removing professional judgement. It also gives you a plain‑language way to explain to customers why you want to agree playbooks before serious incidents happen.
Governing variants across a client base
Once you start maintaining overlays, governance becomes important so that variants remain understandable and consistent as your client base and services grow. A few pragmatic practices help you avoid drift and make sure your documentation still reflects reality a year from now.
- Central templates – keep master templates for each incident type in one repository.
- Change triggers – define events that force review, such as new regulation or major incidents.
- Regular reviews – schedule overlay checks with key clients, especially in regulated sectors.
- Simple metrics – track overlay use, deviations from playbooks and client feedback.
These controls do not need elaborate tooling at first. Even modest discipline can prevent your documentation from drifting away from reality as your client list grows and services evolve, and they give you clear evidence during audits that you manage incident response in a controlled way.
What Does an End‑to‑End MSP Incident Response Lifecycle Look Like?
An effective MSP incident response lifecycle gives everyone a shared map of what happens between first detection and lessons learned, across both your organisation and your clients. It clarifies which steps you lead, which your clients lead, and where you work together, while aligning with A.5.26’s demand for documented, timely response and with the expectations of auditors, regulators and insurers.
A simple, MSP‑adapted lifecycle might include: prepare; detect; triage; contain; eradicate; recover; and learn. Preparation covers policies, playbooks, training, tooling and agreements. Detection relies on monitoring, alerting and user reporting. Triage assesses severity, scope and business impact, and determines whether an event is an information security incident. Containment limits damage; eradication removes root causes; recovery restores normal operations; and learning feeds improvements back into your information security management system. These phases closely reflect widely recognised incident‑handling models such as the lifecycle described in NIST SP 800‑61, adapted here for an MSP setting.
- Prepare – define policies, playbooks, training, tooling and client agreements.
- Detect – monitor systems, review alerts and capture user reports.
- Triage – assess scope, severity and business impact across clients and services.
- Contain – limit damage while preserving evidence and core operations.
- Eradicate – remove root causes such as malware, misconfigurations or compromised accounts.
- Recover – restore services, validate integrity and confirm customer acceptance.
- Learn – conduct reviews, update risks, adjust controls and refresh playbooks.
Each phase should have clear entry and exit criteria, roles and communication expectations. For example, detection might end when a potential incident has been validated and logged with an initial severity, while recovery ends when systems are stable again and stakeholders have been informed.
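Where your tooling allows it, those entry and exit criteria can be enforced rather than merely documented. The sketch below models the lifecycle as a small state machine; the transition rules and criteria wording are illustrative assumptions.

```python
from enum import Enum

class Phase(Enum):
    PREPARE = "prepare"    # ongoing groundwork, not a runtime transition source
    DETECT = "detect"
    TRIAGE = "triage"
    CONTAIN = "contain"
    ERADICATE = "eradicate"
    RECOVER = "recover"
    LEARN = "learn"

# Allowed forward transitions; triage may loop back to detect when a signal
# turns out to be a false positive. LEARN is terminal for a given incident.
TRANSITIONS = {
    Phase.DETECT: {Phase.TRIAGE},
    Phase.TRIAGE: {Phase.CONTAIN, Phase.DETECT},
    Phase.CONTAIN: {Phase.ERADICATE},
    Phase.ERADICATE: {Phase.RECOVER},
    Phase.RECOVER: {Phase.LEARN},
}

# Exit criteria phrased as questions an incident manager answers before moving on.
EXIT_CRITERIA = {
    Phase.DETECT: ["Potential incident validated and logged with initial severity?"],
    Phase.TRIAGE: ["Severity, scope and playbook selected?", "Client owner informed?"],
    Phase.CONTAIN: ["Spread stopped?", "Evidence preserved?"],
    Phase.ERADICATE: ["Root causes removed?", "Exposed credentials rotated?"],
    Phase.RECOVER: ["Systems stable?", "Stakeholders informed?", "Client acceptance recorded?"],
}

def advance(current: Phase, target: Phase, criteria_met: bool) -> Phase:
    """Move to the next phase only if the transition is allowed and criteria are met."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current.value} to {target.value}")
    if not criteria_met:
        raise ValueError(f"Exit criteria for {current.value} not yet satisfied")
    return target
```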
For MSPs, the lifecycle must also cope with multi‑client and multi‑supplier incidents. You may be coordinating with client teams, cloud providers, software vendors and sometimes law enforcement at different phases. Documenting who leads at each stage avoids situations where everyone assumes someone else is in charge.
Clarifying ownership and decision points
Clarifying ownership and decision points makes your lifecycle usable in practice and defensible in audits, because it shows how decisions are made rather than just listing process steps. It starts with being explicit about who is responsible, who is accountable, who is consulted and who is informed at each phase for both you and your clients.
For instance, your security operations team may be responsible for detection and initial containment across all clients, while each client’s incident owner is accountable for business‑risk decisions and regulatory notifications. Cloud providers or other vendors may be consulted or informed at specific points, particularly where their services are central to the incident and their logs or actions are needed to move forward.
Critical decision points often include whether to isolate systems, invoke disaster recovery plans, notify regulators, inform affected individuals, engage external forensics or suspend certain services. These decisions should have pre‑agreed authority levels and escalation paths. For example, only the client incident owner might be allowed to approve regulator notification, while you recommend and document the decision in the incident record, and your own leadership team approves actions that impact multiple clients.
Documenting decision points in playbooks and rehearsing them in exercises builds muscle memory. It reduces the chances of over‑reacting, such as shutting down systems unnecessarily, or under‑reacting, such as delaying notifications beyond legal deadlines, and it provides a clear narrative when clients or auditors later ask why you acted in a particular way.
Designing entry, classification and closure
Designing entry, classification and closure well stops your lifecycle from becoming vague and ensures that incidents are handled consistently from first report to final review. Entry into your lifecycle should be consistent. A common pattern is to treat everything as an event until it crosses defined thresholds of likelihood and impact, at which point it becomes an information security incident, or a major incident if particularly severe.
Your classification model can be simple, but it must be understood and used consistently by service desk, security and operations teams. Clear categories help people pick the right playbook quickly and make reporting to management and clients more meaningful, because trends in “high” or “major” incidents become visible rather than hiding in free‑text ticket notes.
Closure is equally important. You should define what needs to be true before an incident is considered closed: systems stable, monitoring clean, stakeholders informed, documentation complete and a post‑incident review planned or carried out. Closing too early can hide unresolved issues; closing too late can clutter your records and make it look as though you do not really know which incidents are still active. A.5.26 cares that there is a discernible process, not just that tickets are marked “done.”
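Expressed as explicit conditions on the incident record, closure criteria become easy to enforce and easy to audit. A minimal sketch, assuming hypothetical field names:

```python
# An incident may only be marked closed when every condition described above
# is recorded as true on the incident record.
CLOSURE_CONDITIONS = [
    "systems_stable",
    "monitoring_clean",
    "stakeholders_informed",
    "documentation_complete",
    "post_incident_review_scheduled",
]

def can_close(incident: dict) -> tuple[bool, list[str]]:
    """Return whether the incident may be closed, plus any outstanding conditions."""
    outstanding = [c for c in CLOSURE_CONDITIONS if not incident.get(c, False)]
    return (len(outstanding) == 0, outstanding)

ok, missing = can_close({"systems_stable": True, "monitoring_clean": True})
# ok is False; missing lists the three conditions that still need evidence
```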
Which Roles, RACI and Communication Protocols Stand Up in Audits?
Roles, RACI and communication protocols stand up in audits when they are clear on paper, aligned across you and your clients, and proven in practice through records, training and exercises. Auditors and customers are concerned less with job titles and more with whether responsibilities are understood and whether people are equipped to carry them out under pressure without leaving gaps or duplication.
At a minimum, you should identify roles such as incident manager, security analyst, service owner, client incident owner, data protection officer, communications lead and executive sponsor. Role sets recommended in IT service‑management guidance, for example incident‑management role overviews from ITSM vendors, typically include a similar core group. For each incident type, you then assign responsibilities using a simple RACI model: who is responsible for doing the work, who is accountable for the outcome, who is consulted, and who is informed.
In an MSP context, your RACI must span organisational boundaries. For example, you might be responsible for technical investigation and initial containment, while the client incident owner remains accountable for decisions that affect their business continuity or regulatory posture. Cloud providers or other vendors may be consulted or informed at specific points where their platforms or logs are central to understanding and resolving the incident.
Building a dual‑organisation RACI
A dual‑organisation RACI makes roles and responsibilities explicit on both sides of the MSP–client relationship. When you build it together, you reduce misunderstandings during real incidents and make contract and renewal conversations much more straightforward.
Building a dual‑organisation RACI means mapping activities across both MSP and client roles so that everyone sees themselves in the same picture. A practical approach is to create a RACI table for each major phase of your lifecycle, with rows for activities and columns for the relevant roles on both sides, and then walk through a realistic incident together to test whether it makes sense.
Consider a ransomware attack on a shared service. You might be responsible for detecting the attack, isolating affected systems and collecting forensic evidence. The client incident owner might be accountable for deciding whether to invoke disaster recovery or notify regulators. A data protection officer could be consulted on privacy obligations, and executives on both sides informed regularly about business impact, communication plans and restoration timelines.
When you fill this in, insist on exactly one accountable role per activity. This forces you to have uncomfortable but necessary discussions about decision ownership. It may reveal that some decisions you thought you were making alone actually need explicit client approval, or that clients expect you to lead in areas you assumed they owned, and it gives you a shared basis for updating contracts and playbooks.
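That "exactly one accountable role per activity" rule is simple enough to check automatically whenever the RACI changes. The sketch below uses hypothetical role and activity names spanning both organisations:

```python
# Dual-organisation RACI: activity -> {role: one of "R", "A", "C", "I"}.
RACI = {
    "detect and validate attack": {"msp_soc_analyst": "R", "msp_incident_manager": "A"},
    "isolate affected systems": {"msp_engineer": "R", "msp_incident_manager": "A",
                                 "client_incident_owner": "C"},
    "invoke disaster recovery": {"msp_incident_manager": "C",
                                 "client_incident_owner": "A", "client_dpo": "C"},
    "notify regulator": {"client_incident_owner": "A", "msp_incident_manager": "R",
                         "client_dpo": "C", "exec_sponsor": "I"},
}

def validate_raci(raci: dict) -> list[str]:
    """Flag activities that break the 'exactly one Accountable' rule."""
    problems = []
    for activity, assignments in raci.items():
        accountable = [r for r, code in assignments.items() if code == "A"]
        if len(accountable) != 1:
            problems.append(f"{activity}: {len(accountable)} accountable roles")
    return problems

assert validate_raci(RACI) == []  # every activity has exactly one "A"
```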
Once agreed, the RACI should be reflected in your playbooks, contracts, service descriptions and training. It becomes an anchor that stops responsibilities drifting as staff change or new services are added, and it gives auditors a clear map to compare against your incident records.
Communication that is both effective and auditable
Communication during an incident must work for busy people and leave a usable trail, so that you can show later that interested parties were informed appropriately. Effective communication is not accidental; you can design it by specifying the basics up front and weaving them into your tools and playbooks.
You should decide which channel is primary for operational coordination, such as a shared chat space or conference bridge, and which channel is used for formal updates to executives and clients. You should define how often updates are expected at different severities, and in what format, so that nobody is left guessing whether they have missed something important.
It also helps to spell out how to escalate if critical decisions are waiting on someone who is unavailable, and what must be captured in your records after the incident. Templates for status updates and executive summaries reduce variance and make it easier for people under stress to write clearly, while pre‑agreed message patterns help your teams avoid sharing sensitive details in the wrong channel.
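A severity‑driven communication plan can be held as configuration, so cadence and channels are looked up rather than remembered under stress. The severities, channels and template names below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class CommsRule:
    """One row of a severity-driven communication plan."""
    severity: str
    coordination_channel: str    # where responders work the incident
    formal_channel: str          # where executives and clients get updates
    update_interval_minutes: int
    update_template: str         # named template, maintained with your playbooks

COMMS_PLAN = [
    CommsRule("critical", "war-room bridge + incident chat", "email to exec list",
              30, "exec-status-v3"),
    CommsRule("high", "incident chat", "email to client incident owner",
              60, "client-status-v2"),
    CommsRule("medium", "ticket updates", "daily email summary",
              240, "daily-summary-v1"),
]

def rule_for(severity: str) -> CommsRule:
    """Look up the cadence and channels responders must follow for a severity."""
    for rule in COMMS_PLAN:
        if rule.severity == severity:
            return rule
    raise KeyError(f"No communication rule defined for severity '{severity}'")
```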
At the same time, your information security management system or ticketing tools should capture key communication artefacts, so that you can demonstrate during audits that interested parties were informed appropriately. Training, tabletop exercises and simulations then build confidence in roles and communication approaches. When auditors ask how you know that named roles in your RACI can do what is expected of them, you can point to training records and exercise participation, not just job descriptions.
Book a Demo With ISMS.online Today
ISMS.online helps you turn Annex A.5.26 from static documents into live playbooks, records and improvements managed in one information security management platform, so you can respond consistently across clients and show that response clearly to customers and auditors. For MSPs, that central view is often the difference between an incident process that exists on paper and one that stands up to scrutiny when something serious goes wrong.
A short demonstration lets you see how incident response scenarios are modelled as linked policies, playbooks, risks, assets, service‑level commitments and incident records. You can explore how responsibilities and communication paths are captured, and how incident evidence can be exported as an audit pack in minutes instead of days, using information your teams already generate as they work.
If you already have policies and runbooks elsewhere, you do not need to throw them away. ISMS.online can act as the organising layer that points to existing artefacts, adds structure where gaps exist, and ties everything back to Annex A.5.26 and related controls. That reduces the sense of “starting again” and instead turns the exercise into rationalising what you already have so incidents are handled in a repeatable way.
What you see in an ISMS.online incident response demo
In an ISMS.online incident response demo, you see how structured playbooks and records live in the same place as the rest of your ISMS, so incident response is clearly part of your broader management system rather than an isolated process. The session focuses on the practical view your teams would use every day, not just on configuration screens or abstract control maps that only a few specialists ever see.
You can walk through a small set of realistic scenarios, such as ransomware on a key client, a compromised remote monitoring account or a cloud account takeover. For each one, you see how the platform links incident tickets to playbooks, roles, approvals, communication records and post‑incident reviews, and how those records flow into risk and improvement registers without extra manual effort.
You also see how evidence for A.5.26 emerges naturally as part of handling the incident. Rather than assembling an audit pack from scratch at the end of the year, you can show how the platform produces a clear history of decisions, timings and responsibilities directly from the records you already maintain, giving you greater confidence when customers and auditors ask hard questions about past incidents.
Start with a focused pilot across a few clients
Starting with a focused pilot allows you to prove value for Annex A.5.26 without asking your teams to change everything at once. You can test new playbooks and records on a small, important slice of your client base and build a business case from real results.
You might choose your top three incident types across your top five clients. During the pilot, you model those scenarios in ISMS.online, align playbooks with your existing procedures, and connect them to incident records and reporting so that engineers see familiar work, just with more structure. You then observe how the new approach compares with your previous way of working in terms of speed, clarity and confidence.
Over a period such as ninety days, you can track changes in mean time to contain incidents, the completeness of incident documentation and the ease of answering customer or auditor questions. Analyst research on incident‑response metrics, such as Forrester’s advice on building an incident response metrics programme, highlights indicators like time to contain, documentation completeness and the effort required to answer stakeholder questions as useful KPIs for pilots of this kind.
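Both headline metrics are straightforward to compute from incident records during a pilot. The sketch below assumes records carry ISO‑8601 timestamps and simple evidence flags; all field names are hypothetical:

```python
from datetime import datetime
from statistics import mean

# Evidence items a complete incident record should carry, per the earlier list.
REQUIRED_EVIDENCE = ["classification", "actions", "approvals", "communications", "closure"]

def hours_between(start: str, end: str) -> float:
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

def mean_time_to_contain(incidents: list[dict]) -> float:
    """Average hours from declaration to containment across the pilot sample."""
    return mean(hours_between(i["declared_at"], i["contained_at"]) for i in incidents)

def documentation_completeness(incidents: list[dict]) -> float:
    """Fraction of required evidence items present, averaged across incidents."""
    scores = [sum(1 for e in REQUIRED_EVIDENCE if i.get(e)) / len(REQUIRED_EVIDENCE)
              for i in incidents]
    return mean(scores)

pilot = [
    {"declared_at": "2025-03-01T09:00", "contained_at": "2025-03-01T13:30",
     "classification": True, "actions": True, "approvals": True,
     "communications": True, "closure": True},
    {"declared_at": "2025-03-14T22:15", "contained_at": "2025-03-15T03:45",
     "classification": True, "actions": True, "approvals": False,
     "communications": True, "closure": True},
]
print(mean_time_to_contain(pilot))        # 5.0 hours on this sample
print(documentation_completeness(pilot))  # 0.9
```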
Turning a demo into a business case for Annex A.5.26
Turning a demo into a business case for Annex A.5.26 is easier when you link the platform directly to outcomes your stakeholders care about rather than to features alone. That means framing benefits in terms of risk reduction, audit readiness and client confidence, not just smoother workflows or nicer dashboards for the security team.
About two‑thirds of organisations in our 2025 ISMS.online State of Information Security survey say the speed and volume of regulatory change are making compliance harder to sustain.
You can use pilot results to illustrate how centralised playbooks and records reduce confusion during multi‑client incidents, cut preparation time before audits and give account managers clearer answers when customers ask how you respond to threats. You can also highlight how integrated records make it easier to show continual improvement to auditors, because every corrective action and lesson learned is tied back to incidents and controls in one place.
From there, a recurring governance rhythm inside ISMS.online will keep your incident response capability healthy. Regular reviews of incidents, trends and corrective actions in the platform ensure that A.5.26 remains a living control, aligned with changes in your services, client base and threat landscape, rather than a static set of documents that age quietly in the background.
If you want to move from ad‑hoc response to a structured, evidence‑rich capability that customers and auditors can trust, choosing ISMS.online as your incident response and ISMS platform is a natural next step. It gives you and your colleagues a concrete view of how an integrated information security management system can support the playbooks and processes you need to bring Annex A.5.26 to life across your MSP business, while keeping the focus on outcomes that matter to your clients.
Frequently Asked Questions
What is ISO 27001 A.5.26 really asking an MSP to prove?
ISO 27001 A.5.26 expects you to prove that every genuine information security incident is handled in a controlled, repeatable and well‑evidenced way, not just that you have an incident policy on file. As an MSP, that proof must cover incidents in your own estate and in every managed client environment where you have responsibility or influence.
What kinds of incidents and records matter most for A.5.26?
In practice, auditors and mature customers will focus on higher‑impact examples such as:
- Ransomware affecting one or more tenants
- Compromised RMM, VPN, or privileged identity
- Business email compromise in a major SaaS platform
- Supply‑chain or third‑party compromise that propagates through your services
For each such incident, you should be able to:
- Show when and why the event was classified as an information security incident under your ISMS
- Identify the specific playbook or procedure that was followed
- Demonstrate who made key decisions, under what authority and at what time
- Evidence what you told the client and when, including escalation to regulators or insurers if required
- Link the incident to updates in your risk register, controls, contracts, SLAs and training
Auditors are not looking for perfection; they are looking for a consistent, defensible pattern. If even one serious incident is undocumented or handled ad‑hoc, it raises questions about the whole system.
ISMS.online helps you avoid that gap by keeping policies, scenario playbooks, live incident records and post‑incident reviews together. When a client CISO or auditor asks “Show me how you handled that compromise,” you can walk them through a coherent incident story rather than assembling it from tickets and inboxes at the last minute.
How should an MSP design an ISO‑aligned incident playbook that engineers will actually follow?
A usable incident playbook should feel like a checklist for stressed engineers, not a policy textbook. It needs enough structure to satisfy ISO 27001 A.5.26, but it must still work at 03:00 when someone is triaging a noisy alert.
What are the essential building blocks of a practical MSP incident playbook?
You will usually get the best results if every playbook follows a common, compact pattern.
Clear ownership and purpose
Start with a brief header that anyone can scan:
- Unique ID and name (for example, “IR‑RM‑01: Compromised RMM Account”)
- Owner role, version and last review date
- One‑line purpose describing the scenario
This reassures customers and auditors that the playbook is current and somebody is accountable for it.
Scope, triggers and incident criteria
Engineers need to know when to use this document:
- Platforms, services and client profiles in scope
- Specific triggers: alerts, log patterns, user reports that activate the playbook
- Criteria for escalating from “event” to “information security incident” in your ISMS
That clarity reduces arguments during triage and helps you justify decisions later to regulators or insurers.
Severity and multi‑tenant impact
In an MSP world, a severity model must reflect blast radius across tenants:
- A small set of levels (for example, low, medium, high, critical)
- MSP‑specific examples for each level (single user vs critical shared service)
- Links to contractual and regulatory thresholds tied to severity
A shared model makes it easier for your teams to align actions, notifications and escalation across different client contracts.
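As a worked illustration, a blast‑radius‑aware severity model can be as small as one function. The thresholds below are assumptions to agree with each client, not fixed rules:

```python
def severity(tenants_affected: int, shared_platform_hit: bool,
             regulated_data_involved: bool) -> str:
    """Assign a severity level that reflects cross-tenant blast radius."""
    if shared_platform_hit or tenants_affected >= 5:
        return "critical"
    if regulated_data_involved or tenants_affected >= 2:
        return "high"
    if tenants_affected == 1:
        return "medium"
    return "low"

# A compromised credential on a shared platform is critical even before
# any individual tenant reports impact:
assert severity(tenants_affected=0, shared_platform_hit=True,
                regulated_data_involved=False) == "critical"
```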
Roles, RACI and decision authority
Ambiguity over who can approve disruptive actions is a common failure point. To avoid it, define:
- Core MSP roles (incident manager, SOC analyst, account/service owner)
- Core client roles (incident owner, DPO/compliance lead, communications contact)
- A simple RACI view for each phase (prepare, detect, triage, contain, eradicate, recover, learn)
- Decision gates for major steps such as isolating shared platforms, triggering breach notifications or restoring from backup
Larger customers will often ask to see this in due‑diligence exercises before they sign.
Phased procedure and evidence capture
Break the technical work into phases with short, clear steps:
- Validate and scope
- Contain and preserve evidence
- Fix root causes
- Recover and verify integrity
- Review and improve
Within each phase, include reminders about critical evidence to capture (logs, approvals, copies of key messages). This makes it much easier to meet the “well‑evidenced” expectation in A.5.26.
ISMS.online allows you to build and maintain these playbooks as live documents, link them to incidents, and show how they were used in real cases. That makes it far more likely that engineers will open and follow them, and far easier to demonstrate that they did.
How can MSPs avoid drowning in per‑client playbooks while still honouring client‑specific obligations?
If you multiply each incident type by every client, you end up with more documents than anyone can realistically maintain. At the same time, regulatory, contractual and insurance requirements often vary by client, so a purely one‑size‑fits‑all approach is not enough.
How does a “core playbook plus client overlay” model keep incident handling scalable?
The most sustainable pattern for MSPs is usually:
- A shared core playbook per scenario
- Thin client overlays for local differences
Shared core playbook
The core playbook describes how your organisation responds to a given threat:
- Threat description and lifecycle (for example, ransomware in hybrid environments)
- Default technical actions: isolation, evidence capture, remediation, backup validation, restoration and checks
- Generic roles and escalation paths
- Standard evidence and review expectations
You use these for training, simulations and cross‑team alignment.
Client overlays
An overlay is a lightweight record attached to a specific client:
- Named contacts and their roles (incident owner, DPO, media spokesperson)
- Contracted SLAs, out‑of‑hours expectations and chargeable extras
- Regulated notifications and timelines relevant to that client’s sector and jurisdiction
- Any agreed deviation from your default approach
The overlay focuses on who, when and where, leaving what and how to the shared core playbook.
ISMS.online lets you capture this “core + overlay” structure in one place: one scenario template per threat, with overlay records per customer. That means you can show auditors and customers that you achieve both consistency and customisation, without maintaining a different 20‑page runbook for every tenant.
What does a sensible end‑to‑end incident lifecycle look like for a multi‑tenant MSP?
For A.5.26 to be convincing, you need a lifecycle that works across shared tools and many clients, not just inside one network. You do not need a complicated model; you do need a consistent, well‑understood one.
How can you structure an MSP‑friendly incident lifecycle?
A seven‑phase lifecycle works well for most providers:
Prepare
Put the basics in place before the next major outage:
- Agree roles, RACI, escalation and notification rules with each client
- Publish and maintain A.5.26‑aligned policies and scenario playbooks
- Configure and monitor tooling (EDR, RMM, SIEM, ticketing, messaging) consistently across tenants
- Run exercises with internal teams and priority customers
Detect
Define consistent entry points into your ISMS:
- Monitoring coverage, including who watches what (you vs client vs third parties)
- Thresholds, correlations and suppression rules to reduce noise
- Clear paths from user or third‑party reports into your incident process
The important part is that potential incidents enter a managed lifecycle, not just a generic support queue.
Triage
Make early decisions fast and defensible:
- Confirm whether a signal is an ISMS‑relevant incident
- Assign severity and cross‑tenant impact using your standard model
- Select the appropriate scenario playbook and client overlays
Strong triage is vital in a multi‑tenant context, where one misjudged case can grow into a cross‑client problem.
Contain
Limit harm without creating new damage:
- Isolate affected systems, users or integrations
- Apply short‑term changes (for example, firewall rules, conditional access tweaks) to stop spread
- Agree temporary business workarounds with the client when needed
Records here should show who authorised what and why, which is exactly what auditors examine.
Eradicate
Address the proximate and underlying causes:
- Remove malware, backdoors or unauthorised changes
- Close vulnerabilities and fix misconfigurations
- Rotate credentials, keys and tokens that may have been exposed
This phase should connect clearly into your change, configuration and vulnerability management processes.
Recover
Return services to a state you and the client both trust:
- Restore from tested backups where necessary
- Validate data integrity, application behaviour and monitoring coverage
- Obtain explicit client acceptance before closure
Customers often remember how recovery was handled more than anything else, especially if communication felt shaky.
Learn
Make every incident a lever for improvement:
- Conduct structured reviews with internal and client stakeholders
- Update risks, controls, contracts and SLAs
- Refine playbooks and overlays based on what actually helped
ISMS.online links incidents, risks, controls, training and improvements so that learning is recorded and visible. That evidence of continuous improvement is a strong signal for auditors and enterprise buyers that your ISMS is alive, not static.
Which roles, RACI and communication rules help MSPs satisfy A.5.26 without creating unnecessary bureaucracy?
You do not need a large incident organisation to satisfy A.5.26; you need clarity and traceability. In a managed service relationship, that clarity must span both your team and your client’s team.
How can MSPs define responsibilities and communication in a way teams can actually follow?
A practical model typically covers four areas.
A small, standard role set
Define a concise set of roles, then map people into them per client:
- MSP incident manager
- MSP SOC analyst or on‑call engineer
- MSP account or service owner
- Client incident owner
- Client DPO or compliance lead
- Client communications or PR contact
- Executive sponsor for high‑impact situations
Re‑using the same role labels across clients makes it easier to train teams and maintain documentation.
RACI linked to lifecycle phases
For each phase in your lifecycle, decide who is:
- Responsible: for doing the work
- Accountable: for its outcome
- Consulted: before major steps
- Informed: about progress and closure
For example, you might set:
- Prepare: MSP incident manager (Responsible), client incident owner (Accountable)
- Contain: MSP engineer (Responsible), MSP incident manager (Accountable), client owner (Consulted)
- Recover: MSP and client jointly Responsible, client business lead Accountable
This makes later explanation of decisions much easier, particularly during audits or internal reviews.
Clear channels, cadence and content rules
Document communication expectations in a way people will remember under pressure:
- Which tools to use for coordination (ticket, chat, bridge call)
- Update frequency by severity level
- The minimum information each update must include
If every engineer knows that a “critical multi‑tenant incident” means updates every 30 minutes with a standard format, customers and auditors will quickly notice the difference in professionalism.
Approvals and record‑keeping
Finally, define in writing:
- Which actions need approvals, and from whom
- Where those approvals are captured (ticket system, ISMS record, signed form)
- How long incident records are retained, and who can see them
ISMS.online gives you a single place to tie roles, training, approvals and incident records together, so you can show who was authorised to act, who did act, and how you kept a reliable evidence trail.
How can an MSP use ISMS.online to turn A.5.26 from static documentation into live, provable practice?
If you already have policies and scattered runbooks, the biggest gap is usually demonstration: being able to show that your teams consistently follow the framework you have designed. ISMS.online is built to close that gap by making A.5.26 operational, not just theoretical.
What is a realistic A.5.26 improvement plan inside ISMS.online?
A time‑boxed pilot around a handful of high‑stakes scenarios works well.
Choose high‑stakes scenarios first
Start with the incidents that worry your customers and insurers most, for example:
- Multi‑tenant ransomware
- Compromised RMM or privileged identity
- Payment‑related breach or BEC involving regulated data
These are also the cases large prospects bring up in security questionnaires and due‑diligence calls.
Build core playbooks and client overlays in one environment
Within ISMS.online you can:
- Create one core playbook per scenario, aligning its sections directly with your A.5.26 policy and incident lifecycle
- Add client overlays that capture contacts, SLAs, notification obligations and any deviations
- Link each playbook and overlay to the corresponding Annex A.5.26 entry in your Statement of Applicability and control set
That linkage demonstrates a clear line from ISO language to day‑to‑day practice.
Log live incidents and improvements against A.5.26
As you run real incidents or structured exercises:
- Log each one against the correct scenario and client overlay
- Capture decisions, approvals and client communications within the incident record instead of across multiple tools
- Raise follow‑up work into your risk register, change log, contracts or training plan, and track it to completion
Over time you build a portfolio of incidents that shows how your ISMS behaves under pressure, which is exactly the story auditors and enterprise customers want to see.
Review evidence and expand systematically
After 60–90 days, review:
- How quickly incidents were contained and recovered
- How complete the documentation is for each case
- How clients, auditors or insurers responded to your incident handling
Use those insights to refine playbooks, overlays and training, then extend the A.5.26 pattern to more scenarios and additional frameworks such as NIS 2, DORA or AI governance.
Working this way, you are not just claiming alignment with ISO 27001 A.5.26. You are able to demonstrate, with live records, that your organisation handles incidents consistently, transparently and in a way that satisfies regulators, customers and auditors alike. If you want to be seen as the MSP that keeps its head when things go wrong, moving A.5.26 into ISMS.online is one of the most concrete steps you can take.