Why MSP Incident Readiness Is Broken in 2025
Many MSPs still end up treating incident response as improvisation rather than a documented, repeatable capability they can prove under scrutiny. When your incident history lives in screenshots, chat threads and half‑completed tickets, you cannot show that you plan, control and audit incidents in a consistent way. That gap becomes painfully visible when a client, insurer or auditor asks for a clear timeline, named decisions and evidence that you met contractual and regulatory expectations.
Incidents rarely fail because of technology; they fail because preparation and accountability were never made clear.
In our 2025 ISMS.online State of Information Security survey, only about one in five organisations said they had not experienced any data loss in the previous year.
Operational Chaos in Everyday Incident Handling
Operational chaos appears when your incident workflow grows around people and tools, not around a deliberate, documented design that everyone understands. Everyday tickets can look manageable, but a high‑impact security incident exposes gaps in ownership, priorities and communication that were always there but never tested under real pressure.
MSP incident problems often fall into a familiar pattern:
- Fragmented ownership: monitoring, containment and client updates sit in different teams, with no single accountable owner.
- Ticket chaos: security incidents share queues with routine faults, using improvised categories and inconsistent priorities.
- Contract drift: SLAs and security schedules promise response patterns your day‑to‑day practice no longer reflects.
- Multi‑tenant confusion: shared platforms generate issues that affect several clients, yet you treat them as isolated events.
- Weak learning: lessons from big incidents rarely make it back into playbooks, tooling or contracts.
This pattern makes it difficult to prove what actually happened, who decided what and whether you met your contractual obligations. It also means every major incident feels new, even when the root cause is familiar and could have been handled more smoothly with better preparation and clearer design.
Clients, cyber insurers and regulators now assume that your incident management is defined, rehearsed and evidenced, not improvised on the day. Regulatory guidance on security and personal data increasingly emphasises documented, tested processes and clear records that show how incidents were handled, not just that they were acknowledged, and industry breach and threat reports regularly highlight gaps in preparation and communication that push stakeholders to demand better‑evidenced incident handling from their providers. They expect you to show that you can co‑ordinate technical work, decision‑making and communication across teams and tenants without relying on individual heroics. ISO/IEC 27001:2022 control A.5.24 gives those expectations concrete shape: its text focuses on planning, preparation and clearly assigned responsibilities for information security incidents, giving auditors and assessors a common reference point when they examine MSPs.
In practice, that means someone will eventually ask you to demonstrate that you have a documented incident policy and procedure, that staff know their roles and follow consistent paths through incidents, and that you can produce coherent records of actions, approvals and communications. If you cannot do that today without a scramble, the gap is not just a compliance problem; it can easily become a wider trust issue that undermines renewals, referrals and insurability.
How A.5.24 Exposes the Gap in MSP Readiness
A.5.24 exposes whether your incident capability is genuinely designed and repeatable, or just a loose collection of tickets and good intentions across many clients. The control requires you to define, establish and communicate incident management processes, roles and responsibilities in advance, then show that you use them: documented processes, clear ownership and evidence that those processes are followed in practice, rather than informal habits. For MSPs this is not a paperwork exercise; it tests whether your live operations match what your policy claims, across many customers at once, and whether you can explain your approach clearly to outsiders who do not know your environment.
A simple way to look at your current state is to ask three questions:
- Could you show an auditor where incident roles, processes and responsibilities are documented and approved?
- Could you walk a strategic client through a recent incident using a single, coherent record as your source of truth?
- Could you show that lessons from the last major incident changed playbooks, tools or contracts?
If any answer is “not really”, you have work to do. The advantage is that the same foundations that close the A.5.24 gap also reduce chaos, improve margins and make you easier to insure and to buy from, especially when you can explain your approach in simple, defensible terms.
What ISO 27001:2022 A.5.24 Really Demands
ISO 27001:2022 A.5.24 expects you to run a real incident management framework, not just own a document called “incident response plan”. For an MSP, that framework has to operate across many clients and platforms while staying simple enough for staff to understand and auditors to assess. The control is really asking whether you can describe what you intend to do, how you do it, who does it and how you prove it afterwards.
Almost all respondents in our 2025 ISMS.online State of Information Security survey listed achieving or maintaining security certifications such as ISO 27001 or SOC 2 as a top organisational priority.
For an MSP, it is the backbone of how you prove readiness across many clients, different technologies and a mix of contracts, not a checkbox document you produce once to satisfy an audit.
The Four Practical Layers of A.5.24 for MSPs
You can make A.5.24 clearer for your teams by framing it as four practical layers: governance, process, capability and evidence. Governance defines intention and authority; process defines the lifecycle; capability provides people and tools; evidence proves you actually follow the design. Together they describe what you intend to do, how you do it, who does it and how you prove it later, which is exactly how auditors and strategic clients will assess your incident readiness.
Step 1 – Set clear governance and accountability
Define an incident policy, scope, definitions and named roles with delegated authority so decisions do not stall.
Step 2 – Describe a simple, repeatable process
Agree how events become incidents and how they move through defined lifecycle stages that staff can follow.
Step 3 – Build and train the supporting capability
Give people, tools and information the structure they need to execute the process reliably across tenants.
Step 4 – Capture evidence that incidents are managed
Ensure incidents leave a traceable record of timelines, decisions, actions and lessons that you can show to others.
In governance terms, you need an approved incident policy, a clear definition of what counts as an “event” and what becomes an “incident”, and named roles such as incident manager, technical lead, communications lead and client contact. Those roles must have enough authority to act at speed and be recognised by both your teams and your clients.
Process means a documented lifecycle that people recognise and follow. A common pattern is detection and reporting, assessment and classification, containment and eradication, recovery and verification, and lessons learned. The standard cares less about your exact labels and more about the fact that the process is documented, communicated and applied consistently so nobody is improvising stages on the fly.
Capability is about people and tooling. Analysts and engineers must understand the process and their place in it. Monitoring and ticketing systems must support the lifecycle rather than work against it. Pre‑approved communications, decision criteria and access to logs and evidence sources tie this together in daily operations.
Evidence is the part many MSPs underestimate. You need incident records with timestamps, actions and approvals, records of exercises and training, outputs from post‑incident reviews and management discussions of incident trends and effectiveness. Platforms such as ISMS.online make it easier to keep these artefacts structured and aligned across the whole information security management system so you can produce them quickly when challenged. Our own guidance on Annex A.5.24 focuses on structuring policies, RACIs and incident records in a central ISMS so that this trail is consistently available for internal and external review.
In practice, this four‑layer view gives you a simple checklist: policy and roles in place, process defined, capability enabled, and evidence captured. When you can tick all four reliably, A.5.24 starts to feel like a description of your normal operations rather than an external demand.
How A.5.24 Connects to the Rest of Your ISMS
A.5.24 connects incident planning and preparation to the wider information security management system, so you cannot treat it as a standalone task. Auditors and customers will look for consistency rather than a single polished document: your incident policy, risk assessments, supplier management and continuity planning should all tell the same story about how you handle security events and outages.
Around 41% of organisations in our 2025 ISMS.online State of Information Security survey said that managing third‑party risk and tracking supplier compliance is one of their biggest information‑security challenges.
It links to other incident‑related controls on assessment, response and learning. Logging and monitoring controls support detection and evidence. Business continuity and supplier controls influence how you handle service outages and third‑party failures. Core ISMS clauses on competence, awareness, performance evaluation and improvement determine how you train people, measure results and refine the system over time.
For MSPs, the real shift is to stop asking “do we have an incident policy?” and start asking “could we defend our incident capability, on paper and in practice, to an auditor, a regulator and a strategic client?”. When you view A.5.24 through that lens, it becomes the backbone of how you prove readiness rather than a standalone checkbox, and it sets up the conversation about who does what when an incident involves both you and your customers.
Turning A.5.24 Into a Working Framework Across Clients
A workable A.5.24 framework for an MSP must provide a shared core across tenants while still allowing for client‑specific responsibilities and regulatory obligations. You cannot design a different incident framework from scratch for every tenant and expect it to stay current; instead, you define a core model that applies across your portfolio, vary specific responsibilities and escalation paths per client, and use your contracts to reflect those differences. Designing that “core plus variations” model once stops you reinventing incident management for every contract and reduces the risk of unmanageable drift.
In practice that looks like a standard incident policy and procedure set, plus reusable playbooks and runbooks, all mapped to A.5.24 and related controls. Per‑client provisions, such as notification rules or regulatory obligations, then bolt onto this shared core. An ISMS platform gives you a natural home for this model, tying policy, risk, suppliers, continuity and incidents into one environment so updates and reviews flow consistently across all your clients.
When you have that shared framework in place, the next logical step is to be precise about how responsibilities are split between your team and each customer, which is where clear roles, RACIs and boundaries come in.
Defining MSP vs Client Roles, RACI and Boundaries
Clear roles and boundaries between your MSP and each client are just as important as the technical process when serious incidents occur. Every serious incident raises the same questions about who owns which decisions, who speaks externally and who carries regulatory responsibility, and those questions are much easier to answer if you have decided them in advance. Without agreed responsibilities you risk missed regulatory deadlines, delayed containment and conflicting communications that damage trust. A.5.24 expects you to settle these points before anything goes wrong, not while you are handling a live attack and debating ownership mid‑crisis.
Why You Need Tenant‑Aware Roles and Boundaries
Tenant‑aware roles and boundaries ensure that your team and your client make decisions at the right time, at the right level of authority, with a shared understanding of who does what. Ambiguity is one of the fastest ways to turn a manageable technical issue into a crisis of confidence that affects renewals and referrals. If your team assumes the client will notify regulators, while the client assumes you will tell them when to notify, important deadlines can pass without action. If nobody knows who approves disruptive containment, engineers hesitate, discussions stall and damage grows while everyone waits for direction.
A tenant‑aware RACI (Responsible, Accountable, Consulted, Informed) gives you a simple, repeatable way to assign roles. For each phase of the incident lifecycle you define what the activity is in your context, which side is involved and how responsibility is shared. That model then informs contracts, procedures and playbooks so reality and documentation stay aligned and both sides know what to expect from the other.
Building a Practical MSP–Client RACI
A practical MSP–client RACI starts with a generic model for a “typical” client, tuned by criticality and regulation, that reflects how you work today. You can then adjust it case by case without reinventing it, keeping a structure your teams recognise while giving account managers the flexibility to negotiate client‑specific responsibilities where it matters.
A simple example might look like this:
| Incident phase | Client’s role (summary) | MSP’s role (summary) |
|---|---|---|
| Detection and reporting | Receives and forwards user reports | Monitors systems and turns alerts into tickets |
| Triage and assessment | Provides business impact context | Classifies and prioritises events and incidents |
| Containment | Approves disruptive actions | Proposes and implements technical containment |
| Notification | Owns regulatory and public reporting | Provides technical details and timing information |
| Lessons learned | Sets risk appetite and changes | Documents root cause and proposes improvements |
The key is not the exact wording but the removal of grey areas. No activity should live in a space where each side quietly assumes the other will act. When you write service descriptions, SLAs, operational level agreements and onboarding materials, this RACI should show through clearly so that sales promises, operational reality and client expectations line up.
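To see how the “core plus variations” model can stay manageable in practice, here is a minimal sketch in Python of a shared RACI core with per‑tenant overrides. Every phase, role name and tenant identifier below is an illustrative assumption rather than a prescribed structure:

```python
from copy import deepcopy

# Shared core: for each incident phase, who is Responsible, Accountable,
# Consulted and Informed. Phases and role names are illustrative only.
CORE_RACI = {
    "detection": {
        "responsible": "msp_soc", "accountable": "msp_incident_manager",
        "consulted": "client_it_contact", "informed": "client_service_owner",
    },
    "containment": {
        "responsible": "msp_engineering", "accountable": "msp_incident_manager",
        "consulted": "client_service_owner", "informed": "client_it_contact",
    },
    "notification": {
        "responsible": "client_privacy_lead", "accountable": "client_executive",
        "consulted": "msp_incident_manager", "informed": "msp_account_manager",
    },
}

# Per-client variations bolt onto the shared core instead of replacing it.
TENANT_OVERRIDES = {
    "regulated-healthcare-tenant": {
        # A regulated client keeps tighter control over disruptive containment.
        "containment": {"accountable": "client_security_lead"},
    },
}

def raci_for(tenant: str) -> dict:
    """Effective RACI for one tenant: the core model plus any overrides."""
    effective = deepcopy(CORE_RACI)
    for phase, changes in TENANT_OVERRIDES.get(tenant, {}).items():
        effective[phase].update(changes)
    return effective

# The override applies only to containment; other phases keep the core model.
print(raci_for("regulated-healthcare-tenant")["containment"]["accountable"])
```

The design point is that the core stays in one place, so updating it updates every tenant that has not explicitly deviated from it.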
Handling Regulators, Evidence and Third Parties
Handling regulators, evidence and third parties requires more than generic wording in your contracts; you need specific triggers, decisions and hand‑offs for scenarios where time limits and legal standards apply. Getting these right in your RACI protects both you and your clients when incidents attract external attention.
Most organisations in our 2025 ISMS.online State of Information Security report said they had been impacted by at least one third‑party or vendor‑related security incident in the past year.
Some responsibilities need special care to avoid surprises in a crisis because external parties are involved and deadlines are fixed.
Regulatory clocks matter. If a client has legally defined notification deadlines, your contracts and procedures must state who decides that an incident is reportable, who starts the clock and who actually submits notifications. Public guidance on incident reporting frequently stresses the need for clear notification criteria, defined responsibilities and agreed timelines, especially where statutory deadlines apply. Your incident process should include prompts to trigger those decisions in time, with clear escalation paths when there is disagreement.
Evidence ownership is another sensitive area. You need agreements on how logs, screenshots and other artefacts will be shared, and how you maintain chain‑of‑custody. Treating client data as an internal convenience will not stand up to legal or regulatory scrutiny when investigators ask how you collected and protected it.
Third‑party providers complicate timelines. Many incidents involve cloud platforms, SaaS vendors or carriers. Your RACI should clarify who contacts which provider, what information they pass and how those interactions are recorded in your incident system so you can demonstrate diligence later.
Non‑technical roles such as privacy, legal and HR must also have defined places in the process. Writing them in as “we will involve them if needed” is not enough; they need trigger conditions and expected actions so that their work integrates smoothly with technical response. Once these boundaries are clear, you can anchor them in the policies, procedures, playbooks and runbooks that make up your incident library.
Designing Policies, Procedures, Playbooks and Runbooks
Your incident capability will only scale across clients if you organise it as a small, coherent library of policies, procedures, playbooks and runbooks rather than one bloated “incident response plan” that tries to do everything. Each layer answers a different question and is written for a different audience, from executives who approve the policy to analysts who follow runbooks under time pressure, all linked back to A.5.24 and your MSP–client RACIs. Designing that library deliberately lets you scale your approach and keep it credible under review.
Building a Small, Layered Library Instead of a Monster Plan
A layered library prevents your incident documentation from becoming unreadable and out of date, because each document has a clear job and audience: policies define intent (why), procedures define the lifecycle (what), playbooks define scenarios (how) and runbooks define tool‑level steps (with which tools). Keeping these layers distinct ensures staff know where to look when they are under pressure and have seconds, not minutes, to find guidance, and gives auditors a coherent picture of how you handle incidents.
Step 1 – Write a concise incident management policy
Set scope, intent and high‑level accountability in a short, approved statement that everyone can understand.
Step 2 – Define a generic incident management procedure
Describe lifecycle phases, decision points and escalation rules at a process level, independent of specific tools.
Step 3 – Develop scenario‑specific playbooks
Document triggers, objectives, roles, actions and communications for common scenarios that your clients actually face.
Step 4 – Maintain tool‑specific technical runbooks
Show step‑by‑step actions in specific platforms referenced by playbooks, ready for analysts and engineers to use.
The policy explains why you handle incidents, what is in scope and who is ultimately accountable. The procedure turns that intent into a consistent lifecycle and explains when to move from one phase to another. Playbooks take the generic process and turn it into concrete, scenario‑specific guidance that analysts can follow. Runbooks anchor those scenarios in real tools so engineers are not improvising technical steps on the day.
Choosing the First Playbooks That Matter
Your first few playbooks should cover the incidents that are most likely and most damaging for your customer base, not every theoretical scenario; an overloaded library is harder to maintain and less likely to be used. Writing a handful of high‑value scenarios that match your client base and technology stack, then refining them through real use and structured exercises, makes it easier to train staff and demonstrate tangible coverage to clients and auditors.
Good early candidates for MSP playbooks often include:
- Malware or ransomware on a managed endpoint in a typical client.
- Business email compromise in a standard cloud email platform.
- Privileged account compromise in a directory or cloud console.
- Suspicious activity in a shared remote management platform.
- Multi‑tenant service degradation that might be security‑related.
Each playbook should define how the incident starts, what your immediate objectives are, which roles are involved, what key decisions must be taken and what evidence must be captured. Short, consistent templates make this easier to maintain and easier for analysts to use under pressure, and they also make it easier to onboard new staff into your way of working.
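If it helps to make that template concrete, the sketch below treats the playbook skeleton as structured data so every scenario answers the same questions in the same order. The field names and the example scenario are assumptions for illustration, not a required format:

```python
from dataclasses import dataclass, field

@dataclass
class Playbook:
    """Scenario-level playbook skeleton; every field name is illustrative."""
    name: str
    triggers: list             # what typically starts this scenario
    objectives: list           # immediate goals for the response
    roles: dict                # role -> who fills it on a typical engagement
    key_decisions: list        # decisions that must be taken, by name
    evidence_to_capture: list  # artefacts the incident record must contain
    runbooks: list = field(default_factory=list)  # tool-specific steps live elsewhere

ransomware = Playbook(
    name="Ransomware on a managed endpoint",
    triggers=["EDR detection", "user report of encrypted files"],
    objectives=["confirm scope", "isolate affected hosts", "protect backups"],
    roles={"incident_manager": "msp_duty_manager", "client_contact": "account_manager"},
    key_decisions=["isolate host?", "notify client now or after triage?"],
    evidence_to_capture=["detection alert export", "isolation timestamp",
                         "client notification record"],
    runbooks=["edr-isolate-host", "export-edr-logs"],
)
```

Because every playbook fills in the same fields, a missing decision or evidence step is visible at a glance during review.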
Runbooks then fill in tool‑specific detail such as how to isolate a host in a particular endpoint detection tool or how to export logs from a specific cloud platform. Keeping them separate from policies and procedures avoids constant policy edits when tooling changes or when you adopt new platforms for different client segments.
Keeping Documents Usable and Aligned With Reality
Your documentation only proves value if it reflects how you actually work today and is easy for staff to find when they need it. Documents that live in an isolated folder and never change quickly drift away from practice, which undermines both readiness and audit credibility. Simple change control, clear ownership and integration into daily tools keep the library aligned with reality and demonstrate to auditors that you maintain, not just create, your incident materials.
After major incidents or exercises, review which documents were useful, which were missing and which were inaccurate. Update the relevant policy, procedure, playbook or runbook deliberately, with lightweight version control and approvals. Your aim is to keep written guidance and real practice aligned without burying the team in bureaucracy or slowing down essential changes.
It also helps to embed these documents where work happens. Linking playbooks and runbooks directly from incident tickets or SOC dashboards makes use far more likely than relying on people to search a separate repository. ISMS.online and similar platforms can act as the backbone, connecting your policies and procedures to risks, suppliers, continuity plans and incident records so staff always have current guidance at hand. With the library in place, the next challenge is to make sure your ticketing, monitoring and SOC tools actually reflect that design.
Integrating A.5.24 With Ticketing, Monitoring and SOC Operations
A.5.24 only delivers value when your incident design is visible inside the tools your teams use every day. For most MSPs, the service desk or IT service management (ITSM) platform should be the system of record for incidents, with monitoring and security operations centre (SOC) tooling feeding into it in a controlled, predictable way. Good‑practice incident handling guidance typically recommends a single, central record for each incident, with detection systems and response teams feeding into that record rather than maintaining separate, fragmented logs. When those tools mirror your process, roles and evidence model, readiness becomes something you can show, not just something you claim.
Making the ITSM Tool Your Incident System of Record
Treating your ITSM platform as the incident system of record ensures that every significant event leaves a structured trail you can review and share. If security events are scattered across email threads, chat channels and ad‑hoc documents, you cannot easily prove control or learn from experience; when categories, workflows and fields align with A.5.24 and your incident lifecycle, the ticket itself becomes the narrative you can show to clients, auditors and insurers without a scramble.
Step 1 – Define how alerts become incidents
Agree which alerts should open tickets and how analysts confirm and classify incidents before escalation.
Step 2 – Configure categories, priorities and workflows
Set up dedicated security categories, severities and lifecycle states that mirror your documented process.
Step 3 – Capture structured data for every incident
Add fields and templates for detection source, impact, approvals, communications and lessons learned.
Start by deciding how monitoring alerts enter the ITSM tool. Monitoring systems should either create tickets automatically or feed a triage queue where analysts decide whether to open or update incidents. Once an incident is confirmed, it should be tagged clearly as security‑related and assigned an agreed severity that relates to impact and urgency so response effort is consistent.
Configure categories and sub‑types so security incidents are distinct from routine service issues. Define lifecycle states such as open, triage, investigation, containment, recovery, review and closed, and make sure tickets move through those states in a controlled way. Add fields and templates for key A.5.24 data points like detection source, affected assets, key decisions, approvals and communications so that reviewers can follow the storyline at a glance.
To make this concrete, imagine a ransomware alert on a managed endpoint. The monitoring tool raises an event that opens a “Security Incident” ticket, pre‑populated with source, affected host, detection rule and severity. Analysts then follow a structured form to record triage decisions, containment actions, client notifications and final recovery steps, all within that single record. The resulting ticket reads like a timeline, not a puzzle.
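A rough sketch of that alert‑to‑ticket step might look like the following. The alert payload shape, confidence threshold, severity matrix values and the ITSM endpoint are all hypothetical; a real integration would use your ITSM vendor’s actual API:

```python
import json
import urllib.request
from typing import Optional

# Impact x urgency -> severity, matching the documented classification scheme.
# The labels and matrix values are placeholders for your own scale.
SEVERITY_MATRIX = {
    ("high", "high"): "P1",   ("high", "medium"): "P2",   ("high", "low"): "P3",
    ("medium", "high"): "P2", ("medium", "medium"): "P3", ("medium", "low"): "P4",
    ("low", "high"): "P3",    ("low", "medium"): "P4",    ("low", "low"): "P4",
}

ITSM_URL = "https://itsm.example.internal/api/incidents"  # hypothetical endpoint

def handle_alert(alert: dict) -> Optional[dict]:
    """Screen a monitoring alert and, if confirmed, open a pre-populated ticket.

    Low-confidence signals stay in the triage queue so analysts still decide
    what becomes an incident.
    """
    if alert.get("confidence", 0.0) < 0.8:  # threshold is an assumption
        return None
    ticket = {
        "type": "security_incident",
        "tenant": alert["tenant"],
        "source": alert["detection_rule"],
        "affected_host": alert["host"],
        "severity": SEVERITY_MATRIX[(alert["impact"], alert["urgency"])],
        "state": "triage",
    }
    request = urllib.request.Request(
        ITSM_URL,
        data=json.dumps(ticket).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```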
Connecting Monitoring, SOC and Customer Communications
Monitoring and SOC tools need clear, documented pathways into your incident process so alerts, investigations and client updates stay aligned. Your goal is a controlled flow where technical systems create or update tickets, analysts refine and escalate, and account teams communicate in ways you can trace and explain later.
On the monitoring and SOC side, you want clear, explainable flows from alerts to records. Security information and event management (SIEM) systems, endpoint detection and response (EDR) tools, cloud security platforms and other sources should either open or update tickets according to rules you can describe in your procedure and playbooks. Tuning rules to cut false positives and duplicates is both an efficiency gain and a sign that you have thought carefully about detection.
For serious incidents, you may choose to create a bridge mechanism such as a dedicated war‑room chat channel or scheduled conference calls. Participation, decisions and significant messages from that bridge should be summarised back into the incident record so you are not reconstructing them later from transcripts and memories when someone demands a timeline.
Client communication should follow the same structure. Changes in severity should drive internal state transitions and, where appropriate, external updates through status pages, emails or account manager calls. Using pre‑approved message templates and clear approval paths reduces the risk of inconsistent or misleading statements under pressure and makes it easier to show that you took timely, measured steps.
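As a small illustration of pre‑approved messaging, the sketch below pairs each severity with a template and a named approver role. The template wording, severity labels and approver roles are placeholders you would replace with your own:

```python
# Pre-approved client update templates keyed by severity; wording, severity
# labels and approver roles are illustrative placeholders.
TEMPLATES = {
    "P1": "We are investigating a major incident affecting {service}. "
          "Next update by {next_update}.",
    "P2": "We have identified an issue affecting {service} and are containing it. "
          "Next update by {next_update}.",
}
APPROVERS = {"P1": "incident_manager", "P2": "duty_analyst"}  # sign-off per severity

def draft_update(severity: str, service: str, next_update: str) -> dict:
    """Return a pre-approved draft plus the role that must approve sending it."""
    return {
        "body": TEMPLATES[severity].format(service=service, next_update=next_update),
        "requires_approval_from": APPROVERS[severity],
    }

print(draft_update("P1", "managed email", "15:00 UTC"))
```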
Learning From Each Incident to Improve the System
Your tools and workflows should evolve after each significant incident so that the next one is easier to handle and easier to evidence. Building “review and improve” stages into your process turns A.5.24 into a driver of operational maturity rather than a static compliance task.
A.5.24’s planning and preparation intent is only fulfilled when you use incidents to refine your system rather than treating each one as a one‑off fire to be extinguished. That means building a repeatable pattern for incident reviews and feeding their outputs into change that you can track.
After each major incident, ask whether the tools and process helped or hindered you. Did you have all the information you needed in one place? Were there manual steps that could have been triggered by simple forms or automations? Did the ticket tell a coherent story from detection to closure that someone else could follow?
Turn those reflections into actions: adjust categories, refine workflows, change templates, improve playbooks or update contracts. Capture improvement actions in a way that you can track to closure and reference in management review meetings. Over time, this turns A.5.24 from a static control into a driver of continuous improvement across your MSP operations and it naturally leads into questions about how you design and protect the evidence on which those reviews depend.
Evidence, Logging and Forensic Readiness for Multi‑Tenant MSPs
A.5.24 assumes you can show how incidents were handled, not just assert that they were managed appropriately. For MSPs this is difficult because you must balance evidence quality, tenant separation and privacy obligations across many customers and providers while keeping costs under control. A deliberate, documented evidence model turns that balancing act into a repeatable practice instead of an ad‑hoc scramble. Commentary on the control often highlights the need for records and artefacts that demonstrate planning, decision‑making and follow‑up, not just high‑level statements about having responded.
Designing an Evidence Model That Works Per Tenant
A per‑tenant evidence model helps you avoid both blind spots and accidental data exposure by defining which logs and artefacts you collect, where they are stored, how clocks are synchronised and how records connect to incidents in your ITSM or case management tools. When everyone understands that model, you can respond to investigations with confidence rather than hunting through unmanaged stores.
Step 1 – List key log and event sources per client
Identify which systems generate security‑relevant records and how you access them quickly.
Step 2 – Define storage, time sync and retention rules
Document where data lives, how clocks stay aligned and how long you keep each type of record.
Step 3 – Link evidence to incident records
Describe how logs, artefacts and decisions are associated with tickets for later review and audits.
You do not need an elaborate diagram for every client, but you should be able to explain, for example, that security‑relevant logs from defined systems flow into central repositories or well‑defined stores, that clocks are synchronised so timelines make sense across platforms and that access to those stores is controlled and logged. That explanation should be consistent with your policies and your contracts.
Linking evidence to incidents can be as simple as associating log excerpts, reports or references to specific repositories within your ITSM tickets. The key is that someone can later reconstruct the incident from the record without a scavenger hunt across systems and accounts, and that they can see why certain decisions were made at particular times.
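One lightweight way to express that linkage is to keep the evidence model itself as data and attach references, not raw artefacts, to each ticket. Everything in this sketch, from tenant names to retention periods, is an illustrative assumption:

```python
# A minimal per-tenant evidence model: which sources exist, where they live
# and how long they are kept. Tenant, store names and periods are invented.
EVIDENCE_MODEL = {
    "acme-finance": {
        "clock_source": "ntp.internal.example",  # one time reference for all stores
        "sources": {
            "edr_alerts": {"store": "central-siem", "retention_days": 365},
            "cloud_audit": {"store": "tenant-log-archive", "retention_days": 400},
            "ticket_history": {"store": "itsm", "retention_days": 2555},  # contractual
        },
    },
}

def evidence_refs(tenant: str, incident_id: str, artefacts: list) -> list:
    """Attach evidence *references* to an incident record.

    Raw data stays in its segregated store; the ticket only records where
    each artefact lives, so the incident can be reconstructed later.
    """
    sources = EVIDENCE_MODEL[tenant]["sources"]
    return [
        {
            "incident": incident_id,
            "source": a["source"],
            "store": sources[a["source"]]["store"],
            "locator": a["locator"],  # e.g. a saved query, export ID or path
            "collected_at_utc": a["collected_at_utc"],
        }
        for a in artefacts
    ]
```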
Getting Retention, Access and Segregation Right
Retention, access and segregation of incident data sit at the intersection of security, privacy and cost, and must balance legal duties, client expectations and operational needs. Keeping everything for too long increases risk and may breach privacy rules; deleting too aggressively leaves you unable to answer reasonable questions after an incident, support legal processes or demonstrate due care when questioned.
Document your choices for different types of data such as raw logs, aggregated events and investigative artefacts. Note where you use longer retention for regulatory or contractual reasons, and define triggers that extend retention for specific incidents, such as legal holds or insurance investigations. Explain how and when data is securely deleted, and make sure the practice matches what your policies and client agreements say.
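A deletion decision like that can be reduced to a small, auditable rule, as in this sketch; the retention periods are placeholders and the legal‑hold behaviour is an assumption about how you might choose to model it:

```python
from datetime import date, timedelta
from typing import Optional

# Baseline retention per data type; periods are illustrative placeholders.
BASE_RETENTION_DAYS = {
    "raw_logs": 180,
    "aggregated_events": 365,
    "investigation_artefacts": 730,
}

def deletion_due(data_type: str, created: date, legal_hold: bool) -> Optional[date]:
    """Earliest date a record may be deleted, or None while a hold applies."""
    if legal_hold:
        return None  # holds suspend deletion until they are explicitly released
    return created + timedelta(days=BASE_RETENTION_DAYS[data_type])

# Raw logs created on 1 March 2025 with no hold become deletable in late August.
print(deletion_due("raw_logs", date(2025, 3, 1), legal_hold=False))
```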
In a multi‑tenant environment, segregation is as important as retention. You want confidence that:
- Analysts investigating one client cannot casually browse another client’s data.
- Administrative actions on log and evidence stores are themselves logged and periodically reviewed.
- When you share artefacts with clients, you do so using approved, secure channels with clear access controls.
These requirements should appear in your evidence model and in your operating procedures. If you use an ISMS platform, you can often centralise evidence references while keeping underlying data in segmented technical stores so that you maintain separation without losing visibility of what exists where.
Baking Evidence Collection Into Everyday Work
Evidence collection must be woven into daily incident response activities, not treated as an optional afterthought, if you want reliable records under A.5.24. By turning key evidence steps into check‑boxes within playbooks, runbooks and ticket templates, you make it easier for analysts to do the right thing even under pressure.
The time to think about evidence is not after an incident is closed; it is while runbooks and playbooks are being executed in real time. If capturing evidence feels like extra work, people will skip it when pressure rises, and you will discover the gap only when someone asks hard questions.
To avoid this, design playbooks and runbooks so that key actions include explicit evidence steps. For example, before isolating a host, analysts capture agreed screenshots or export specific logs; after resetting credentials, they record which accounts were changed and when; when notifying a client, they attach the approved statement and note who signed it off and at what time.
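One way to make those evidence steps hard to skip is to route every significant action through a helper that records the action and its artefacts before anything else happens. This is a minimal sketch with invented incident and host identifiers:

```python
from datetime import datetime, timezone

INCIDENT_LOG = []  # stand-in for the work notes on your real ITSM record

def recorded_step(incident_id: str, action: str, evidence: dict) -> None:
    """Record an action and its evidence against the incident as one step.

    Routing disruptive actions through a helper like this makes evidence
    capture part of the step itself rather than an afterthought.
    """
    INCIDENT_LOG.append({
        "incident": incident_id,
        "action": action,
        "evidence": evidence,
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
    })

# Capture the log export reference before isolating the host, then record
# the isolation itself with its approval. Identifiers are invented examples.
recorded_step("INC-1042", "export_host_logs",
              {"export_id": "exp-889", "host": "WS-301"})
recorded_step("INC-1042", "isolate_host",
              {"tool": "edr", "host": "WS-301", "approved_by": "client_service_owner"})
```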
Closed incidents make good audit samples. Periodically select a few and review them as though you were an auditor, regulator or strategic client. Ask whether you can see a full timeline from detection to closure, whether the rationale for key actions is clear and whether the evidence attached would satisfy an external reviewer. Where the answer is no, refine your evidence model, your documentation and your training so that the next similar incident produces better records and analysis, laying the groundwork for more focused exercises.
Training, Exercises and ISMS Integration Across A.5.24–A.5.30
Training and exercises turn your A.5.24 design into a living capability that people can use under stress. For an MSP, that means mapping specific incident roles to tailored training, practising realistic scenarios across tenants and feeding lessons into your wider ISMS so improvement is visible, not just assumed.
Around two‑thirds of organisations in our 2025 ISMS.online State of Information Security survey said that the speed and volume of regulatory change are making security and privacy compliance harder to sustain.
A.5.24 assumes that your incident processes are not only written down but also understood and practised by the people who must use them. Guidance from standards bodies on incident response planning repeatedly stresses training, rehearsal and staff familiarity with procedures as essential complements to written documentation. For MSPs this means developing specific skills in different roles and using exercises to test both your design and your readiness across tenants and time zones. Training and rehearsal close the gap between neat documents and messy real‑world response.
Mapping Roles to the Training They Actually Need
Different roles need different training if they are going to recognise incident triggers, follow procedures and make good decisions. Mapping those roles to concrete learning outcomes makes your training programme focused and measurable, and provides strong evidence that A.5.24 is embedded rather than theoretical.
Generic security awareness training will not prepare your teams for multi‑tenant incident handling where responsibilities cross organisational boundaries. You need to map roles to concrete learning outcomes and then train against your real playbooks, runbooks and tools so people see themselves in the scenarios.
Step 1 – Identify incident‑related roles across teams
List analysts, engineers, account managers, privacy, legal and senior decision‑makers who touch incidents.
Step 2 – Define what each role must recognise and do
Specify triggers, actions, escalation paths and communication duties per role, including when to hand over.
Step 3 – Train against realistic scenarios in live tools
Use short sessions that walk through realistic incidents using your live tools and actual ticket flows.
Frontline analysts and service desk staff must recognise incident triggers, follow playbooks and capture evidence as they go. Engineers need to execute runbooks safely, understand containment options and know when to escalate for approvals. Account managers should understand when and how to communicate with customers, especially during ambiguous early stages. Senior leaders need clarity about the situations that require their involvement and the decisions they may have to make quickly under incomplete information.
Training works best when it uses your actual incident library and tools. Walking through a ransomware scenario in your real ticketing and monitoring environment is far more effective than a generic slide deck, because staff see exactly which screens, fields and workflows they will use when the next incident appears.
Designing an Exercise Programme That Feels Real
An exercise programme should test both your people and your design by simulating realistic, time‑bounded incidents that reflect your client base. By rotating scenarios and client segments, you build confidence that your A.5.24 approach holds up under different conditions and you generate evidence that your MSP takes readiness seriously.
Vary three dimensions to keep exercises meaningful:
- Scenario type: ransomware at a key client, compromise of a shared management platform, suspected data leak or cloud misconfiguration.
- Client segment: regulated versus non‑regulated customers, or high versus medium criticality accounts.
- Frequency: quarterly internal exercises and occasional joint exercises with selected clients where risk is highest.
Joint exercises with high‑value clients can be particularly powerful. They help align expectations, test RACIs and reveal contractual assumptions that do not hold under pressure. They also generate strong evidence for auditors and risk committees that you are taking readiness seriously in shared environments. Well‑run exercises tend to leave behind logs, reports and improvement actions that oversight bodies can review as concrete proof of how you rehearse and refine your response.
After each exercise, treat it like a small incident. Capture what worked, what did not and what needs to change in documents, tooling or agreements. Track those actions and bring a summary into your management review programme so you can show improvement over time, not just activity. This pattern links A.5.24 directly to the wider performance evaluation and improvement clauses in your ISMS.
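Tracking those actions to closure can be as simple as a small, structured record per improvement, as sketched below with invented examples; the point is that owners, due dates and closure status stay queryable for management review:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImprovementAction:
    """One tracked outcome of an exercise or post-incident review."""
    source: str       # e.g. "2025-Q2 ransomware tabletop"
    description: str  # what must change: playbook, contract, tooling...
    owner: str
    due: date
    closed: bool = False

actions = [
    ImprovementAction(
        source="2025-Q2 ransomware tabletop",
        description="Add client approval field to the containment playbook",
        owner="incident_manager",
        due=date(2025, 7, 31),
    ),
]

# A management review summary is then a simple query over the records.
open_actions = [a for a in actions if not a.closed]
print(f"{len(open_actions)} improvement action(s) still open")
```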
Closing the Loop Into Your ISMS
The real value of A.5.24 appears when incident planning, training and exercises feed into risk management, supplier oversight and business continuity, strengthening your entire ISMS. That loop allows you to show that incident readiness is part of how you run the organisation, not an isolated technical concern.
A.5.24 sits alongside other incident‑related controls such as assessment, response, learning and business continuity, and all of them feed into your management system as a whole. Using exercises and training to feed those controls turns your incident work into a driver of system‑wide improvement rather than an isolated process.
For example, patterns from incidents and exercises should inform risk assessments, supplier evaluations and continuity plans. Repeated issues with a particular platform may trigger supplier reviews or technology changes. Gaps in training or decision‑making may lead to changes in competence and awareness programmes or adjustments to your RACIs and escalation rules.
Centralising incident records, exercise reports and improvement actions in a platform such as ISMS.online helps you make these links visible. It also makes it easier to show auditors how your incident planning and preparation influence, and are influenced by, the rest of your information security management system, providing a natural bridge into discussions about how technology can support your A.5.24 ambitions.
Book a Demo With ISMS.online Today
ISMS.online helps you turn ISO 27001 A.5.24 from a static control into a live, MSP‑ready incident capability that you can operate and prove across all your clients. By connecting policies, RACIs, playbooks, incident records, evidence and improvement actions in one environment, you gain a backbone for incident readiness that scales with your portfolio and makes your approach easy to explain to clients, auditors and insurers. The way the platform organises incident planning, responsibilities and records around Annex A.5.24 is specifically designed to help MSPs demonstrate both planning and execution when they are questioned.
What You Gain From Seeing ISMS.online in Action
Seeing ISMS.online in action is the quickest way to judge whether a structured A.5.24 implementation fits the way your MSP works. A short, focused walkthrough can trace a real incident from detection to lessons learned: how incident policies and procedures align with A.5.24, how RACIs and playbooks are captured, how incident records link to evidence and improvement actions, and how management views bring everything together for oversight and reporting. Seeing those elements joined up makes it easier to judge whether this is the right backbone for your MSP’s incident readiness.
Deciding Whether ISMS.online Is the Right Fit
Choosing the right platform for A.5.24 is really about deciding how you want incident readiness to feel for your teams and your clients. If you want incident management that is auditable, scalable across tenants and integrated with your wider ISMS rather than bolted on, with structured evidence and a single source of truth you can show to clients, auditors and insurers, we are ready to help.
A conversation that walks through one or two real incidents, and how they might have looked inside a joined‑up ISMS, will show whether this is the right foundation for your next phase of growth. When your current state matches the gaps described earlier and you are ready to strengthen A.5.24 by turning incident chaos into a structured, commercially valuable capability, ISMS.online is prepared to support you.
Frequently Asked Questions
How should an MSP interpret ISO 27001:2022 A.5.24 in day‑to‑day operations?
ISO 27001:2022 A.5.24 expects your MSP to run a repeatable incident capability, not just store an “incident policy” in your ISMS. In practice that means you design, resource, operate and regularly test an incident lifecycle – and can show that real cases have followed that design.
What does “planned and prepared” mean for an MSP?
For a managed service provider, A.5.24 lands in four very visible areas:
- Design: a policy and documented procedure that fit your Information Security Management System (ISMS) or Annex L Integrated Management System (IMS), with clear definitions of what counts as an “information security incident” across tenants and services.
- People: named roles that work across time zones and multiple tenants, with owners for detection, triage, containment, communication, recovery and review.
- Execution: a lifecycle that engineers can follow under pressure without hunting through SharePoint, usually a simple flow from detection → triage → containment → recovery → review.
- Evidence: a system of record that ties everything together and shows how real incidents moved through that lifecycle.
If an auditor or major customer asks your team to walk through the last serious incident for a key tenant, you should be able to:
- Open a single incident record in your ITSM tool for that tenant.
- Show timestamps, state changes, severity and assigned roles.
- Point to policy clauses, RACIs and A.5.24 alignment.
- Show what changed afterwards – corrective actions, playbook updates, training or contract changes.
Well‑run incident management feels boring on the outside – because the surprises have already been designed out of it.
When you manage your incident policy, procedures, roles and incident records together in ISMS.online, you can show that A.5.24 is baked into your ISMS or Annex L IMS, rather than being a side document you dust off before an external audit.
How should an MSP structure incident responsibilities with each client under A.5.24?
Under A.5.24 you are expected to treat incident responsibilities as a designed, per‑tenant shared model, not a vague assumption buried in email threads. Auditors and enterprise customers will look for signs that you have decided – and documented – who does what at each stage of an incident, and that both sides recognise this split.
How can you design a clear shared responsibility model?
A practical method that works across most MSP environments is:
- Start with a standard RACI that matches your normal incident flow: detection, triage, containment, eradication, recovery, notification, communication and review.
- Set sensible defaults for your managed services, for example:
  - Your MSP: responsible for detecting and containing threats inside managed platforms and services.
  - The client: accountable for regulatory notifications, customer communications and business decisions affecting their own operations.
  - Shared: providing evidence, agreeing disruptive actions, defining what is “material” or “notifiable”.
- Adjust by tenant instead of reinventing the wheel:
  - Higher‑risk or regulated sectors (finance, healthcare, public sector) may need faster notification commitments and more joint decisions.
  - Sophisticated in‑house security teams may want more control; smaller clients may expect you to drive almost everything.
Those RACIs should sit where teams will actually find and maintain them – typically within your ISMS or IMS, linked to A.5.24, supplier controls such as A.5.19 and the relevant service descriptions.
How do you make shared responsibilities visible during real incidents?
A designed split only helps if it appears in the tools and artefacts people touch under pressure:
- Contracts and SLAs: reference the shared incident model and set expectations for detection, notification and response times.
- Ticket templates: include fields such as “Client incident owner”, “Regulatory notification owner”, “Business approver for disruptive actions” and “Communications lead”.
- Playbooks: call out who triggers which decision, who speaks to which stakeholder group, and which approvals are required at each step.
When you can show that the same shared design appears consistently in contracts, RACIs, ticket fields, playbooks and a recent tenant‑specific incident record, you make A.5.24 easy for auditors and large buyers to trust – and far easier for your teams to follow across hundreds of clients.
Which incident playbooks and runbooks does an MSP genuinely need for A.5.24?
A.5.24 does not reward a bloated wiki nobody opens at 2 a.m. It expects a lean set of playbooks and runbooks that cover your most likely threats, aligned to the services you actually run and the tools your SOC and engineers really use.
Most MSPs get strong coverage with 4–7 well‑designed playbooks tailored to their managed environments, for example:
- Ransomware or destructive malware on managed endpoints or servers.
- Business email compromise: account takeover, MFA fatigue, risky forwarding rules.
- Privileged account compromise: admins, service accounts, break‑glass identities.
- Suspected data exfiltration from a managed cloud or on‑prem environment.
- Multi‑tenant platform incident, where a common tool or service misbehaves and security may or may not be the root cause.
- Third‑party SaaS compromise that affects multiple tenants through your managed stack.
Each playbook should answer the same fundamental questions:
- What typically triggers this scenario?
- Who leads and who supports, inside your organisation and on the client side?
- How do you classify severity and when do you escalate?
- When and how do you involve the client, legal and privacy roles?
- Which approvals are required before high‑impact actions such as isolation or data wipes?
- What information must be captured in the incident record at each stage to satisfy A.5.24 and neighbouring controls?
How should you structure and maintain runbooks for specific tools?
Playbooks describe who does what and when at a scenario level; runbooks capture how to perform specific actions on each platform:
- Isolating a device in your EDR or endpoint management solution.
- Locking and resetting identities in major cloud providers.
- Capturing log and telemetry snapshots from SIEM, firewall or proxy.
- Checking and cleaning suspicious mailbox rules and forwarding destinations.
Keeping policy, playbooks and runbooks separate but cross‑linked inside ISMS.online has clear benefits:
- Governance (policy and control wording) stays stable while technology changes.
- Engineers know exactly where to look for “what’s the right next step?” versus “which command or console button do I use?”.
- You can show auditors a clean chain from A.5.24 policy text → scenario‑level playbook → platform‑specific runbook → real incident tickets where those artefacts were used.
If your current repository is sprawling or outdated, starting with a focused library that matches your most common incidents will do more for A.5.24 – and for your clients – than a long list of rarely touched documents.
How can an MSP embed A.5.24 into ticketing, monitoring and SOC operations without slowing engineers down?
You make A.5.24 part of normal work by treating your ITSM incident record as the single source of truth and wiring monitoring and collaboration tools around it. The incident record tells the full story; consoles, dashboards and chat capture the technical depth behind that story.
What should an A.5.24‑aligned incident record include?
In your ITSM or service desk tool, define a dedicated “information security incident” type that reflects your documented process:
- Core fields for tenant, environment, affected service, severity, data sensitivity and potential regulatory relevance.
- A state flow that mirrors your procedure (for example: New → Triage → Investigation → Containment → Recovery → Review → Closed).
- Mandatory fields and checklists at key transitions:
  - Before closing, has a review been completed?
  - If the incident involved personal data, has privacy been consulted?
  - Were agreed notification timelines met?
- Summaries and links for:
  - Key actions and approvals, with who authorised what and when.
  - Client communications, including channels and times.
  - Underlying alerts, cases or log sources stored elsewhere.
Security‑specific categories and tags let you separate information security incidents from general outages. That makes it much easier to report on trends, prove readiness to auditors and drive improvements across your Information Security Management System.
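As an illustration of enforcing that flow, the sketch below encodes the allowed state transitions and refuses to close a ticket until the checklist fields are set. The state names and checklist keys mirror the examples above but remain assumptions about your own configuration:

```python
# State flow mirroring the documented procedure, with closure checks.
# State names and checklist keys are illustrative configuration choices.
ALLOWED_TRANSITIONS = {
    "new": {"triage"},
    "triage": {"investigation", "closed"},  # confirmed false positives close early
    "investigation": {"containment"},
    "containment": {"recovery"},
    "recovery": {"review"},
    "review": {"closed"},
    "closed": set(),
}

CLOSURE_CHECKLIST = (
    "review_completed",
    "privacy_consulted_if_personal_data",
    "notification_timelines_met",
)

def transition(ticket: dict, new_state: str) -> dict:
    """Move a ticket to a new state, enforcing the flow and closure checks."""
    if new_state not in ALLOWED_TRANSITIONS[ticket["state"]]:
        raise ValueError(f"{ticket['state']} -> {new_state} is outside the documented flow")
    if new_state == "closed":
        missing = [c for c in CLOSURE_CHECKLIST if not ticket.get(c)]
        if missing:
            raise ValueError(f"cannot close while unchecked: {missing}")
    ticket["state"] = new_state
    return ticket
```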
How do monitoring and SOC tools fit around that design?
Once you have a clear record type and flow, you decide which alerts should create or enrich incident records, such as:
- High‑impact or high‑confidence detections from SIEM, EDR or cloud security tools automatically raising pre‑populated incident tickets.
- Lower‑severity signals being grouped for analyst review, with an easy promotion path to “incident” when certain criteria are met.
- Integrations that add context – affected users or devices, correlated events, evidence artefacts – back into the master record rather than leaving everything trapped in chat or individual consoles.
If your team uses chat or virtual bridges during live handling, a brief summary of decisions and approvals should always be pushed back into the incident record so you can demonstrate control when someone reviews the case months later.
When you design this flow once, connect it to A.5.24 and its related controls in ISMS.online, and train your SOC and service desk to treat the incident record as “where the story lives”, you satisfy the control without adding bureaucratic overhead for engineers.
What evidence should an MSP keep to convincingly demonstrate incident readiness for A.5.24?
A.5.24 is usually tested through recent real incidents, not theoretical checklists. Auditors, insurers and large customers will typically select one or two cases and ask you to show how they unfolded against your documented incident approach.
What does a strong evidence set look like for each incident?
For each material incident – especially those involving sensitive data or major disruption – you should be able to present:
- The main incident record:
  - Timestamps, state changes and severity.
  - Assigned roles and hand‑overs across shifts or teams.
  - A short narrative of what happened and why key decisions were taken.
- Linked technical artefacts:
  - SIEM or EDR alerts, case IDs and summary exports.
  - Relevant log extracts or forensics notes, or references to where that data is retained securely.
- Client‑facing history:
  - Who was informed, when and by which channel.
  - How you met or exceeded contractual notification timeframes.
  - Any follow‑up reports or meeting notes shared with the client.
- Review and improvement outputs:
  - Likely root cause, contributing factors and residual risk.
  - Specific corrective and improvement actions, with owners and due dates.
  - Updates made to playbooks, contracts, templates or RACIs as a result.
For most MSPs, the challenge is consistency rather than volume. A few well‑chosen attachments and references that clearly support the storyline are worth far more than dozens of unstructured log files.
How can you avoid drowning in evidence across many tenants and services?
You keep evidence manageable by standardising patterns by client segment:
- Define which log sources and monitoring outputs you rely on for different types of service (managed endpoint, cloud tenancy, network).
- Standardise how those artefacts are referenced or attached inside incident records.
- Set retention periods and access controls that align with legal and contractual obligations for each segment.
Periodic evidence reviews – where you take a closed incident at random and ask “Would an external party find this complete and believable?” – often surface small design tweaks with large benefits.
When you manage your incident evidence model, related policies and A.5.24 mappings together in ISMS.online, you can show auditors and strategic clients that readiness is consistent and tenant‑aware, not something you scramble to reconstruct when a questionnaire or claim arrives.
How do training and exercises help an MSP move from paper compliance to real strength under A.5.24?
Training and exercises are where A.5.24 turns from documentation into reflexes. The control talks about planning and preparation; for an MSP that means teams across roles have practised realistic incidents using your actual tools, records and playbooks, not just read a policy once a year.
What training approaches work best for MSP teams?
Short, role‑specific sessions almost always beat long generic presentations:
- Analysts and engineers: run through simulated alerts in your monitoring stack, raise and update incident records, and follow playbooks step‑by‑step until the pattern feels natural.
- Account managers and service owners: practise time‑pressured client updates during a realistic outage or compromise, using the information they would see in tickets and dashboards.
- Legal, privacy and compliance colleagues: rehearse notification decisions with incomplete information, based on what is actually captured in your incident records and logs.
- Senior leaders: practise when to join bridges, how to approve disruptive containment quickly and how to align internal and external messages.
These sessions build confidence that, when a serious event hits a key tenant, people know exactly where to look and what to do, rather than losing time debating basic steps.
How should you design an exercise programme that satisfies A.5.24 without overloading your teams?
You do not need an elaborate war‑gaming programme; a simple, visible calendar is often enough:
- Internal simulations at least once a year for your highest‑impact scenarios (for example ransomware, business email compromise, major platform outage).
- Occasional joint exercises with strategically important or regulated customers, making RACIs, escalation paths and communication patterns real for both sides.
- Brief reports after each exercise capturing:
  - What worked well and should be reinforced.
  - Where roles, information or tools were confusing or slow.
  - A small number of concrete improvements to RACIs, playbooks, ticket templates, logging or contracts.
Those improvement actions should feed into your normal ISO 27001 mechanisms – risk treatment plans, corrective action logs, management reviews – so you can demonstrate a full loop from design to testing to improvement.
When you plan, deliver and track these sessions inside ISMS.online alongside your A.5.24 policy, playbooks and incident records, you present a clear story to auditors, regulators and enterprise buyers: incident readiness is designed, exercised and strengthened as part of your Information Security Management System, not left to chance. And that is exactly the position a modern managed service provider wants to be in when a serious incident or demanding customer arrives.