Why MSP SOC/NOC teams struggle with event decisioning
MSP SOC and NOC teams struggle with event decisioning because they rely on individual judgement rather than a shared, repeatable process. Under constant alert pressure, analysts improvise which of thousands of daily signals matter, who should act and how fast. When your real decision logic lives in people’s heads instead of shared criteria and records, the same issue is treated differently on each shift, customers receive mixed messages and you have little you can confidently show to ISO 27001 auditors or defend in customer audits.
Calm, consistent decisioning is worth more than occasionally brilliant firefighting.
The alert deluge and “hero analyst” culture
The alert deluge and “hero analyst” culture emerge when analysts survive on personal shortcuts rather than agreed rules. In a typical multi‑tenant SOC, an endless stream of alarms from SIEM, endpoint tools, email gateways, firewalls and performance monitoring platforms blurs into a single queue, and over time experienced staff build mental shortcuts about which signals to trust and which to ignore. That keeps the lights on day to day, but it means your true decision logic lives in a few people’s heads and on a handful of sticky notes, leaving you exposed when people leave, workloads spike or auditors start asking for proof.
You can usually spot this pattern when:
- Different shifts treat the same alert type in noticeably different ways.
- Tickets bounce between queues because nobody agrees whether an alert is an incident, a health issue or noise.
- Near misses appear in post‑mortems when a dismissed event later proves serious.
This kind of implicit logic may work while you have the right people on duty, but it is fragile. Loss of a key analyst, a surge in alert volume or new regulatory expectations can expose the gaps immediately.
The hidden costs of inconsistent decisions
The hidden costs of inconsistent decisions show up as wasted effort, confused customers and difficulty proving improvement to management and auditors. Analysts spend time chasing benign events because criteria are vague, while genuine issues get escalated late or unevenly. Customers receive different answers depending on who answers the phone, and SLA timers start at different points for the same scenario, which undermines trust and makes audit narratives harder to sustain.
A majority of organisations in the 2025 State of Information Security report say they were impacted by at least one third-party or vendor-related security incident in the previous year.
Inconsistent event decisioning rarely appears as a single catastrophic failure; instead it leaks value and increases risk everywhere. Tickets are re‑triaged, SLAs are missed because nobody is certain when the timer should start, and management cannot easily tell whether security operations are actually improving.
You also pay in people terms. If your best staff are constantly firefighting ambiguous alerts, they burn out, move on or disengage from improvement work. That leaves you even more dependent on ad‑hoc judgement by fewer people, and it makes standardising for ISO 27001 much harder.
What ISO 27001:2022 Annex A.5.25 actually requires in an MSP context
ISO 27001:2022 Annex A.5.25 requires you to assess information security events systematically and decide whether they are incidents using defined criteria, roles and records. In an MSP context that means turning a short control statement into concrete policies, workflows and artefacts that work across multiple tenants and tools. The control looks small on paper, but it has wide implications for assurance, reporting and how you handle demanding customer and auditor questions.
In practice, A.5.25 requires MSPs to embed a consistent, repeatable event‑assessment process in everyday operations. You must be able to show that relevant events are visible to the decision process, that staff use agreed criteria to classify them and that you keep records of what was decided and why. For ISO 27001 certification and customer audits, this traceability often matters as much as any single technical response. Incident‑handling guidance such as NIST SP 800‑61 also stresses documented process and evidence across the incident lifecycle, not just isolated technical fixes, which reinforces this emphasis on traceability.
Almost all organisations in the 2025 ISMS.online survey list achieving or maintaining security certifications such as ISO 27001 and SOC 2 as a key priority for the coming years.
The core control in plain language
The core control in plain language is that every relevant security event must be assessed against agreed criteria and categorised consistently. You are expected to show that events are visible to the decision process, that staff know how to apply those criteria and that you can evidence what was decided and why. In other words, you need clear visibility, criteria, people and records for every event that matters.
In paraphrased form, A.5.25 says you must assess information security events and decide whether they should be categorised as information security incidents. Read closely, that implies four specific obligations:
- Events must be visible to the decision process, not lost in logs or ignored.
- There must be criteria that staff can use to decide consistently.
- There must be people with the authority and training to make those decisions.
- There must be records of what was decided and why.
Your real challenge is not understanding this sentence; it is embedding those four obligations into a complex, multi‑tenant operating model without slowing everything down or confusing customers.
How A.5.25 relates to the rest of incident management
A.5.25 relates to the rest of incident management as the hinge between detection and structured response. It connects directly into controls on preparation, response, learning and evidence collection, so auditors will expect a clear chain from the original alert through to the incident record and subsequent improvements. If that chain breaks in the middle, your story will look weak even if some technical fixes were effective.
A.5.25 does not stand alone. It sits between:
- Planning and preparation, where you ensure you have the people, tools and communications ready to handle incidents.
- Response to incidents, which is what you actually do once something is declared an incident.
- Learning from incidents, including post‑incident reviews, improvements and trend analysis.
- Collection of evidence, ensuring logs, tickets and artefacts are legally and operationally usable.
From an auditor’s point of view, an event should be traceable along this chain: from initial detection, through assessment and classification (A.5.25), into response (A.5.26), and finally into lessons learned and evidence (A.5.27 and A.5.28). Standards‑mapping work from organisations such as ENISA reinforces this lifecycle view by aligning ISO 27001 incident‑related controls along a single detection‑to‑lessons‑learned pathway.
If that trail breaks at the assessment and decision point, your story will not hold together, even if some individual responses were technically sound.
What this looks like in a multi‑tenant MSP
In a multi‑tenant MSP, A.5.25 must be robust enough to handle different customers, tools and regulatory regimes without fragmenting into dozens of bespoke processes. You need a standard spine for event assessment, with tenant‑specific parameters layered on top. Customers and auditors will expect you to show how decisions are made consistently and fairly across tenants, even when SLAs, risk appetites and regulatory expectations differ.
In the 2025 ISMS.online State of Information Security survey, around 41% of organisations said managing third-party risk and tracking supplier compliance was one of their top information-security challenges.
For MSPs, A.5.25 has to cope with realities such as:
- Different customers having different risk appetites and SLAs.
- Shared tooling feeding mixed alerts from many tenants into common queues.
- Distributed teams across time zones and shifts.
- Regulatory and contractual obligations that vary by sector and geography.
Your implementation therefore needs to answer questions like:
- Which events are in scope for A.5.25 assessment, and which are filtered earlier?
- Who decides whether an event affecting several tenants is one incident, multiple incidents or just background noise?
- How do you evidence that decisions were made consistently, even when different customers are involved?
The standard does not answer these questions for you, but customers and auditors will. A good A.5.25 design makes those decisions explicit, consistent and defensible.
Defining events, incidents and weaknesses for MSP operations and SLAs
You implement A.5.25 effectively when everyone in your SOC and NOC shares clear definitions of “event”, “incident” and “weakness” that fit your environment, contracts and SLAs. Without that shared language, no decision workflow or tooling can deliver consistent outcomes, and you will struggle to explain or evidence your choices to customers, auditors and your own management team. The definitions also need to be risk‑based, not just copies of tool labels or vendor marketing.
Risk‑based definitions that work in real operations
Risk‑based definitions work in real operations because they link technical events to business impact and obligations, not just to tool terminology. By framing events, incidents and weaknesses around confidentiality, integrity, availability and compliance duties, you give analysts criteria they can apply consistently to different tenants and technologies. This creates a strong foundation for your A.5.25 procedures and for clause‑level assurance under ISO 27001.
Many MSPs find the following working definitions helpful:
- Information security event: any observable occurrence in a system, service or network that may be relevant to information security, such as a SIEM alert, unusual login, suspicious email or traffic spike.
- Information security incident: an event or series of events that has compromised, or is likely to compromise, the confidentiality, integrity or availability of information or services, or that triggers legal, regulatory or contractual obligations.
- Weakness: a vulnerability or control deficiency revealed during operations (for example, misconfigured access, missing patches or inadequate logging) that may not be an active incident yet but increases the likelihood or impact of future incidents.
The table below summarises how these three terms differ and how they show up in MSP operations.
| Term | Definition in your MSP context | Typical example |
|---|---|---|
| Information security event | Observable occurrence relevant to security that may or may not have real impact | SIEM alert, unusual login, suspicious email |
| Information security incident | Event or series of events that compromises, or is likely to compromise, CIA or drive duties | Ransomware activity on a key server |
| Weakness | Control deficiency or vulnerability that raises the chance or impact of future incidents | Shared admin accounts, missing patches, weak logging |
These distinctions matter because each outcome should follow a different path. Events might simply be monitored, incidents trigger response playbooks and notifications, and weaknesses flow into risk, change or problem management rather than clogging incident queues.
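Those three outcome paths can be sketched as a small routing function. This is an illustrative sketch only: the field names and the order of checks in `classify` are assumptions about how a triage form might encode the definitions above, not text from the standard.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    cia_compromised: bool          # confidentiality/integrity/availability impact confirmed or likely
    legal_duty_triggered: bool     # regulatory or contractual obligation applies
    control_deficiency_found: bool # e.g. missing patch, shared admin account
    still_relevant: bool           # signal is genuine and worth keeping an eye on

def classify(a: Assessment) -> str:
    """Map an A.5.25 assessment to one of four outcome paths."""
    if a.cia_compromised or a.legal_duty_triggered:
        return "incident"   # response playbooks and notifications
    if a.control_deficiency_found:
        return "weakness"   # route to risk, change or problem management
    if a.still_relevant:
        return "monitor"    # keep watching, no response yet
    return "benign"         # close with the rationale recorded

# Example: a missing-patch finding with no active compromise.
print(classify(Assessment(False, False, True, True)))  # weakness
```

The point of the ordering is that incident criteria always win: a control deficiency discovered during an active compromise is still an incident first, with the weakness captured separately.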
Making definitions tenant‑aware
Definitions become truly useful when they can be applied consistently across tenants with different risk appetites and regulatory profiles. You need a core taxonomy that your analysts learn once, then adjustable severity scales and examples per tenant so people understand how the same type of event plays out differently in a high‑regulation bank versus a small retailer. This is also what customers and auditors expect when they ask how you tailor services to their risk.
In a multi‑tenant MSP, customers range from small businesses with modest impact profiles to highly regulated entities where any misstep is serious. You cannot sensibly use a single severity matrix for every tenant without distortion. At the same time, you cannot afford bespoke, completely different systems per customer.
A practical compromise is to:
- Maintain a core taxonomy of event and incident types that is common across tenants.
- Define severity levels using business impact and urgency, such as critical, high, medium and low, in a way that can be calibrated per tenant.
- For each tenant, specify how those severities map to their own terms such as “major incident”, “data breach” or “service outage”, and how that affects SLAs and notifications.
These calibration decisions should be documented explicitly, agreed during onboarding and revisited as risk profiles or regulations change.
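Documented calibration can be as simple as a lookup table maintained per tenant. The tenants and contract terms below are hypothetical; the shape is the point, not the values.

```python
# Core severities are common across all tenants.
CORE_SEVERITIES = ["critical", "high", "medium", "low"]

# Hypothetical per-tenant calibration agreed at onboarding.
TENANT_TERMS = {
    "acme-bank": {"critical": "major incident", "high": "data breach candidate",
                  "medium": "security event", "low": "informational"},
    "smith-retail": {"critical": "service outage", "high": "security event",
                     "medium": "informational", "low": "informational"},
}

def tenant_label(tenant: str, severity: str) -> str:
    """Translate a core severity into the tenant's contractual vocabulary."""
    return TENANT_TERMS[tenant][severity]

print(tenant_label("acme-bank", "critical"))   # major incident
print(tenant_label("smith-retail", "medium"))  # informational
```

Analysts learn one severity scale; the mapping layer absorbs the per‑contract differences.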
Treating weaknesses separately but consistently
Treating weaknesses separately but consistently ensures that you do not turn your incident queue into an improvement backlog, while still addressing systemic issues. Analysts need a clear way to flag weaknesses discovered during triage and to route them into risk or change processes. When these weakness records link back to A.5.25 assessments, you can show auditors that you are learning from events rather than just closing tickets.
Weaknesses often get lost because they do not demand immediate firefighting. A well‑designed A.5.25 implementation treats weaknesses as first‑class outputs of event assessment, alongside incidents and benign events. That means you:
- Provide analysts with a clear way to flag “weakness identified” during triage.
- Route weaknesses into risk, problem or change management with appropriate priority.
- Ensure weaknesses discovered during events are visible in your ISMS risk register and improvement plans.
This separation keeps incident queues free of long‑running improvement tasks while still ensuring systemic issues are captured and addressed.
Baking definitions into tools and conversations
Definitions only change behaviour if they appear in the tools and conversations analysts use every day. Ticket forms, dashboards, runbooks and customer reports should all talk about events, incidents and weaknesses in the same way. When you embed this vocabulary, you make it easier to train new staff, compare performance across tenants and answer audit questions with confidence and consistency.
You can reinforce definitions by:
- Encoding them into ticket templates and mandatory fields.
- Using them in dashboards and reports, rather than tool‑specific categories.
- Training SOC, NOC and account teams using real examples where classification was ambiguous.
Over time, this shared vocabulary becomes the foundation for consistent, auditable decisioning and smoother conversations with customers about what happened and why.
Designing an A.5.25‑aligned SOC decisioning workflow
An A.5.25‑aligned SOC decisioning workflow is a structured path that every in‑scope event follows from first detection to a clear, recorded outcome. It should be simple enough to follow under pressure yet rich enough to support consistent classification, escalation and evidence capture across tenants and shifts. When you get this flow right, you reduce noise, improve response and make ISO 27001 assurance conversations much easier.
A practical flow divides analysis into a small number of consistent stages, each answering a single question: is this signal real, does it matter to this tenant, and what should happen next? When you describe these stages clearly in runbooks and mirror them in your tooling, you create a decision engine that survives staff changes, volume spikes and external scrutiny.
Step 1 – Detection
An alert, log correlation, user report or monitoring signal appears and is captured as a potential information security event.
Step 2 – Validation
You quickly confirm whether the signal is genuine rather than a test, duplicate or obviously spurious alert.
Step 3 – Enrichment
You add context such as asset criticality, user identity, recent changes and similar events for that tenant.
Step 4 – Assessment against criteria
You evaluate impact, likelihood, scope and any legal, regulatory or contractual implications against agreed thresholds.
Step 5 – Decision
You classify the event as benign, monitor, information security incident or weakness using the documented criteria.
Step 6 – Routing and action
You route the case into the appropriate playbook or process, such as incident response, monitoring, risk management or closure.
Each of these steps should be described clearly in runbooks and mirrored in tickets or cases. For higher‑risk tenants, you may require additional approvals at the decision stage, such as sign‑off from a senior analyst or on‑call security architect.
Guardrails to manage false positives and false negatives
Guardrails to manage false positives and false negatives make your workflow safer by defining how to act when information is incomplete or ambiguous. You decide when to err on the side of treating something as an incident, when automation may safely close alerts and when new intelligence should trigger re‑assessment. These rules help you explain why apparently similar events received different treatment in different contexts.
No decision process is perfect, but you can reduce risk by making your tolerance explicit. Examples include:
- Defining when uncertainty must be resolved in favour of treating an event as an incident, especially when regulated data may be involved.
- Clarifying which low‑severity events can be auto‑closed after specific checks and which must always be reviewed by a person.
- Setting expectations for re‑assessment when new information appears, such as later threat intelligence linking a benign‑looking event to an active campaign.
These guardrails should be visible to analysts in playbooks and ideally encoded into automation rules and ticket workflows, so that they are applied consistently rather than reinvented in every shift.
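Encoded in automation, the guardrails above might look like the following sketch. The rule set and its ordering are illustrative assumptions, not prescribed by the standard.

```python
def apply_guardrails(proposed: str, regulated_data_possible: bool,
                     checks_passed: bool, new_intel: bool) -> str:
    """Adjust a proposed classification using explicit guardrail rules."""
    # Rule 1: uncertainty involving regulated data resolves toward "incident".
    if proposed == "monitor" and regulated_data_possible:
        return "incident"
    # Rule 2: low-severity auto-closure only after the defined checks have passed;
    # otherwise escalate to a human review instead of closing silently.
    if proposed == "benign" and not checks_passed:
        return "monitor"
    # Rule 3: new threat intelligence forces re-assessment of the event.
    if new_intel:
        return "reassess"
    return proposed
```

Because the rules are ordered and explicit, you can later explain why two superficially similar events ended up with different classifications.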
Roles, tiers and RACI
Roles, tiers and a RACI matrix translate your workflow into day‑to‑day responsibilities that survive shift changes and staff turnover. You need clarity on which levels of analyst may make final A.5.25 decisions, when escalation is mandatory and how accountabilities work when an incident spans multiple tenants. This structure is a common focus in ISO 27001 reviews, so documenting it clearly pays off and avoids confusion during real incidents.
In many MSP SOCs, Level 1 analysts focus on validation and initial enrichment, while Level 2 or 3 handle complex assessments and coordinate incidents. For A.5.25 you need to be clear about:
- Which levels are allowed to make final event‑to‑incident decisions and in what circumstances.
- When escalation is mandatory, for example suspected compromise of highly sensitive systems.
- How responsibilities are shared when incidents span multiple tenants.
A simple RACI matrix covering event assessment and decisioning can prevent confusion, especially during night shifts or major surges.
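A RACI matrix is easy to keep machine‑checkable, which helps catch the classic failure of two (or zero) accountable roles for a step. The roles and assignments below are hypothetical.

```python
# Hypothetical RACI for the assessment-and-decision steps
# (R=Responsible, A=Accountable, C=Consulted, I=Informed).
RACI = {
    "validation": {"L1": "R", "L2": "A", "incident_manager": "I"},
    "assessment": {"L1": "C", "L2": "R", "incident_manager": "A"},
    "decision":   {"L2": "R", "incident_manager": "A", "service_owner": "I"},
}

def accountable_for(step: str) -> str:
    """Exactly one role should hold 'A' for each step; fail loudly otherwise."""
    owners = [role for role, code in RACI[step].items() if code == "A"]
    assert len(owners) == 1, f"{step} must have exactly one accountable role"
    return owners[0]
```

Running that check whenever the matrix changes prevents accountability gaps from creeping in during reorganisations.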
Testing the workflow under stress
Testing the workflow under stress proves whether your design holds up under real‑world pressure, not just in neat diagrams. Scenario exercises, red‑team tests and simulated alert storms show whether analysts follow the agreed steps, whether automation behaves and whether decision records remain complete. The lessons you learn from these exercises should feed directly into adjustments to criteria, training and tooling.
It is easy to design a neat workflow on paper; it is harder to keep it working during a real attack. Table‑top exercises, red‑team engagements or simulations of large‑scale phishing waves are excellent ways to see whether:
- Analysts follow the steps or fall back to improvisation.
- Automation performs as expected.
- Decision records remain complete even when volumes spike.
Findings from these exercises should feed directly into tweaks to your criteria, training and tooling. That continuous loop is what turns A.5.25 from a static clause into a living operational asset.
Integrating NOC operations, ITIL flows and escalation paths
Integrating NOC operations, ITIL‑style flows and escalation paths ensures A.5.25 decisioning supports overall service health rather than undermining it. Security events often overlap with performance issues, so your SOC and NOC need shared rules about ownership, escalation and communication. When you embed A.5.25 into service management, you reduce friction, avoid conflicting actions and make it easier to tell a complete story to customers and auditors.
Security events rarely occur in isolation from service performance. For MSPs, the NOC and SOC are two sides of the same coin: one focused on availability and performance, the other on security. A.5.25 decisioning has to sit comfortably within this broader service‑management context so you do not fix security problems while inadvertently breaking services.
Clarifying ownership across SOC and NOC
Clarifying ownership across SOC and NOC starts with mapping where events originate and which team currently takes the lead. You want to know which alerts come from traditional performance monitoring, which from security tooling and which affect both. Once that map is clear, you can define when a NOC event should trigger a security assessment and when a SOC event must trigger a service‑impact review.
A sensible starting point is to map how events currently flow through your IT service management processes:
- Which events arrive via traditional monitoring and are owned by the NOC?
- Which come from security tooling and are owned by the SOC?
- Which are ambiguous, affecting both performance and security?
Once you understand the flows, you can define rules such as:
- When a service‑health event must trigger a security assessment, for example repeated authentication failures or unexplained traffic spikes.
- When a security event must trigger a service‑impact assessment, for example aggressive blocking rules or containment actions.
This mapping helps you pinpoint exactly where A.5.25 assessments need to happen and who is accountable at each step.
Building an escalation and communication matrix
An escalation and communication matrix turns your decision criteria into predictable actions for both internal teams and customers. It links event categories and severities to who gets notified, how quickly and through which channels. When the matrix is agreed with customers, you avoid both over‑communicating minor issues and under‑communicating serious ones, and you can show auditors that your process is systematic rather than ad‑hoc.
Different severities and contexts require different escalation paths. For example:
- A high‑severity incident involving potential data loss may require immediate notification to the customer’s security lead, the MSP’s incident manager and, in some sectors, regulatory teams.
- A medium‑severity event affecting a non‑critical system may only require escalation within the SOC and a routine status note to the customer.
You can capture these patterns in a simple escalation matrix that links severities, event types and communication expectations. Once that matrix is agreed with customers and internal teams, analysts are no longer guessing who to involve or when to raise visibility.
A clear matrix also supports better communication discipline. When everyone understands that a particular classification automatically triggers certain notifications, you reduce the risk of both over‑communication and dangerous silence.
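A minimal escalation matrix can be held as a lookup keyed on severity and context, with a deliberately cautious fallback. The recipients, timings and the fallback choice below are illustrative assumptions.

```python
# Hypothetical matrix: (severity, data_loss_possible) -> recipients and deadline.
ESCALATION = {
    ("high", True): {"notify": ["customer_security_lead", "msp_incident_manager",
                                "regulatory_team"], "within_minutes": 30},
    ("high", False): {"notify": ["msp_incident_manager"], "within_minutes": 60},
    ("medium", False): {"notify": ["soc_shift_lead"], "within_minutes": 240},
}

def escalation_for(severity: str, data_loss_possible: bool) -> dict:
    """Look up the agreed path; fall back to the most cautious one if undefined."""
    return ESCALATION.get((severity, data_loss_possible), ESCALATION[("high", True)])
```

The cautious default matters: an unmapped combination should over‑notify rather than fall silent while someone debates ownership.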
Aligning with SLAs and regulatory timelines
Aligning with SLAs and regulatory timelines ensures your decisioning workflow supports contractual commitments and legal duties. You need to be explicit about when SLA timers start, which decision points trigger customer notifications and when an event meets regulatory thresholds. These rules should be visible in runbooks and contracts so analysts are not left guessing under pressure.
A strong majority of respondents in the 2025 State of Information Security survey say the speed and volume of regulatory change are making compliance significantly harder to sustain.
SLAs often include response‑time commitments based on severity, while regulations may impose specific notification deadlines for certain incident types. Your event decisioning therefore needs to:
- Start SLA timers at the correct point, such as initial detection versus confirmed incident.
- Distinguish clearly between internal informational events and notifiable incidents.
- Trigger regulatory notifications only when thresholds are met, but always in time.
These expectations should be built into runbooks and contracts so that analysts are not left guessing, and customers know what to expect. It also ensures your SOC and NOC are not working at cross‑purposes when time‑critical decisions are needed.
Using joint exercises to refine the model
Joint exercises between SOC and NOC validate whether your integrated model holds up in realistic scenarios. By walking through incidents that start as performance problems and turn into security issues, or vice versa, you find gaps in ownership, communication and escalation. Each lesson gives you an opportunity to refine A.5.25 decision points, matrices and training so they better reflect the way you deliver services.
The best way to validate SOC–NOC integration is to walk through realistic scenarios together. These might include:
- A sudden loss of availability that turns out to be the result of a denial‑of‑service attack.
- A security hardening change that unexpectedly causes a critical application outage.
- A cloud provider issue affecting several tenants simultaneously.
As you practise, capture where ownership, communication or decisioning is unclear, and feed those lessons back into your matrices, playbooks and training. Over time, this builds confidence that A.5.25 assessments are not happening in a vacuum but are integrated into the way you run services.
Tooling, automation and evidence capture for multi‑tenant A.5.25
Tooling, automation and evidence capture are where your A.5.25 design succeeds or fails in day‑to‑day operations. You need a coherent tool stack where events flow into a case of record, automation supports but does not replace human judgement and evidence is captured automatically as work happens. When tools align to your process, you generate proof for ISO 27001 and customer audits as a by‑product rather than an afterthought.
Even the best process design will fail if your tools cannot support it. For A.5.25 in an MSP, the challenge is to connect SIEM, SOAR, monitoring and ITSM platforms in a way that enables consistent decisions and automatic evidence capture across all tenants, without forcing analysts to duplicate work.
When your tools support the workflow, evidence appears as a by‑product, not an afterthought.
Choosing a “case of record”
Choosing a case of record means deciding which system holds the authoritative story of each event and its outcome. For most MSPs this is the service desk or incident management tool, because it already supports ownership, workflows and reporting, and common IT service management guidance treats it as the primary system of record. Once that is chosen, you can:
- Ensure every relevant alert results in a case or is linked to an existing case.
- Store classification, severity, decision and rationale in structured fields.
- Link cases to assets, tenants and services via your configuration data.
Other tools still matter, but the ITSM layer becomes the place where customers and auditors can see what you actually decided and did, rather than piecing together information from disparate sources.
A dedicated ISMS platform such as ISMS.online can sit above these systems, helping you link policies, runbooks, risks, incidents and improvement actions to the tickets your SOC and NOC already use so that control intent, operational reality and audit evidence stay aligned. Public guidance from ISMS.online on Annex A.5.25 illustrates how this type of platform can be layered over operational tools to give a coherent view of control implementation.
Balancing automation and human judgement
Balancing automation and human judgement means using tools to accelerate safe steps while keeping high‑impact decisions under expert control. Enrichment, correlation and obvious false‑positive handling are good automation candidates. Decisions that might trigger regulatory notifications, major incidents or contractual penalties should remain firmly in human hands, with clear approval paths documented for ISO 27001 A.5.25 and related controls.
Automation is essential at MSP scale, but it must be used thoughtfully. Good candidates for automation include:
- Enrichment steps, such as pulling asset details or recent change records.
- Deduplication and correlation of repeated alerts.
- Automatic closure of low‑risk alerts that meet tightly defined criteria.
Decisions with significant business or regulatory impact should remain under human control, potentially with additional approvals. Your automation playbooks and monitoring rules should reflect these boundaries clearly, so that staff trust the automation rather than fighting it or bypassing it when under pressure.
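One simple way to make that boundary explicit is an allow‑list for automation with a hard block on human‑only actions. The action names below are hypothetical policy labels, not tool APIs.

```python
# Illustrative policy: actions safe to automate vs. those requiring a human.
AUTOMATABLE = {"enrich", "deduplicate", "auto_close_low_risk"}
HUMAN_ONLY = {"declare_incident", "notify_regulator", "notify_customer"}

def may_automate(action: str) -> bool:
    """Unknown actions default to human review rather than silent automation."""
    if action in HUMAN_ONLY:
        return False
    return action in AUTOMATABLE
```

Defaulting unknown actions to human review means a new playbook step cannot accidentally automate a regulatory notification.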
Designing tenant‑aware logic
Designing tenant‑aware logic allows you to standardise structure while tuning behaviour for each customer. You use common workflows and fields but parameterise thresholds, notification targets and timing per tenant or tenant group. That way, analysts can apply the same A.5.25 process to all tenants while respecting different SLAs, regulatory duties and impact profiles.
Because your customers differ, you cannot apply identical thresholds and playbooks everywhere. Instead, consider:
- Using parameterised rules where severity thresholds, notification targets and timing can be set per tenant.
- Grouping tenants by profile, such as high‑regulation, medium‑criticality and low‑risk, to simplify management while still respecting differences.
- Recording tenant‑specific parameters in one place, so that analysts know what they are working with.
This approach lets you standardise structure and terminology while adapting behaviour to each customer’s needs, which is often what auditors and enterprise customers expect from a mature MSP.
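As a rough illustration, tenant-aware logic can be as simple as one shared decision function reading per-tenant parameters from a single table. The tenant identifiers, group names, thresholds and notification targets below are invented for the sketch; the structure is what matters: one workflow, parameterised per tenant.

```python
# Illustrative sketch: per-tenant parameters layered over one common
# escalation decision. All tenant names, thresholds and targets are
# invented examples, not recommended values.
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantProfile:
    group: str                # e.g. "high-regulation", "low-risk"
    severity_threshold: int   # minimum severity that opens a case
    notify_minutes: int       # customer notification target
    notify_target: str

TENANTS = {
    "acme-health": TenantProfile("high-regulation", 2, 30, "soc@acme.example"),
    "smith-retail": TenantProfile("low-risk", 4, 240, "it@smith.example"),
}

def escalation_for(tenant_id: str, severity: int):
    """Same decision structure for every tenant; only parameters vary."""
    p = TENANTS[tenant_id]
    if severity < p.severity_threshold:
        return None  # below this tenant's threshold: log only
    return {"notify": p.notify_target, "within_minutes": p.notify_minutes}
```

Because the parameters live in one table, analysts see a single mental model while the behaviour still honours each tenant's SLA.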
Making evidence collection automatic
Making evidence collection automatic is one of the most valuable outcomes of well‑designed tooling. You configure mandatory fields, time stamps and links so that every A.5.25 assessment leaves a trail without analysts writing extra reports. When you later face an ISO 27001 audit or a demanding customer review, you can extract those records and walk through them calmly, rather than reconstructing decisions from memory and scattered files.
A major advantage of a well‑integrated toolset is that A.5.25 evidence becomes a natural by‑product of operations. That means:
- Decision fields are mandatory in tickets before closure or escalation.
- Time stamps show when assessments and decisions occurred.
- Links exist from events to incidents, weaknesses, changes and problem records.
Security governance principles, including high‑level guidance such as the OECD Guidelines for the Security of Information Systems and Networks, emphasise embedding logging, accountability and auditability into everyday processes rather than treating them as separate reporting tasks. When you later need to demonstrate compliance or reconstruct a particular decision trail, you can extract the data rather than rely on manual recollection or ad‑hoc spreadsheets. It is also worth picturing how much easier that becomes when your decision records live inside an integrated ISMS, rather than being scattered across files and systems.
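The first of those points, mandatory decision fields before closure, can be sketched as a small pre-closure validation. The field names below are illustrative assumptions, not tied to any particular ITSM product, but most service desks expose an equivalent hook for this kind of check.

```python
# Minimal sketch of a pre-closure check: a ticket cannot be closed
# until the A.5.25 decision fields are filled in. Field names are
# illustrative assumptions only.
REQUIRED_DECISION_FIELDS = (
    "classification",   # event / weakness / incident / benign
    "decided_by",
    "decided_at",       # ISO 8601 timestamp of the decision
    "linked_records",   # incident, change or problem references
)

def missing_evidence(ticket: dict) -> list:
    """Return the decision fields still missing before closure is allowed."""
    return [f for f in REQUIRED_DECISION_FIELDS if not ticket.get(f)]
```

A closure workflow that simply refuses tickets where `missing_evidence` is non-empty turns evidence collection into a by-product of normal work rather than an extra reporting task.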
Manage all your compliance, all in one place
ISMS.online supports over 100 standards and regulations, giving you a single platform for all your compliance needs.
Documentation, metrics and audit storytelling for A.5.25
Documentation, metrics and audit storytelling turn your A.5.25 practices into something you can show and explain to others. You need a coherent set of policies, procedures and runbooks that align with your actual workflows, plus metrics that reveal decision speed and quality. When you combine these with clear case narratives, you give customers, auditors and senior stakeholders confidence that your decision process is real and improving.
Once your definitions, workflows and tools are in place, you need to make them visible and provable through documentation and metrics. A.5.25 is as much about being able to show what you do as it is about doing it, particularly when you are dealing with demanding customers and external auditors.
Building a coherent documentation set
A coherent documentation set shows how A.5.25 is implemented from policy through to real tickets, rather than existing as a single stand‑alone document. You should be able to point to a policy, a procedure, runbooks, a RACI matrix, a taxonomy and sample records that all tell the same story. Keeping these items aligned and in one place makes ISO 27001 certification and customer due diligence far more straightforward.
A typical MSP documentation stack for A.5.25 includes:
- An information security incident management policy that sets the overall intent and scope.
- A specific procedure describing how information security events are assessed and decided.
- SOC and NOC runbooks that show how the procedure is applied in day‑to‑day operations.
- A RACI matrix for event assessment and decisioning.
- A taxonomy and severity scheme with clear criteria and examples.
- Samples of completed records demonstrating the process in action.
A short worked example can make these documents come alive. For instance, you might show how a suspicious login alert from SIEM was validated, enriched, assessed as an incident due to potential data exposure, escalated to the customer within agreed timelines and then closed with lessons captured in your risk register.
These documents should be consistent with each other and with what actually happens in tools and teams. Keeping them together in a single information security management system makes it easier to keep versions aligned, roll out updates and demonstrate control to customers and auditors.
An ISMS platform like ISMS.online can help you store this material in one place, link each document to the relevant control and process, and show how policies, procedures, tickets and improvements all support your A.5.25 obligations.
Choosing and using the right metrics
The right metrics show whether your A.5.25 process is timely, consistent and effective, rather than merely busy. You want measures like detection‑to‑decision time, percentage of events assessed within target, reclassification rates and weaknesses identified. These numbers support management reviews under ISO 27001 and reassurance to customers that your decision engine is working as designed.
Metrics for A.5.25 should focus on decision quality and timeliness, not just volume. Incident‑management policies from international bodies, such as the United Nations incident management policy, place a similar emphasis on the quality and speed of response rather than simple counts of incidents handled.
Useful examples include:
- Time from event detection to classification decision.
- Percentage of events assessed within agreed internal timeframes.
- Rate of reclassification, for example events later upgraded to incidents.
- False positive rate by event type and tenant.
- Number and types of weaknesses identified through event assessment.
The table below illustrates how a few core metrics support different decisions.
| Metric | What it shows | How you use it |
|---|---|---|
| Detection‑to‑decision time | Speed of assessment | Check capacity and refine guardrails and playbooks |
| Percentage assessed within timeframe | Process discipline | Hold teams accountable and justify resource requests |
| Reclassification rate | Quality of initial decisioning | Identify training or criteria gaps |
| Weaknesses identified via assessments | Improvement opportunities discovered in triage | Feed risk and change management programmes |
You can present these metrics in management reviews and use them to prioritise improvements in training, playbooks and tooling. They also provide a powerful way to show customers and auditors that your decision process is working and evolving rather than remaining static. It is worth testing these measures in your own environment through scenario walk‑throughs and retrospectives before you invest heavily in new tooling or major process changes.
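As a minimal sketch, the first two metrics in the table can be computed directly from ticket timestamps. The field names and the 60-minute target below are assumptions for illustration; your own targets should come from your SLAs and internal timeframes.

```python
# Illustrative calculation of two core A.5.25 metrics from ticket
# records. Field names and the default target are assumptions.
from datetime import datetime
from statistics import median

def decision_metrics(tickets: list, target_minutes: int = 60) -> dict:
    """Median detection-to-decision time and % decided within target."""
    durations = []
    for t in tickets:
        detected = datetime.fromisoformat(t["detected_at"])
        decided = datetime.fromisoformat(t["decided_at"])
        durations.append((decided - detected).total_seconds() / 60)
    within = sum(d <= target_minutes for d in durations)
    return {
        "median_minutes": median(durations),
        "pct_within_target": 100 * within / len(durations),
    }
```

Even this crude calculation, run monthly per tenant group, is enough to feed a management review slide and spot drift in decision speed.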
Telling a clear audit and customer story
A clear audit and customer story walks through real examples from detection to learning, using the documentation and metrics you have already developed. You demonstrate how an event was spotted, assessed under A.5.25, classified, acted on and reviewed. When these stories match your documented procedures and the data in your tools, auditors and customers are far more likely to trust your control.
Auditors and sophisticated customers often want to see concrete examples, not just policies and charts. It helps to prepare a standard narrative structure you can apply to real cases, such as:
- What was detected and how?
- How was it validated and enriched?
- How was it assessed against your criteria?
- What decision was made, by whom and when?
- What actions followed, and what was the outcome?
- What was learned and changed afterwards?
With well‑structured documentation and data, you can walk through one or two such examples confidently, demonstrating that A.5.25 is embedded in your operations rather than existing just on paper. Over time, this storytelling builds trust and makes future audits and customer reviews less stressful.
Book a Demo With ISMS.online Today
ISMS.online helps you turn A.5.25 from an abstract clause into a practical, auditable decisioning framework that your SOC and NOC can live with every day. By centralising policies, runbooks, risks, incidents and improvement actions in one ISMS, you can link them directly to the tickets and evidence your teams already generate, so your control intent, operations and audit story stay in step.
In an ISMS.online demo you see how policy intent, operational workflows and audit evidence come together in a single, structured environment. The session typically walks through how definitions, roles, decision criteria and incident records sit side by side, so you can show exactly how an event moved from detection through assessment to outcome without chasing separate documents or spreadsheets.
What you will see in an ISMS.online demo
What you see in an ISMS.online demo is how your A.5.25 process can live in one organised place rather than in scattered files and tools. The session connects policy, procedures, tickets and improvements so you can follow a real decision from signal to outcome. That gives you a realistic view of how control intent, SOC and NOC activity and audit evidence can stay aligned.
You can also see how the same environment supports other Annex A controls, management reviews and continuous improvement, so your A.5.25 work does not live in isolation.
Who gets the most value from an ISMS.online demo
The people who get the most value from an ISMS.online demo are those currently juggling compliance, security operations and customer expectations with limited time and scattered tools. That often includes SOC and NOC leaders, ISMS owners, compliance managers and MSP executives who need to show customers and auditors a coherent control story. Seeing your existing A.5.25 workflows mapped into a structured ISMS helps each of these roles understand where effort is being wasted and where consistency can be strengthened.
Bringing a small cross‑functional group to the session also makes it easier to spot quick wins and agree on a realistic adoption path.
How ISMS.online supports A.5.25 decisioning
ISMS.online supports A.5.25 decisioning by making criteria, responsibilities and records first‑class citizens rather than buried details in scattered files. In the platform you can maintain your A.5.25 procedure, link it to SOC and NOC runbooks, define who may classify events as incidents and attach real tickets and incident records as evidence. That gives you a living catalogue of how you assess and decide events for different tenants and services.
If you value calm, consistent, auditable event decisioning that you can explain to customers and auditors without stress, ISMS.online is ready to help you explore what that could look like in your own MSP environment.
Book a demo
Frequently Asked Questions
How does ISO 27001:2022 A.5.25 really change the way your SOC and NOC make decisions?
ISO 27001:2022 A.5.25 expects your SOC and NOC to move from “whoever’s on shift decides” to a repeatable, explainable decision system you can defend to customers, auditors and regulators. Instead of ad‑hoc triage, you are expected to define how events are assessed, who may classify them, and how those decisions are recorded, reviewed and improved.
What’s the practical impact on day‑to‑day SOC and NOC work?
In day‑to‑day operations, A.5.25 sits between raw telemetry and formal incident handling:
- Before A.5.25: Each analyst interprets alerts differently, based on personal experience and pressure.
- With A.5.25 designed properly: Every in‑scope alert follows the same short decision path from signal to outcome, with clear criteria and roles.
For a multi‑tenant MSP, this affects:
- How similar patterns are treated across tenants and shifts.
- How quickly analysts can justify “no incident” vs “notify customer/regulator”.
- How credible your security operations look in tenders, customer reviews and audits.
When you make A.5.25 the spine of your triage layer, you reduce noise, speed up onboarding and lower the risk of inconsistent decisions that later appear as uncomfortable audit findings.
How should you adjust roles and authority under A.5.25?
A.5.25 works best when you are explicit about who can:
- Decide whether an alert is in scope for assessment.
- Classify something as an event, weakness or incident.
- Close a case or downgrade its severity.
- Approve deviations or exceptions.
Writing this into a concise RACI gives your analysts confidence and prevents awkward disputes in the middle of a busy shift. It also tells auditors and customers that decisions are not being made by accident or convenience.
How does an ISMS platform such as ISMS.online strengthen this control?
An ISMS gives A.5.25 a visible home inside your Information Security Management System, instead of scattering it across emails and runbooks. With ISMS.online you can:
- Hold the A.5.25 procedure, incident policy and SOC/NOC RACI in one place.
- Link real‑world events and weaknesses to the control, related risks and corrective actions.
- Show in management reviews how you are tightening decision criteria, training and automation over time.
That makes external conversations calmer. When a customer or auditor asks “Why did you treat these two alerts differently?”, you can walk them from the standard, through your procedure, into an actual decision trail without scrambling across disconnected systems.
How should you define events, incidents and weaknesses so SOC and NOC stay truly aligned?
You keep SOC and NOC aligned by defining events, incidents and weaknesses in plain, impact‑based language that everyone can use without checking clause numbers. Those definitions become the reference point for tools, runbooks, contracts and reports, so they must work for analysts, service managers and customers.
What definitions work in a multi‑tenant MSP environment?
A practical pattern many MSPs adopt is:
- Event: Anything observable that might affect security, availability or performance.
- Incident: An event or chain of events that actually threatens confidentiality, integrity, availability or legal/contractual obligations.
- Weakness: A control or process gap you discover while handling events or incidents, whether or not anything bad has happened yet.
Rooting these terms in business impact and likelihood helps analysts make calls that hold up in front of customers and auditors. When an analyst marks something as an incident, that label should mean the same thing in:
- Your service desk queue.
- Your ISO 27001 incident register.
- Your customer’s risk register or governance pack.
That consistency becomes especially important when you support multiple regions, sectors and regulatory regimes from one operations team.
How do you create a glossary people will actually use?
Long glossaries are rarely read. Start with a single page that covers only the terms people argue about most often:
- Draft definitions in everyday language.
- Test them with SOC, NOC, account managers and at least one non‑technical stakeholder.
- Rewrite any phrases that trigger confusion or debate.
Then weave those definitions into:
- Ticket categories and severity options in your ITSM tool.
- Customer contracts, SLAs and data‑processing agreements.
- Quarterly review decks and incident reports.
Because the same words appear in all of these places, staff and customers begin to adopt them instinctively. That reduces heated conversations about whether “this is really an incident” and lets you focus on impact and response instead.
How can ISMS.online help you keep terminology aligned?
When all your key artefacts live in one ISMS, language alignment becomes much easier to maintain. ISMS.online lets you:
- Maintain a central glossary that underpins policies, procedures, risks and incident records.
- Link definitions to specific controls and clauses, so people can see why they matter.
- Keep terminology in sync across ISO 27001, ISO 27701 and other Annex L standards you adopt.
That consistency is a quiet but powerful signal of maturity when auditors or customers compare your documentation to what they see in your operational tools.
How do you design an A.5.25 decision path that analysts will actually use?
You turn A.5.25 into something people actually use by designing a short, repeatable decision path that every relevant alert follows, and then building that path directly into the tools your analysts live in. The policy should describe the path; the tools should make it the line of least resistance.
What does a practical “signal to decision” pathway look like?
Many MSPs converge on a model such as:
- Detect: Tool raises a signal based on rules or behavioural thresholds.
- Validate: Analyst or automation checks whether the signal is real enough to investigate.
- Enrich: Add business context – tenant, asset, user, service, recent changes.
- Assess: Consider likely impact on confidentiality, integrity, availability and legal or contractual obligations, and how urgently escalation is needed.
- Decide: Label the case (benign, under observation, weakness, incident).
- Route: Assign to the right team with the right priority, SLA and communication plan.
You can reflect this in your case forms by:
- Making basic validation and enrichment fields mandatory for new cases.
- Using controlled lists for outcomes tied to your A.5.25 procedure and incident policy.
- Creating routing rules that move tickets to the right queues and on‑call groups when particular combinations of impact and likelihood appear.
This keeps the workflow short enough to use at 3am, but structured enough to show how and why you reached each decision.
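The six-step path above can be sketched as a single decision function. The confidence threshold, the crude impact-times-likelihood score and the outcome labels are placeholder assumptions for the sketch, not prescribed values; in practice these would map onto the controlled lists in your A.5.25 procedure.

```python
# Sketch of the detect → validate → enrich → assess → decide → route
# path as one function. All thresholds, scores and labels here are
# illustrative placeholders, not a real product API.
def signal_to_decision(signal: dict) -> dict:
    # Validate: is the signal real enough to investigate at all?
    if signal.get("confidence", 0.0) < 0.2:
        return {"outcome": "benign", "route": None}
    # Enrich: attach business context (stubbed for the sketch).
    case = {**signal, "tenant": signal.get("tenant", "unknown")}
    # Assess: crude impact x likelihood score.
    score = signal.get("impact", 0) * signal.get("likelihood", 0)
    # Decide and route using controlled outcome labels.
    if score >= 12:
        return {"outcome": "incident", "route": "incident-response", "case": case}
    if score >= 6:
        return {"outcome": "under_observation", "route": "soc-triage", "case": case}
    return {"outcome": "benign", "route": None, "case": case}
```

The real value is not the scoring arithmetic but the shape: every signal passes the same gates in the same order, so the trail of why a case landed where it did is reconstructable.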
Speed matters, but so does learning. A simple way to balance both is to:
- Use lightweight paths for well‑understood, low‑risk patterns, often with more automation.
- Use heavier review paths for high‑impact, high‑uncertainty or regulator‑sensitive scenarios, with dual control or explicit sign‑off.
- Capture a small number of decision‑quality metrics (for example, classification time, reclassification rates, weaknesses discovered) and discuss them regularly in management reviews.
This lets you keep response times under control while steadily reducing noise, misclassification and missed opportunities to harden your environment.
Where does an ISMS such as ISMS.online sit in this picture?
Your workflow is the engine, but the ISMS is the governor and logbook:
- The A.5.25 procedure, RACI and decision criteria live in ISMS.online.
- Real tickets and incidents are linked back to those documents and the risks they address.
- Corrective actions, training improvements and tuning decisions are recorded and reviewed.
That makes it clear that A.5.25 is not just an internal flowchart but a controlled, auditable part of your Information Security Management System that evolves in a measured way.
How can you weave A.5.25 into NOC, ITIL processes and SLAs without adding frustrating bureaucracy?
You get real value from A.5.25 when it improves your existing IT service‑management flows instead of sitting alongside them as an extra checklist. The aim is one joined‑up story about how events move from monitoring to impact assessment to resolution across security, service and continuity.
How do you align SOC and NOC flows in practice?
A practical approach is:
- Map how events currently move through your ITSM tool:
- Which queues handle availability and performance issues?
- Which queues handle clear security events?
- Where do handovers between NOC and SOC currently happen (or fail to happen)?
- Mark the points where:
- A service issue genuinely needs a security view under A.5.25.
- A security issue clearly affects SLAs, continuity plans or regulatory reporting.
From there you can build a joint escalation matrix that clarifies:
- When the NOC must pull in the SOC for event classification and risk assessment.
- When the SOC must involve the NOC for capacity, failover or continuity impact.
- Which combinations of outcome and tenant type trigger specific customer communications or regulator notifications.
Publishing this matrix inside your integrated management system, runbooks and on‑call guides gives people a clear route to follow, even when pressure is high.
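One way to make such a matrix machine-readable as well as human-readable is a simple lookup keyed on outcome and tenant group. The combinations and action names below are invented examples; your own matrix would carry whatever outcomes and tenant groupings your contracts define.

```python
# Sketch of a joint SOC/NOC escalation matrix as a lookup table.
# Combinations and action names are invented examples only.
ESCALATION_MATRIX = {
    # (outcome, tenant_group): required actions, in order
    ("incident", "high-regulation"): [
        "pull_in_soc", "notify_customer", "assess_regulator_duty",
    ],
    ("incident", "low-risk"): ["pull_in_soc", "notify_customer"],
    ("service_degradation", "high-regulation"): [
        "pull_in_noc", "check_continuity_impact",
    ],
}

def required_actions(outcome: str, tenant_group: str) -> list:
    """Unmatched combinations default to logging rather than silence."""
    return ESCALATION_MATRIX.get((outcome, tenant_group), ["log_only"])
```

Keeping the same table in the runbook and in the routing rules means the matrix people read and the matrix the tooling enforces cannot drift apart unnoticed.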
How does Annex L integrated management help you here?
If you operate an Annex L‑based integrated management system, combining ISO 27001 with standards such as ISO 20000‑1 (service management) and ISO 22301 (business continuity), you already have:
- Common clause structures (context, leadership, planning, support, operation, performance, improvement).
- Natural places to align incident, continuity and change processes.
- Shared expectations for management review, documentation and continual improvement.
You can use this to:
- Harmonise categories, priorities and escalation rules for security, service and continuity incidents.
- Run joint post‑incident reviews that look at operational impact, customer experience and security posture together.
- Show auditors that the same real‑world event is reflected consistently across multiple standards, not treated differently in each silo.
That, in turn, makes it easier to maintain trust with customers who care as much about uptime and resilience as they do about pure security.
How does ISMS.online support integrated management around A.5.25?
ISMS.online is built for organisations running several Annex L standards together. In practice, that means you can:
- Place your A.5.25 event assessment procedure alongside IT service incident and continuity processes.
- Reuse roles, communication plans and improvement actions across standards.
- Demonstrate, in a single space, how one event flowed through security, service and continuity controls.
For MSPs selling themselves as strategic partners rather than commodity providers, this integrated picture helps you show that your obligations to customers are met in a coordinated, transparent way.
What tooling and automation best support A.5.25 in a multi‑tenant MSP while still protecting human judgement?
The most sustainable model for A.5.25 is one where a single “case of record” system holds the story of each significant event, while supporting tools feed it with context and automation. SIEM, SOAR, EDR and monitoring platforms do the heavy lifting on detection and enrichment, but your ability to defend decisions lives in the case of record.
How should you structure the “case of record” around A.5.25?
In many MSPs, the existing service desk or incident‑management module is the best candidate because it already:
- Assigns owners and teams.
- Tracks status, timestamps and notes.
- Aggregates reporting across tenants and service lines.
You can configure your environment so that:
- Every in‑scope alert creates or is attached to a case in that system.
- Each case captures the classification, severity, tenant, risk context and outcome required by your A.5.25 procedure.
- Automation performs safe tasks such as correlation, deduplication, noise suppression and closure for known benign patterns.
Meanwhile, high‑impact, sensitive or unfamiliar scenarios still require explicit human review or sign‑off before key decisions are finalised.
For different tenants, you maintain a single workflow design but vary:
- Thresholds for severity and escalation.
- Notification recipients and timings.
- Approval requirements for activities such as customer‑visible actions or regulator notifications.
This gives analysts one consistent mental model while still respecting each customer’s risk appetite and contractual commitments.
How do you avoid over‑automating event assessment?
It is tempting to automate as much as possible. A.5.25 pushes you to be clear about where automation stops:
- Supportive automation: enrichment, correlation, pattern recognition, automated closure of safe, well‑understood false positives.
- Reserved zones for people: decisions that materially affect confidentiality, integrity, availability, legal duties or customer trust.
In your case records, it should be obvious which steps were automated and which involved human judgement, along with who took each decision. This transparency reassures auditors and customers that you are not letting opaque automation silently make high‑stakes calls.
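In practice, that transparency can be as simple as tagging every step in the case trail with its actor and whether it was automated. The field names below are assumptions for the sketch; most case-of-record systems have an equivalent audit-trail structure you would write into instead.

```python
# Sketch: record each step in a case trail with its actor and whether
# it was automated, so automated and human decisions are always
# distinguishable later. Field names are illustrative assumptions.
from datetime import datetime, timezone

def record_step(trail: list, action: str, actor: str, automated: bool) -> None:
    """Append one auditable step to a case's decision trail."""
    trail.append({
        "action": action,
        "actor": actor,            # rule name or analyst identifier
        "automated": automated,
        "at": datetime.now(timezone.utc).isoformat(),
    })
```

With this in place, the question “was that closure a rule or a person?” is answered by the record itself rather than by memory.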
How does an ISMS such as ISMS.online help you govern automation?
Automation needs governance just as much as human procedures. ISMS.online helps you:
- Document playbooks and automation rules as formal controls, linked to risks and Annex A requirements.
- Record approvals, test results and rollback plans when you change rules.
- Feed operational metrics (for example, false‑positive rates, missed detections, reclassification trends) into management reviews and improvement actions.
This allows you to increase automation where it is safe while showing, on paper and in practice, that you are honouring the intent of A.5.25 and keeping human oversight where it belongs.
How can you prove to auditors and customers that your A.5.25 event decisions are consistent, timely and improving over time?
You demonstrate a strong A.5.25 implementation by combining a small, coherent document set, a few clear metrics and one or two detailed case walk‑throughs. Together, they show that you have a defined approach, that you follow it in live operations and that it improves with experience.
What documentation and evidence typically land well?
Instead of a long policy, focus on a tight pack that stays in sync:
- An incident‑management policy setting out your overall approach and definitions.
- A distinct A.5.25 procedure explaining how events are assessed and classified.
- SOC and NOC runbooks that mirror that procedure in shift‑friendly language.
- A RACI for assessment, escalation, closure and approval.
- A taxonomy and severity scheme aligned with your ITSM tool and customer contracts.
- A small set of anonymised example records (tickets, incident reports, weakness logs) that all use the same language and categories.
Alongside those documents, choose a few decision‑focused metrics, such as:
- Median time from detection to first classification.
- Percentage of A.5.25‑in‑scope events classified within your target time.
- Percentage of decisions later reclassified after review.
- Count of weaknesses identified through triage and the proportion that led to completed improvement actions.
These numbers tell auditors and customers that you treat triage as a managed process, not just an activity.
How do you turn real examples into convincing stories?
Pick one or two real cases that illustrate the control working as designed:
- Show the original signal and where it surfaced (tool and queue).
- Walk through enrichment and assessment steps, showing who did what and when.
- Show the decision, routing and any customer or regulatory notifications.
- Highlight any weaknesses identified and the improvement actions you logged.
- Point to where that improvement was discussed in a management review or internal audit.
When those stories line up with your written procedures and metrics, most questions about fairness, timeliness and learning become much easier to answer.
How does ISMS.online help you present that story calmly and credibly?
ISMS.online brings your policies, procedures, risks, incidents, audits and improvement records together under one roof. That means that when someone asks about A.5.25 you can:
- Open the control and procedure.
- Jump straight into linked incidents, weaknesses and corrective actions.
- Show management review notes and audit findings that reference the same control.
That ability to move smoothly through the evidence is often as persuasive as the content itself. It signals that your SOC and NOC operate inside a governed, integrated management system, not just a collection of tools and heroic individuals, and it gives customers, auditors and regulators confidence that the way you assess events today will still make sense when they review your decisions months or years from now.