Why MSP logging looks adequate, until an ISO 27001 audit
MSP logging often looks adequate until you must replay an incident and discover the logs cannot tell a clear story. This guide is general information, not legal advice, but it reflects how auditors, investigators and insurers use logs to test your services and your ISO 27001 A.8.15 implementation. Strong logging turns a confusing day into an evidence trail you can defend under pressure.
Only about one in five organisations in the 2025 ISMS.online survey said they had avoided any form of data loss in the previous year.
Good logging turns chaotic events into stories you can actually replay.
The gap between “we have logs” and “we have evidence”
The gap between logs and evidence appears when you cannot turn raw events into a clear, defensible incident timeline for auditors. They care less about the fact that tools can generate logs, and more about whether you can prove who did what, when, from where, and with what result across your MSP tooling and customer environments.
In many MSPs, those questions trigger a scramble between RMM dashboards, firewall consoles, email security portals, cloud admin centres and ticketing systems. Timestamps do not line up because devices are on different time zones or have drifted clocks. Admin actions are buried in obscure audit trails. Some critical changes live only in email or chat threads. Individually, each tool looks “fine”; together, they do not produce the coherent narrative that ISO 27001 expects under A.8.15.
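A practical first step against the timestamp problem is to normalise every source to UTC at the point of collection. The Python sketch below shows the idea; the function name, format string and per-source offset are illustrative and would come from per-source collector configuration rather than any specific product:

```python
from datetime import datetime, timezone, timedelta

def to_utc_iso(raw: str, fmt: str, utc_offset_minutes: int) -> str:
    """Parse a source-local timestamp and emit a canonical UTC ISO 8601 string.

    fmt and utc_offset_minutes would come from per-source collector config;
    real pipelines also need per-device clock-drift corrections.
    """
    local = datetime.strptime(raw, fmt)
    aware = local.replace(tzinfo=timezone(timedelta(minutes=utc_offset_minutes)))
    return aware.astimezone(timezone.utc).isoformat()

# A firewall logging in local time (UTC+2) and an RMM already logging in UTC
# normalise to the same instant, so the two events line up on one timeline:
fw = to_utc_iso("2025-03-01 14:30:00", "%Y-%m-%d %H:%M:%S", 120)
rmm = to_utc_iso("2025-03-01 12:30:00", "%Y-%m-%d %H:%M:%S", 0)
assert fw == rmm
```

Once every event carries a canonical UTC timestamp, cross-tool correlation stops depending on which console you happen to be looking at.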
Another common pattern is that logs are accessible only to a small number of senior engineers. Those people can often answer questions from memory, but that is no substitute for objective, tamper‑resistant evidence. If one of them left tomorrow, you would struggle to replay the same story from data alone. From an auditor’s point of view, that suggests your organisation is relying on individuals rather than a designed control.
How auditors actually look at your logging control
Auditors start from the control statement, not from your SIEM vendor’s feature list, and they are interested in how logging supports detection, investigation and assurance. They want to see that logs of activities, exceptions, faults and other relevant events are produced, stored, protected and analysed in a planned way that matches your stated intent.
In practice, they look for written intent first: policies, logging standards and responsibility matrices that say what should be logged, where, by whom and for how long. They then compare that intent with how your environment behaves now. If your documentation says all privileged actions on customer systems are logged centrally for at least a year, they will test that claim on one or two customers and one or two systems.
Where your documents and reality diverge, nonconformities appear. If tool defaults dictate retention but your contracts promise years of traceability, auditors will note the gap. If you rely on screenshots or spreadsheets because logs are hard to query or have been purged, they will question the effectiveness of A.8.15. This is often where MSPs realise they do not have a logging architecture; they have a pile of tools. The rest of this guide focuses on closing that gap with design you can explain and evidence you can defend.
What ISO 27001:2022 A.8.15 Logging actually requires
ISO 27001 A.8.15 expects you to design logging so you can detect incidents, investigate them and prove what happened in a way that matches your risks and services. Independent explainers of the 2022 revision, such as practical commentaries on A.8.15 from ISO 27001 specialists, restate the control in very similar terms, emphasising logging that supports timely detection, investigation and evidential reconstruction tailored to the organisation’s risk profile and services scope. That is particularly important when you operate as an MSP with shared tooling and multi‑tenant responsibilities.
For an MSP, that design has to span your internal systems and the shared or managed components of customer environments, not just your own office network. It is about building a capability you can describe and repeat, not just turning on default settings.
The control in plain language
In plain language, A.8.15 requires you to choose what to log, log it reliably, protect it and actually review it. Everything else in the control flows from those four ideas. If you focus on those decisions, the technical details become easier to manage. For MSPs this means applying the same discipline across shared tools, internal systems and customer environments.
First, you must decide which activities, exceptions, faults and events matter for security and operations. Second, those events must actually be logged on the relevant systems and services. Third, logs must be stored and protected so they cannot be altered or lost without detection. Fourth, those logs have to be analysed and reviewed so they contribute to monitoring and investigations.
For an MSP, “relevant events” clearly include more than traditional server logs. Remote scripts executed via your RMM, policy changes on shared firewalls, sign‑ins to cloud admin portals, changes to privileged groups in your identity platform and actions on your ticketing system can all materially affect customer security. A risk assessment should drive which of these are in scope, but once they are in scope they must be logged in a way that is consistent, discoverable and usable.
The control also assumes that logging is purposeful, not opportunistic. It is not enough to say “the tool can log that if we turn it on”. You are expected to show that you have chosen what to log, how to configure it and how to keep it aligned with changes in your services, customers and technology stack. That is why A.8.15 sits inside the broader management system: it must link back to risk, objectives, policies and continual improvement.
How A.8.15 connects to the rest of your ISMS
Logging does not stand alone. A.8.16, which deals with monitoring activities, covers how you review and act on logs. High‑level descriptions of ISO/IEC 27001 consistently present A.8.16 as the control that focuses on monitoring and review of security events and logs, which is why it naturally pairs with A.8.15 in most implementations.
The 2025 State of Information Security report notes that customers increasingly expect suppliers to align with formal frameworks such as ISO 27001, ISO 27701, GDPR or SOC 2 rather than relying on generic good practice claims.
Controls on access management, incident handling, business continuity and privacy each add specific expectations that your logging design must support. Auditors look for those linkages when they decide whether your A.8.15 implementation is effective.
It can help to think in terms of linked control families:
- Access management controls expect logs that show who accessed what and with which privileges.
- Incident management controls rely on logs to reconstruct events and support lessons learned.
- Business continuity controls need logs to help you understand failure modes and recovery.
- Privacy controls require that logs containing personal data are minimised, protected and retained only as long as necessary. This aligns with core data protection principles such as data minimisation and storage limitation in regulations like the GDPR, which expect organisations to avoid collecting unnecessary personal data in logs and to remove it once it is no longer needed for the stated purposes.
Together, these expectations mean your logging architecture has to serve multiple purposes at once, not just security operations. This is where a structured information security management system becomes crucial. A platform such as ISMS.online can help you express, in one place, how A.8.15 aligns with your risk treatment, your statement of applicability and your other controls. You can define which event types are security‑relevant, map them to systems and services, and record who is responsible for reviewing them and at what frequency. Many MSPs now document A.8.15 decisions alongside risk and the statement of applicability in this kind of structured ISMS, because it gives auditors a clear, consistent view.
By linking logging decisions to risk statements and objectives, you can explain to auditors why certain log sources or retention periods were chosen, instead of appearing to have simply adopted vendor defaults. When your services evolve, you can update the design centrally and cascade changes into procedures and service descriptions. That is the difference between treating A.8.15 as a clause on paper and treating it as a design discipline that makes your environment more defensible.
The MSP logging gap: single-tenant theory vs multi-tenant reality
Most generic logging advice assumes a single organisation controlling all of its systems, with one security team and one set of stakeholders. MSPs operate differently: you run shared platforms such as RMM, SOC tooling and cloud management consoles across many customers, and you provide services where log ownership is split between you and those customers. That difference has big consequences for how A.8.15 should be implemented and explained.
Shared tools and cross-tenant risk
Shared MSP tools sit at the heart of your service and your risk. Central firewalls, VPN concentrators, identity providers and administration platforms through which engineers access multiple customer environments often generate rich logs, but they also carry a risk: if data from one customer is visible while another customer’s case is on screen, you have a cross‑tenant exposure.
A multi‑tenant SIEM or log management platform that uses shared indices or queues can exacerbate this. If events are only tagged by a loosely enforced customer identifier, a misconfiguration or ingestion bug can cause events to appear in the wrong view. Discussions of multi‑tenant logging architectures and shared SIEM deployments frequently highlight this risk: weak or inconsistently applied tenant identifiers can allow mis‑tagged events to leak telemetry between tenants in ways that are hard to spot quickly.
Most organisations in the 2025 ISMS.online survey reported being affected by at least one third‑party or vendor‑related security incident in the past year.
From an ISO 27001 standpoint, that undermines confidentiality. From a contractual standpoint, it can breach commitments. From a logging standpoint, it means your architecture has not properly accounted for tenancy as a design dimension. Guidance on logging and monitoring in shared cloud environments, including work from communities such as the Cloud Security Alliance, treats cross‑tenant log exposure as both a confidentiality failure and a potential breach of contractual or regulatory obligations.
At the same time, customers may assume that you hold a complete copy of all their logs simply because you provide a managed service. In reality, you may only hold summaries or alerts from their systems, while raw logs stay in their own cloud subscriptions or data centres. If that ownership split is not clear, expectations and responsibilities around A.8.15 become muddled, and your position in a dispute or investigation becomes harder to defend.
To satisfy A.8.15 in an MSP context, you need to be very clear who owns which logs, who can access them and for what purpose. For every service offering, you should be able to answer: which systems generate logs, where those logs are stored, who has admin and read access, how they are backed up and retained, and how they are used for monitoring and incident handling.
Roughly 41% of respondents said that managing third‑party risk and tracking supplier compliance is one of their main information‑security challenges.
This clarity should be reflected in your contracts and service descriptions. If you provide a managed firewall service, for example, do you keep detailed traffic logs, only security events or just monthly summaries? If a customer wants raw logs for their own SIEM, is that explicitly in scope? When they ask for an incident report six months after the fact, which log sources will you reliably draw on?
Regulators and enterprise customers increasingly expect you to show architectural diagrams or written descriptions of your logging and monitoring design, especially if you serve critical sectors or cross‑border data flows. Policy papers on cybersecurity for critical infrastructure and cloud services, particularly in the European context, have stressed the need for documented architectures and clear logging and monitoring responsibilities as part of demonstrating operational resilience and transparency. If you cannot produce those in a timely way, it suggests that logging is emerging from tool configurations rather than from a deliberate, multi‑tenant architecture. The next section introduces a simple stack model that helps you move from ad‑hoc practice to structured design that stands up in audits and investigations.
The A.8.15 MSP Logging Stack: a 4-layer architecture
A practical way to design logging for an MSP is to think in four layers: collection, processing and normalisation, storage and protection, and access and use. Each layer has its own risks, controls and evidence, and each must work in a multi‑tenant context. When you can explain those layers clearly, auditors and customers tend to trust your design.
- Collection: how events leave systems and reach your logging platform.
- Processing and normalisation: how you parse, enrich and route log data.
- Storage and protection: how you retain logs safely with integrity and backups.
- Access and use: how people query, review and act on logs.
The four layers in practice
The collection layer covers how events leave systems and reach your logging platform. For MSPs, that might be agents on servers and endpoints, connectors on cloud services, syslog streams from network devices and API integrations for RMM and PSA tools. The key questions are whether all in‑scope systems are configured to send the right events, and whether those connections are secure and reliable.
Processing and normalisation involve parsing, enriching and routing logs once they arrive. You might add tenant identifiers, normalise usernames across systems, map vendor‑specific fields into a common schema and filter out noise. Decisions here affect how easy it is to search for “what did this engineer do across all clients yesterday” or “show me all failed admin logins on high‑risk systems in the last week”.
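A minimal normaliser might look like the sketch below. The vendor names, field mappings and schema fields are hypothetical, chosen only to illustrate the pattern: map native fields into one common schema and stamp every event with a tenant identifier as it passes through.

```python
# Illustrative field maps: vendor-native key -> common schema key.
# "acme_fw" and "example_edr" are invented names, not real products.
FIELD_MAPS = {
    "acme_fw": {"src": "source_ip", "usr": "username", "act": "action"},
    "example_edr": {"client_ip": "source_ip", "account": "username", "verb": "action"},
}

def normalise(vendor: str, tenant_id: str, raw: dict) -> dict:
    mapping = FIELD_MAPS[vendor]
    event = {common: raw[native] for native, common in mapping.items() if native in raw}
    event["tenant_id"] = tenant_id      # tenancy is a first-class field, not an afterthought
    event["source_vendor"] = vendor     # keep provenance for investigations
    return event

evt = normalise("acme_fw", "cust-042", {"src": "10.0.0.5", "usr": "jsmith", "act": "deny"})
```

With a common schema in place, the cross-client queries above become single searches rather than per-tool expeditions.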
Storage and protection deal with where logs are held, how they are protected against tampering and loss, and how retention is enforced. You need to choose data stores, backup strategies, integrity controls such as append‑only storage or hashing, and tiering schemes for hot and cold data. Finally, access and use cover roles, permissions, dashboards, alerting, investigations and reporting. This is where A.8.15 meets A.8.16: generating logs is not enough if nobody can efficiently review and act on them.
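One of the integrity techniques mentioned above, hashing, is often implemented as a hash chain: each record’s hash covers the previous record’s hash, so silently altering or deleting an earlier event invalidates everything that follows. The sketch below illustrates the principle only; a production design would also anchor the chain externally (for example in write-once storage).

```python
import hashlib
import json

def chain_hash(prev_hash: str, event: dict) -> str:
    # Hash covers the previous hash plus a canonical serialisation of the event.
    payload = json.dumps(event, sort_keys=True).encode()
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def verify_chain(events: list, hashes: list, seed: str = "0" * 64) -> bool:
    prev = seed
    for event, recorded in zip(events, hashes):
        prev = chain_hash(prev, event)
        if prev != recorded:
            return False
    return True

events = [{"action": "login", "user": "admin"}, {"action": "rule_change", "id": 7}]
hashes, prev = [], "0" * 64
for e in events:
    prev = chain_hash(prev, e)
    hashes.append(prev)

assert verify_chain(events, hashes)
events[0]["user"] = "attacker"           # tampering with any record...
assert not verify_chain(events, hashes)  # ...breaks every hash after it
```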
Turning the stack into MSP service blueprints
Once the four layers are defined for your environment, you can apply them service by service to create repeatable logging blueprints. For a managed service, you decide how events are collected, enriched, stored and accessed before you worry about individual vendor settings. That sequence makes it easier to explain your approach consistently across customers.
Take managed firewall as an example. At collection, you enable detailed security and admin logs and forward them securely to your central platform. At processing, you tag events with customer identifiers and normalise rule and interface names. At storage, you keep security events in searchable storage for an agreed period and archive raw logs longer if needed. At access and use, your SOC sees multi‑tenant dashboards while customers see their own subset via reports or portals.
The same pattern can be applied to managed Microsoft 365, endpoint security, identity services and other offerings. For each, you record in your ISMS which layers are in play, which controls are applied and how evidence is captured. This makes it much easier to onboard new customers, explain your design in tenders and prove conformity with A.8.15 during audits.
Step 1 – Describe the service scope
Define which systems, shared platforms and customer components the service covers, including any regions, tenants or data residency constraints.
Step 2 – Capture each logging layer
For that service, record how you collect events, process and normalise them, store and protect them, and give people access for monitoring and investigations.
Step 3 – Link layers to controls and evidence
Map each layer to specific ISO 27001 controls, responsibilities, procedures and records so you can show auditors exactly how the stack operates in practice.
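The three steps above can be captured as one structured record per service. This sketch uses hypothetical field names (it is not an ISMS.online schema) to show how scope, layer decisions and control mappings might sit together so that gaps surface before an auditor finds them:

```python
from dataclasses import dataclass, field

@dataclass
class LoggingBlueprint:
    """One record per managed service; field names are illustrative only."""
    service: str
    scope: list                  # step 1: systems, tenants, regions covered
    layers: dict                 # step 2: decisions per logging layer
    controls: dict = field(default_factory=dict)  # step 3: layer -> controls/evidence

    def gaps(self) -> list:
        # Any layer with no mapped control is a gap worth flagging before audit.
        return [layer for layer in self.layers if layer not in self.controls]

bp = LoggingBlueprint(
    service="managed-firewall",
    scope=["eu-west", "cust-001", "cust-002"],
    layers={
        "collection": "syslog over TLS to regional collector",
        "processing": "tenant tagging, rule and interface normalisation",
        "storage": "12 months hot, longer archive per contract",
        "access": "SOC multi-tenant dashboards; per-customer reports",
    },
    controls={"collection": ["A.8.15"], "storage": ["A.8.15", "A.8.13"]},
)
assert bp.gaps() == ["processing", "access"]
```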
This structured approach also makes resilience questions more concrete. If collection agents fail, how are logs buffered? If a region’s log store is unavailable, how do you avoid silent gaps? If your SIEM is down, how do you maintain minimum logging and retention obligations? By treating your logging as a stack, you can plan for these scenarios explicitly instead of discovering weaknesses only when something goes wrong.
Designing multi-tenant collection, aggregation, storage and access
With the stack in mind, you can now address the uniquely multi‑tenant aspects of MSP logging: keeping customer data separated, respecting regional boundaries and aligning technology design with contracts and privacy obligations. These decisions have a direct impact on how credible your A.8.15 implementation looks to auditors, customers and regulators.
Collection and aggregation in a multi-tenant world
In a single‑organisation environment, you might simply point all systems at a central log collector. In an MSP, you also have to consider which customers share collectors, which regions data flows through and how you tag and verify tenant identifiers on incoming events. A sensible starting point is to define standard collection patterns per service and per region.
For example, you might use region‑specific ingestion endpoints so that European customers’ logs do not leave the region unless explicitly agreed. You might require every log message to include a tenant identifier, validated at the edge before being accepted. You might isolate particularly sensitive customers in their own pipelines. These decisions help prevent accidental data mingling and support data residency commitments.
Aggregation and normalisation then need to respect those same boundaries. When you bring logs together for correlation, are you aggregating across all customers, or only within defined groups? Can a query ever span customers without explicit authorisation? If your SOC runs global detection content, how do you ensure that the results analysts see are scoped to their approvals?
A few questions can anchor your design:
- Which services share collectors, and where are those collectors located?
- How do you validate tenant identifiers on ingestion?
- Under what conditions can queries or alerts span multiple customers?
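As a sketch of the second question, validating tenant identifiers at the ingestion edge might look like the following. The tenant registry, region names and acceptance rules are invented for illustration; the point is that untagged or inconsistently tagged events are rejected before they can mingle.

```python
# Hypothetical tenant registry: tenant -> home region for its collectors.
KNOWN_TENANTS = {"cust-001": "eu-west", "cust-002": "eu-west", "cust-100": "us-east"}

def accept(event: dict, endpoint_region: str) -> bool:
    tenant = event.get("tenant_id")
    if tenant not in KNOWN_TENANTS:
        return False  # unknown or missing tenant: route to quarantine, never to storage
    if KNOWN_TENANTS[tenant] != endpoint_region:
        return False  # arrived at the wrong regional endpoint: possible mis-tagging
    return True

assert accept({"tenant_id": "cust-001", "msg": "deny"}, "eu-west")
assert not accept({"tenant_id": "cust-100", "msg": "deny"}, "eu-west")  # wrong region
assert not accept({"msg": "deny"}, "eu-west")                           # untagged
```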
Clear, documented answers to these questions are key to satisfying both A.8.15 and your confidentiality obligations, and they give you a defensible story if a regulator or customer probes how your multi‑tenant logging works.
Storage, access control and privacy
On the storage side, multi‑tenant design decisions include whether to use shared indices with strong logical separation or separate data stores per customer. Shared storage can be more efficient but demands rigorous guardrails on indexing, querying and export. Separate storage can be simpler to reason about, at the cost of additional infrastructure. Either way, you must be able to show how you prevent one customer’s data being retrieved in the context of another.
Access control should mirror your service model. SOC analysts may need read access across many tenants, but only a very small group should have administrative rights to change logging configurations or retention. Customer staff should see only their own logs, with roles constrained further by least privilege principles. All access to the logging platform should itself be logged and reviewed, especially for sensitive actions such as changing retention settings or deleting data.
Privacy adds another dimension. Logs often contain personal data such as usernames, IP addresses, device identifiers and, in some cases, interaction content. You need to decide which fields are necessary for security and operational purposes, and where anonymisation, pseudonymisation or aggregation are appropriate. You also need to ensure retention periods and data locations are consistent with privacy laws and agreements. Those choices should be documented so that your A.8.15 design remains compatible with your privacy controls, and so you can defend your approach if it is challenged.
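Pseudonymisation of log fields is often done with keyed hashing: the same input always yields the same token, so events remain correlatable across the log store, but the raw personal data is never stored, and destroying or rotating the key effectively anonymises the history. A minimal sketch follows; the key handling shown is illustrative only, and a real deployment would hold per-tenant keys in a secrets manager.

```python
import hashlib
import hmac

def pseudonymise(value: str, secret: bytes) -> str:
    # Keyed hash (HMAC-SHA256), truncated to a short token for readability.
    return hmac.new(secret, value.encode(), hashlib.sha256).hexdigest()[:16]

SECRET = b"example-only-key"  # illustrative; use a managed, rotatable key in practice

e1 = {"user": pseudonymise("alice@example.com", SECRET), "action": "login"}
e2 = {"user": pseudonymise("alice@example.com", SECRET), "action": "rule_change"}
assert e1["user"] == e2["user"]                               # still correlatable
assert e1["user"] != pseudonymise("bob@example.com", SECRET)  # distinct identities stay distinct
```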
What to log: must-have vs nice-to-have MSP log sources
No MSP can or should log everything. The art is to select a defensible minimum set of log sources that allow you to detect and investigate meaningful incidents, and then add further sources where risk and budget justify it. ISO 27001 expects this to be risk‑based and documented, and auditors often ask why particular sources were prioritised over others.
Must-have log sources for MSPs
Some log sources are extremely hard to justify omitting from your A.8.15 implementation. A simple mental test is to imagine a serious incident and ask whether you could credibly reconstruct what happened without those logs. If the answer is no, that source probably belongs in your baseline design. Practical A.8.15 implementation guides from ISO 27001 consultancies often stress that identity systems, boundary controls, core security tooling and administrative jump hosts belong in this baseline set for a credible certification effort.
Key categories usually include:
- Identity and access systems: directories, single sign‑on and multi‑factor providers.
- Network and boundary controls: firewalls, VPN gateways and intrusion tools.
- Security tooling: endpoint, email and web protection platforms.
- Administrative tools and jump hosts: RMM, privileged access tools, bastions and cloud consoles.
- Core service platforms: managed cloud suites, key applications and ticketing or PSA systems.
Identity and access systems are at the top of that list. Without logging from directory services, single sign‑on providers and multi‑factor authentication platforms, you cannot reliably see who signed in, from where and with what level of privilege.
Network and boundary controls are another must‑have category: firewalls, VPN gateways, secure web gateways and intrusion detection or prevention systems. These logs show which traffic was allowed or blocked, which connections came from unusual locations and when rules or policies were changed. Security tooling such as endpoint protection, email security and web filters provides rich signals about threats and responses.
Administrative tools and jump hosts used by your engineers deserve special attention. Actions taken through RMM platforms, privileged access management tools, bastion hosts and cloud management consoles should be logged in enough detail to show what actions were performed on which systems, under which identity. Finally, key service platforms such as hosted Microsoft 365, core applications you manage and your ticketing or PSA system provide important context about changes and customer interactions.
If any of these categories are missing, you will struggle to answer basic questions during incidents and audits. Industry commentary on incident response and breach investigations regularly notes that lacking identity, network or security‑tool logs makes it very difficult to reconstruct events and satisfy detailed questioning from investigators or auditors. Making these categories mandatory in your A.8.15 design gives you a solid foundation and makes further enhancements easier to justify.
Nice-to-have sources and when to add them
Beyond the essentials, there are many log sources that can add value but may not be justifiable in all cases. Generic application logs from desktop software, detailed debug logs from development environments and verbose metrics from low‑risk systems can quickly consume storage and analyst time without significantly improving your ability to detect or investigate incidents.
That does not mean they are always out of scope. For high‑risk customers, bespoke applications or regulated workloads, you may decide that additional logging is necessary. The key is to record that rationale in your risk assessment and statement of applicability, and to configure collection and retention deliberately rather than sporadically.
A useful technique is to define log source tiers in your service catalogue. A base tier might include all must‑have sources and be suitable for standard customers. Higher tiers could add application‑specific logs, more detailed audit trails or longer retention. Each tier should describe not just the volume of data, but the detection coverage and investigation depth it enables. That way, sales, operations and customers can understand what they gain as they move up the tiers.
A small comparison table can help your team think about sources pragmatically:
| Tier | Typical sources | Primary purpose |
|---|---|---|
| Core (must‑have) | Identity, firewalls, VPN, EDR, RMM, admin tools | Detection and basic forensics |
| Enhanced | Key application logs, cloud workload logs | Deeper root‑cause analysis |
| Specialist | Debug logs, niche system logs | Rare, complex or regulated cases |
This is illustrative only; your actual tiers and sources should follow your own risk profile and services. The important part is that A.8.15 becomes a structured set of choices rather than an implicit side effect of whichever systems happen to have logging turned on. When you can explain those choices, they become much easier to defend to auditors, customers and regulators.
How long to keep it: a risk-based retention model for MSPs
Choosing retention periods is one of the most sensitive parts of A.8.15 for an MSP. You are balancing regulatory expectations, incident investigation needs, privacy rules and storage cost, and your choices will be judged on how risk‑based and defensible they are. Customers and auditors will both scrutinise these decisions closely during reviews.
Designing a tiered retention model
A practical way to approach retention is to group logs into classes and assign tiers. For example, you might treat security and administrative logs as one class, customer service and ticketing logs as another and low‑value technical logs as a third. For each class, you decide how long data should be quickly searchable, how long it should remain available in slower or archived form and when it should be deleted or anonymised.
To make those decisions, work backwards from your risks and obligations. Consider how long attacks typically go undetected in your customer base, how long investigations and legal processes tend to last and what regulators or contracts expect. If your customers operate in sectors where incidents are sometimes uncovered many months after the initial compromise, very short retention periods will be difficult to defend. Cloud provider guidance on log retention commonly recommends a similar pattern, with high‑value logs kept hot and searchable for a period and then moved into lower‑cost archival storage that still allows retrieval for investigations or compliance queries.
A common pattern is to keep high‑value logs (identity, security, admin actions) hot and searchable for several months, then move them to less expensive storage while keeping them accessible for one to several years. Lower‑value logs might have much shorter retention. Whatever numbers you choose, document how they were derived, what risks they address and who approved them. That makes discussions with auditors, customers and privacy officers much more straightforward.
Step 1 – Classify log types
Group logs into clear classes such as security and admin, customer service and ticketing, and low‑value technical or diagnostic data.
Step 2 – Decide hot vs archive retention
For each class, decide how long data should stay quickly searchable and how long it should remain in slower or archived storage.
Step 3 – Document rationale and approvals
Record why you chose each retention period, which risks or obligations it addresses and who authorised it, so you can explain it during audits.
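The three steps above can be expressed as a small policy table plus a tiering function. The classes and periods below are purely illustrative; your own numbers should come from the risk analysis and obligations described earlier.

```python
from datetime import date

# Illustrative retention policy: log class -> (hot days, total days before deletion).
RETENTION = {
    "security_admin": (180, 365 * 2),  # searchable 6 months, archived to 2 years
    "service_ticketing": (90, 365),
    "diagnostic": (30, 30),            # no archive tier at all
}

def storage_tier(log_class: str, event_date: date, today: date) -> str:
    hot_days, total_days = RETENTION[log_class]
    age = (today - event_date).days
    if age <= hot_days:
        return "hot"
    if age <= total_days:
        return "archive"
    return "delete"  # or anonymise, depending on the class's privacy requirements

today = date(2025, 6, 1)
assert storage_tier("security_admin", date(2025, 3, 15), today) == "hot"
assert storage_tier("security_admin", date(2024, 6, 1), today) == "archive"
assert storage_tier("diagnostic", date(2025, 3, 15), today) == "delete"
```

Encoding the policy this way also makes the approval record concrete: the table itself is the artefact that risk owners sign off and auditors inspect.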
Balancing regulation, investigation and cost
Retention is not just a technical or compliance decision; it is also a commercial one. Longer retention means more storage, backups and indexing, which may impact your margins if not priced appropriately. Short retention may save money now but increase the risk that you cannot support an investigation or demonstrate due diligence later.
A strong majority of organisations in the 2025 State of Information Security report said the speed and volume of regulatory change are making compliance harder to sustain.
Your service catalogue should therefore make retention visible. For each logging tier or service package, state which classes of logs are kept for how long, in which form. This lets customers choose based on their risk appetite and regulatory profile. It also gives your finance and operations teams a clearer view of the cost implications of each option.
Privacy rules add another layer. Many jurisdictions require that personal data be kept no longer than necessary for the purposes for which it was collected. This reflects principles such as storage limitation in data protection laws like the GDPR, which explicitly state that personal data should not be kept indefinitely and must be erased or anonymised once it is no longer needed for the purposes originally defined.
That can sit uncomfortably with the desire to keep security logs for many years. Techniques such as pseudonymising certain fields after a period, aggregating events into counts or dropping low‑value fields can help you reconcile these pressures.
The key test is whether your retention model would look reasonable and defendable if a regulator, customer or court asked you to justify it. If you can explain the balance you struck between regulation, investigation needs, privacy and cost, and show that you apply it consistently, you are in a much stronger position than if retention is simply “whatever the tool was set to when we installed it”.
Book a Demo With ISMS.online Today
ISMS.online helps you turn A.8.15 from scattered tool settings into a governed, audit‑ready control across all of your MSP services, so you can face incidents and audits with a clear, defensible logging story. A well‑designed logging architecture and retention model will only deliver its full value if it is embedded in your wider management system and stays aligned with how your services evolve.
Why structure matters more than one more tool
ISMS.online gives you a structured place to capture your logging design across all your MSP services, instead of relying on a mixture of spreadsheets, slide decks and individual knowledge. You can define your A.8.15 control intent, list in‑scope log sources, describe your four‑layer architecture and record how multi‑tenant collection, storage and access are handled for each offering.
You can also model your retention strategy explicitly. For each log class and service tier, you document the agreed retention periods, the storage tiers used and the rationale behind them. When auditors ask why a certain set of logs is kept for a specific duration, you can point to a single, governed record that ties decisions back to risk, contracts and privacy obligations. That reduces the time and stress of audit preparation and helps avoid surprises.
Crucially, ISMS.online is designed to integrate with, not replace, your existing operational tools. Your SIEM, RMM, ticketing platform and cloud services remain where logging and monitoring happen. The ISMS provides the framework around them: who is responsible, which procedures apply, how reviews are recorded and how improvements are tracked. That separation makes it easier to evolve your tooling without losing governance of the control itself.
What you gain by centralising your logging design
When you centralise your A.8.15 approach in ISMS.online, you give everyone a single, shared view of how logging and retention work across your MSP. That clarity makes responsibilities, potential gaps and design priorities far easier to see, and it becomes simpler to show auditors and customers how your approach operates in practice.
Security leaders can see at a glance which services are fully covered by the four‑layer stack and where gaps remain. Managing directors can see how logging and retention choices align with business risk appetite and commercial priorities. Operations managers can map daily checks and reviews to controls and keep evidence organised, so the burden of audit preparation is spread across the year instead of compressed into a few stressful weeks.
You can start small. Choose one flagship service, such as managed Microsoft 365 or managed firewalls, and capture its logging architecture, log sources and retention settings in the platform. Use that as a pilot to identify inconsistencies, missing responsibilities or undocumented assumptions. Once you are comfortable, apply the same pattern to other services. Over time, you build a complete, auditable picture of logging across your MSP.
Choose ISMS.online when you want logging and retention to become part of a coherent, audit‑ready information security management system instead of a loose collection of tool settings. If you value faster, calmer audits, clearer service definitions and the ability to show customers and regulators exactly how you meet A.8.15, booking a short demonstration is a sensible next step. It will help you see how the ideas in this guide translate into practical workflows, records and dashboards tailored to MSPs like yours.
Book a demo

Frequently Asked Questions
What does ISO 27001 A.8.15 actually change for how your MSP approaches logging?
ISO 27001 A.8.15 expects your MSP to treat logging as a designed control that supports security and accountability, not a by‑product of whichever tools you happen to use. In practice, that means deciding what must be logged, where, how long for, how it is protected, and how your team uses those records to detect and investigate issues across all in‑scope customers.
How should you translate A.8.15 into a simple, usable logging standard?
A workable approach is to turn A.8.15 into a short, opinionated standard that engineers, analysts and service owners can actually follow:
- Define the scope – which customers, environments and services are in your ISO 27001 boundary.
- Specify event categories that always need logging, such as administrative changes, authentication and access attempts, policy and configuration changes, and security alerts.
- List the minimum log sources by service type (for example identity, firewalls/VPNs, EDR, M365, RMM, PSA).
- Set clear log integrity expectations – time synchronisation, restricted access, immutability or write‑once storage where feasible, and backup expectations.
- Describe operational use – who reviews which logs, at what frequency, how exceptions are escalated, and where findings are recorded.
That standard then becomes the reference point for every managed service. When an auditor asks how your “Managed Microsoft 365” or “Managed Firewall” offerings meet A.8.15, you can show:
- The logging standard in your ISMS.
- The service blueprint that maps each offering to the standard.
- Evidence of real reviews and investigations linked to those services.
Capturing the standard, service mappings and operational records in ISMS.online keeps everything traceable and makes it clear that logging is embedded in your information security management system, not scattered across tool settings and individual notebooks.
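As an illustration, the event categories and minimum log sources from such a standard can be captured as structured data and checked against each service's actual coverage. This is a minimal Python sketch; the category names, source lists and the `missing_sources` helper are hypothetical examples of how you might encode your own standard, not values A.8.15 prescribes:

```python
# Hypothetical machine-readable slice of an A.8.15 logging standard.
# All names below are illustrative assumptions, not prescribed values.
EVENT_CATEGORIES = [
    "administrative_change",
    "authentication",
    "access_attempt",
    "configuration_change",
    "security_alert",
]

# Minimum log sources per service type, mirroring the standard's bullet list.
MINIMUM_SOURCES = {
    "identity": ["sso", "mfa", "pam"],
    "network": ["firewall", "vpn"],
    "endpoint": ["edr"],
    "saas": ["m365"],
    "tooling": ["rmm", "psa"],
}

def missing_sources(service_sources, service_type):
    """Return the minimum sources a service is missing for its type."""
    required = set(MINIMUM_SOURCES.get(service_type, []))
    return sorted(required - set(service_sources))

# Example: a managed firewall offering that only ships firewall logs
gap = missing_sources({"firewall"}, "network")  # flags the missing VPN logs
```

Encoding the standard this way makes "does service X meet the standard?" a repeatable check rather than a judgement call made differently by each engineer.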
How can you quickly test whether your logging meets the intent of A.8.15?
A useful internal test is to pick a recent change or security‑relevant event and ask your team to reconstruct it using logs alone:
- Who performed the action?
- When and from where did it happen?
- Which accounts, systems or tenants were affected?
- What was the outcome and what did your team do in response?
If you can answer those questions confidently, using defined log sources and review processes, you are heading in the right direction. If the answers are slow, incomplete or inconsistent between customers, that is a clear signal to tighten your standard, coverage or review discipline and capture those improvements as risks and actions in ISMS.online so you can show progress over time.
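The replay test above can be sketched in code. Assuming events have already been normalised into core fields (user, source, target, action, result, tenant, as discussed later in this guide), a hypothetical `reconstruct` helper answers the four questions from logs alone; the sample records and field names are illustrative:

```python
from datetime import datetime

# Hypothetical normalised events; values are illustrative sample data.
events = [
    {"ts": "2025-03-01T09:15:40Z", "user": "admin@msp", "source": "10.0.0.5",
     "target": "tenant-a/firewall-1", "action": "policy_push",
     "result": "success", "tenant": "tenant-a"},
    {"ts": "2025-03-01T09:14:02Z", "user": "admin@msp", "source": "10.0.0.5",
     "target": "tenant-a/firewall-1", "action": "rule_change",
     "result": "success", "tenant": "tenant-a"},
]

def reconstruct(events, tenant, target):
    """Build a timeline answering who, when, from where, what and outcome,
    using only the log records themselves."""
    hits = sorted(
        (e for e in events if e["tenant"] == tenant and e["target"] == target),
        key=lambda e: datetime.fromisoformat(e["ts"].replace("Z", "+00:00")),
    )
    return [
        f'{e["ts"]} {e["user"]}@{e["source"]} -> {e["action"]} ({e["result"]})'
        for e in hits
    ]

timeline = reconstruct(events, "tenant-a", "tenant-a/firewall-1")
```

If a query like this cannot be answered from your real data, the gap is usually in coverage or normalisation, not in the analyst.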
How should an MSP design multi‑tenant logging so it is secure, scalable and audit‑ready?
A practical MSP logging design normally breaks into four layers: collection, processing and normalisation, storage and protection, and access and use. Thinking in these layers helps you separate customers cleanly, respect regional data requirements and give auditors a clear story.
What design decisions matter most at each layer for multi‑tenant ISO 27001 logging?
At the collection layer, focus on how each tenant’s events reach your logging platform:
- Choose per tool whether to use agents, APIs or syslog.
- Provide region‑specific collection endpoints so EU, UK and US logs can remain in‑region if required.
- Enforce time synchronisation so timelines line up during an investigation.
At the processing and normalisation layer, make logs usable and safe in a shared platform:
- Ensure every record carries a reliable tenant identifier and, where relevant, environment or service tags.
- Normalise core fields (user, source, target, action, result) so analysts can search consistently across sources.
- Treat cross‑tenant queries and global searches as privileged operations, with their own access rules and logging.
At the storage and protection layer, design for separation and integrity:
- Partition storage by tenant, region or both, using indices, buckets or databases aligned with your architecture.
- Apply integrity measures such as append‑only storage, immutability flags or hash‑chaining where the tools support them.
- Link hot and archive retention to log classes, contracts and sector norms so you can defend your choices.
At the access and use layer, make sure day‑to‑day work never blurs customer boundaries:
- Define which roles can see which tenants; keep cross‑tenant or global roles rare, justified and monitored.
- Structure alert queues, reviews and investigations so engineers can work deeply within a tenant without exposing another customer’s data.
- Decide how often you share summaries, trends or incident timelines with customers and how that aligns with your service levels.
Documenting these decisions as part of your A.8.15 control, then tying them to concrete configurations, playbooks and review records in ISMS.online, turns multi‑tenant logging from something you hope is safe into something you can describe and defend.
How do you prove tenant separation to auditors and larger customers?
You make tenant separation much more convincing when you can show a clean line from policy to architecture to access control to real investigations:
- Policies and standards state that customer logs are logically or physically separated and that cross‑tenant access is tightly controlled and monitored.
- Architecture diagrams illustrate how that works in your chosen platform, including regional storage for regulated customers.
- Access records show which analysts have which tenant scopes, who approves cross‑tenant roles, and how those roles are reviewed.
- Incident and investigation logs demonstrate that your team can perform deep analysis within one tenant’s data without touching others.
Managing those documents, records and links in ISMS.online under A.8.15 gives you a single place to walk auditors and customers through your story without exposing raw log data or every detail of your tooling.
Which log sources should an MSP treat as non‑negotiable under ISO 27001 A.8.15?
A.8.15 is intentionally flexible and asks you to log “activities, exceptions and information security events” in line with risk. For managed service providers, there is a core set of sources that almost always need to be in scope if you want reliable investigations and a comfortable ISO 27001 audit.
What does a sensible MSP logging baseline usually include?
Most MSP environments benefit from a baseline that covers at least five categories:
- Identity and access: directory platforms, SSO, MFA, privileged access management and any just‑in‑time admin tooling.
- Network and boundary controls: firewalls, VPNs, secure web gateways, key routers and reverse proxies that guard external and internal access.
- Endpoint and workload security: endpoint protection or EDR, email and web security, and cloud workload protection tools.
- Administrative and orchestration tools: RMM platforms, hypervisors, cloud management consoles, jump hosts, bastions and automation pipelines that can change customer environments.
- Core customer platforms and your own service tooling: major SaaS such as Microsoft 365 or Google Workspace, plus PSA and service desk systems that record what was changed and why.
With these in place, your team can normally answer “how did the attacker get in, what did they do, and which customers or systems were affected?” Without them, both incident handling and audits quickly slide into speculation, which undermines confidence in your managed services as well as your compliance.
How can you manage lower‑value log sources without undercutting your security story?
Not every potential log source justifies collection for every customer. A practical way to avoid waste while staying defendable is to group optional sources into value‑based tiers:
- High‑value logs that materially improve early detection or context during most incidents.
- Specialist forensic logs that are mainly useful for complex, high‑impact cases.
- Low‑value or noisy logs that add volume and cost with limited investigative benefit.
You can then align these tiers to your service catalogue:
- Baseline services include the non‑negotiable sources.
- Premium or “enhanced security” services add specific high‑value and forensic tiers.
Documenting those tiers, and the risk‑based justification for each, in ISMS.online under your logging standard gives you a clean way to explain to auditors and customers why one service includes richer logging than another and helps your commercial team treat logging as an explicit part of each managed service rather than an invisible cost.
How should an MSP define and manage log retention so it satisfies A.8.15 and data protection law?
Because ISO 27001 does not prescribe retention periods, A.8.15 puts the responsibility on you to define and justify how long you keep different types of log data. As an MSP, you have to balance investigation needs, customer and sector expectations, contracts and privacy rules across multiple tenants and regions.
How can you build a retention model that feels reasonable and defensible?
Instead of setting retention per individual source, most MSPs find it more manageable to work with a handful of log classes, such as:
- Identity and access activity.
- Security events and alerts.
- Administrative and change activity.
- Service and ticket records.
- Low‑value technical logs.
For each class you can then decide:
- How long logs stay searchable in your primary tools for detection and day‑to‑day investigations.
- How long they remain in archive storage for rare, complex cases, legal holds or contractual reasons.
Those periods should be tied to:
- Typical detection and investigation timelines for serious attacks.
- Sector expectations and regulatory guidance in the industries you serve.
- Commitments in customer contracts and, where relevant, cyber insurance policies.
Lower‑value technical logs can usually be kept for shorter periods to manage storage and reduce unnecessary exposure of personal data, while high‑value security and access logs generally justify longer retention.
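A retention model built on log classes lends itself to a simple matrix. In this hypothetical Python sketch the class names follow the list above, but the periods are placeholders; yours should come from the risk, contractual and regulatory analysis described in this section:

```python
# Hypothetical retention matrix; periods are illustrative placeholders,
# not recommendations.
RETENTION = {
    "identity_access": {"hot_days": 90,  "archive_days": 365},
    "security_events": {"hot_days": 90,  "archive_days": 365},
    "admin_change":    {"hot_days": 180, "archive_days": 730},
    "service_tickets": {"hot_days": 365, "archive_days": 1095},
    "low_value":       {"hot_days": 14,  "archive_days": 0},
}

def disposition(log_class, age_days):
    """Where a record of a given class and age should live:
    searchable hot storage, cheaper archive, or deleted."""
    policy = RETENTION[log_class]
    if age_days <= policy["hot_days"]:
        return "hot"
    if age_days <= policy["hot_days"] + policy["archive_days"]:
        return "archive"
    return "delete"
```

Expressing the matrix as data also makes it easy to export the same table into contracts, your service catalogue and your ISMS without the copies drifting apart.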
How do you balance retention with privacy requirements and still support deep investigations?
Retention becomes a privacy problem when it is extended purely because storage is cheap. To keep both ISO 27001 and data protection regimes such as GDPR or CCPA satisfied, you can:
- Identify which log classes hold personal data and ensure you can explain, in risk and legal terms, why those retention periods are “no longer than necessary”.
- Apply techniques such as pseudonymisation or tokenisation for long‑term archives so investigators can still join events when required without exposing clear identifiers to every user or tool.
- Replace detailed older records with aggregated statistics or summaries once they are no longer realistically needed for incident response or legal evidence.
- Regularly test retrieval and analysis of archived logs for representative incident scenarios so you know your retention design works in practice, not just on paper.
Documenting your log classes, retention periods, risk reasoning and approvals in ISMS.online under both A.8.15 and relevant privacy controls gives you an audit trail you can show to auditors, regulators and customers who want to understand why a particular type of log is kept for a given period.
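The pseudonymisation technique mentioned above can be sketched with keyed hashing: identifiers become stable tokens that still let investigators join events across an archive, while the key lives in a separate secret store. The key value, field names and token length here are all illustrative assumptions:

```python
import hashlib
import hmac

# Placeholder only - in practice the key lives in a secrets manager with
# access restricted to investigators, not in code.
PSEUDO_KEY = b"replace-with-a-managed-secret"

def pseudonymise(identifier):
    """Stable keyed token: the same identifier always yields the same
    token, but recovering the original requires the key (or a separately
    held lookup table)."""
    digest = hmac.new(PSEUDO_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

def archive_record(event):
    """Copy a normalised event, replacing the clear user identifier with
    its pseudonymous token before long-term archiving."""
    archived = dict(event)
    archived["user"] = pseudonymise(event["user"])
    return archived
```

Because the token is deterministic per key, analysts can still trace "this actor across six months of archives" without the archive itself exposing clear identifiers.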
How can an MSP convincingly evidence A.8.15 compliance during an ISO 27001 audit?
Auditors tend to look at A.8.15 through three lenses: design, operation and improvement. You are not judged on owning a particular SIEM, but on whether you can show that logging is intentionally designed, running as described, and being reviewed.
What should you prepare before an A.8.15‑focused audit?
A concise evidence pack mapped to A.8.15 makes the audit conversation far smoother. It typically includes:
- A logging policy or standard that explicitly references A.8.15 and related monitoring and incident handling controls.
- Service‑level logging blueprints that explain, for each major managed service, which log sources are used, how multi‑tenant separation works, and how retention is applied.
- A log classification and retention matrix that shows how long each class of logs is kept and why.
- Your Statement of Applicability with a clear view of all logging‑related controls and your implementation or exclusion decisions.
For operation, you can prepare:
- Configuration screenshots or exports that prove key sources, tenant identifiers, integrity options and retention settings are enabled.
- Samples of scheduled log reviews, alert queues and security dashboards, including who reviewed them and what they did with the findings.
- A small number of investigation records where logs played a central role in understanding or resolving an issue.
- Management review minutes or improvement records where logging performance, coverage or incidents have been discussed.
If those artefacts live in ISMS.online and are linked to A.8.15 and the relevant services, you can walk auditors from policy through to working examples in a logical way rather than hunting around in email or local folders.
How can you show logging as a maturing control rather than a static requirement?
Auditors are usually more relaxed when they see that logging is part of an ongoing improvement cycle. You can demonstrate that by:
- Recording log‑related risks, issues and changes as part of your risk and improvement processes, with owners and due dates.
- Showing a review schedule for your logging standard, retention model and service mappings, and evidence that reviews result in updates.
- Capturing lessons learned from investigations where logging either performed well or exposed a gap, then linking those lessons to changes in configuration, coverage or process.
Being able to step through these elements in ISMS.online under A.8.15 helps shift the tone of the audit from “have you ticked this box?” to “how are you using logging to strengthen your managed services over time?”, which supports the reputation you want as a serious MSP.
How does ISMS.online help your MSP turn logging and retention into a repeatable, trusted service?
For many MSPs, the challenge with A.8.15 is not whether a tool can collect logs, but whether logging and retention are consistent, explainable and commercially sustainable across customers. ISMS.online helps by treating your logging approach as a governed part of your management system instead of a set of scattered practices.
How can ISMS.online support your logging design, responsibilities and evidence?
Within ISMS.online you can bring A.8.15 under the same control as the rest of your ISO 27001 work:
- Record your logging policy and standard once, link them directly to A.8.15 and related controls, and make them visible to engineers, analysts and service owners.
- Map each managed service to that standard so you always know which log sources, architectures and retention rules apply to “Managed M365”, “Managed Firewall”, “Managed Endpoint” and similar offers.
- Maintain a single log classification and retention matrix, linked to risks, contracts and regulations, with approvals and review dates clearly recorded.
- Assign responsibilities for log reviews, exception handling and improvement tasks and track their completion with built‑in workflows and reminders.
- Attach architecture diagrams, review records, investigation summaries and management meeting notes directly to A.8.15 and to individual services so evidence is easy to assemble for auditors, insurers or important customers.
Because everything sits in one governed environment, updates you make to logging design and retention automatically align with your wider ISMS rather than being lost in spreadsheets or personal folders.
What practical benefits will your team and customers notice day to day?
When logging and retention are managed through ISMS.online, security leaders work from a single, reusable blueprint for A.8.15 across tenants and regions, engineers follow clear standards and schedules instead of ad‑hoc habits, and commercial teams can explain how logging supports each service level rather than relying on vague promises.
Over time, that combination often shifts how customers and auditors see your MSP. Instead of hoping that “the tools log enough by default”, you become the provider who can explain exactly how logging is designed, operated and improved, and who can show that story clearly in your ISMS. Taking the time to capture your current logging approach and next improvements in ISMS.online is a straightforward move if you want A.8.15 to support your reputation rather than feel like another compliance hurdle.