Why Annex A.5.35 Hurts MSPs When Internal Audits Look Like ‘Extra Work’
Annex A.5.35 hurts MSPs when internal audits land as surprise, one‑off projects instead of a normal part of service delivery. When reviews are dropped on engineers at the last minute, they feel like bureaucratic “extra work,” even though the same control can become one of your strongest commercial assets. Independent review of information security often feels like a luxury for big enterprises, but Annex A.5.35 pushes it squarely into your MSP’s day‑to‑day reality: you are expected to prove that your own security controls work, not just that they are documented. Standard commentary on ISO/IEC 27001:2022 Annex A.5.35 underlines this by stressing periodic, objective review of the suitability, adequacy and effectiveness of your information security arrangements, not just the existence of policies.
Independent review reshapes your obligations because you must show that your approach still works in the real world, and that someone suitably independent has checked it. If you reshape independent review into a predictable, low‑friction routine, you protect both your customers and your team's sanity, while strengthening your position with certification auditors and enterprise customers who now routinely look for concrete A.5.35 evidence. Guidance on service‑organisation reporting, such as the AICPA's material on SOC reporting, reflects the same expectation: user entities and their auditors increasingly expect evidence that controls are operating effectively, not just tidy policy documents.
The 2025 survey indicates that customers increasingly expect their suppliers to align with formal frameworks such as ISO 27001, ISO 27701, GDPR or SOC 2, rather than relying on generic ‘good practice’ claims.
Good security reviews feel fair, turning everyday good habits into visible evidence.
The real business problem A.5.35 exposes for MSPs
A.5.35 exposes the gap between “we say we are secure” and “we can prove our security actually works in practice.” Independent review matters because clients, regulators, insurers and partners are no longer satisfied with policy statements; they increasingly ask how you check that your security actually works. Regulatory communications on cybersecurity, such as the US Securities and Exchange Commission’s cybersecurity spotlight materials, consistently stress the need for demonstrable control effectiveness rather than relying on policy alone, and that thinking flows through to how they look at suppliers.
When an enterprise prospect sends a detailed questionnaire or their auditor visits, they are effectively testing whether Annex A.5.35 exists in practice: is someone objective reviewing your approach to security at planned intervals and after major changes, and does that review lead to real improvement?
If the answer is vague or buried in ad‑hoc documents, the deal and your credibility are at risk. Behind the standard’s wording is a simple question: can you show that your security practice is more than individual heroics and tool dashboards? If the only assurance you can offer is “our senior engineer keeps an eye on things,” you are relying on trust, not evidence. Independent review forces you to treat your security management system as a product that must be tested and inspected like any other service you sell. That might feel uncomfortable, but it is also where you can start to differentiate your MSP.
Why traditional internal audits feel painful for engineers
Traditional internal audits feel painful for engineers because they interrupt delivery work without offering visible benefits. Internal audits are often run as one‑off projects, bolted on top of normal ticket queues and project deadlines. Someone circulates a spreadsheet, books a string of interviews, asks for screenshots and log exports, and disappears for months until the next certification cycle. From the technical team’s perspective, most of the effort is interrupt‑driven: stop what you are doing, explain a process that already works, dig out evidence that lives in multiple tools, then repeat when a client asks similar questions.
This pattern is exhausting and breeds resentment. Engineers start to see audits as bureaucracy rather than a way to improve security. Worse, because reviews are treated as rare events, they often focus on documentation rather than real control operation; a policy that looks tidy on paper can pass, even if actual patching, access reviews or incident follow‑up are inconsistent. Supporting guidance for control 5.35 in ISO/IEC 27002:2022 emphasises proportionate, periodic independent reviews rather than prescribing large, infrequent projects, which gives you scope to design a lighter‑weight approach that still meets the intent.
Annex A.5.35 does not ask you to run massive, infrequent projects that paralyse delivery. It asks you to build a manageable, recurring way to check that your security arrangements are still suitable, adequate and effective, in a way your team can live with. You should not need to stop delivery work for days to support a review if you design it sensibly.
Turning luxury audits into a practical MSP advantage
You can turn luxury audits into an MSP advantage by treating independent review as proof that your multi‑tenant environment is genuinely under control. The same control that feels like overhead can become a lever for stronger sales, smoother client audits and fewer nasty surprises if you design it with your MSP model in mind. Independent review is one of the few places where you can show, with structured evidence, that your tools, processes and people operate reliably across many customers.
When you can hand a prospect a recent independent review summary, show how findings drove specific improvements, and explain how often you repeat this cycle, you instantly look more mature than providers who reply with generic policy PDFs. A platform such as ISMS.online can help here because it gives you a single place to plan reviews, assign independent reviewers, collect evidence references from your existing tools, and track findings to closure. Instead of scrambling through email and spreadsheets whenever Annex A.5.35 is mentioned, you can point to a live internal audit programme that is already running. That shift, from reactive ad‑hoc checks to a steady assurance rhythm, is at the heart of making this control feel like part of normal MSP operations rather than a bolt‑on compliance chore.
What Annex A.5.35 Really Requires in Practice for MSPs
Annex A.5.35 really requires you to plan and document objective checks on whether your overall security approach still works, not just whether policies exist. Explanations of Annex A.5.35 consistently highlight that organisations should periodically and independently review the suitability, adequacy and effectiveness of their information security arrangements, which goes beyond simply confirming that documentation is present. For an MSP, that means defining what "your approach to information security" covers, deciding when and how independent reviews will happen, and showing that review results lead to improvement. The control expects your approach to information security, and the way you implement it across people, processes and technology, to be reviewed by someone independent at planned intervals and when significant changes occur. When you translate that formal language into MSP‑friendly terms and make those decisions explicit, the control becomes much clearer, more manageable, and easier for auditors and customers to see as an actively managed practice rather than something left to chance.
About two thirds of organisations in the 2025 ISMS.online survey said the speed and volume of regulatory change are making compliance harder to sustain.
In plain language, Annex A.5.35 asks you to decide what you will review, how often you will review it, who will do the work, and what you will do with the results. First, decide what “your approach to managing information security” covers; for an MSP this usually includes your ISMS scope, core service lines, shared platforms, and internal corporate systems that support service delivery. Second, plan independent reviews at a sensible frequency, rather than waiting until a certification body or customer demands it. Third, ensure the people performing the review are not the same people who run the controls being checked, so there is no conflict of interest.
Fourth, agree criteria and methods in advance: for example, you might decide to review a sample of change tickets against your change management procedure, or to test that access reviews for privileged accounts happened as planned. Finally, record the results and act on them. That means producing a short report that states what was reviewed, what was found, what actions are needed, and who owns them. The standard does not dictate an exact format, but guidance on ISO 27001 documentation and audit evidence, such as material from specialist consultancies, makes it clear that auditors commonly look for documented plans, records of reviews, and evidence that reviews happen when you say they do, rather than relying on undocumented practice.
What “independent” really means for an MSP
For an MSP, "independent" means the reviewer can form an objective view without being the person who built or operates the controls under test. The word "independent" is where many smaller MSPs worry, especially when the security team is one person or a tiny group. Independence does not mean you must have a fully separate internal audit department: it means the individuals performing the review are not responsible for designing, implementing or operating the controls under examination, and are not subject to undue influence from those who are. Commentary on Annex A.5.35 and related ISO 27001 guidance emphasises role separation and objectivity as the core of independence for smaller organisations, rather than insisting on a dedicated audit function. In a small MSP, that can be achieved through proportionate governance, role separation and clear responsibilities, even if the number of people is limited.
You can use role rotation, cross‑functional reviewers and clear reporting lines to make that independence visible. For example, a service delivery director, operations manager or finance leader can oversee reviews of security controls, using structured checklists, while technical staff supply evidence and answer clarifying questions. Where full separation is impossible, you can use compensating measures such as having findings validated in management review meetings or bringing in an external consultant periodically for higher‑risk areas. Later, when you design independence patterns in more detail, these principles become the backbone of your approach.
Independent review vs internal audit vs BAU monitoring
Independent review, internal audit and day‑to‑day monitoring all support assurance, but they solve different problems. Many MSPs already perform change reviews, ticket quality checks, log monitoring and other routine activities; these are valuable but they are not the same as a formal independent review. Daily or weekly monitoring focuses on keeping services running and spotting incidents quickly. Internal audits, as described in ISO 27001 clause 9.2, are about verifying whether your ISMS conforms to the standard and to your own requirements. Clause 9.2 is explicit that internal audits are used to determine whether the ISMS conforms both to the organisation’s own requirements and to ISO 27001, and standard commentary on Annex A.5.35 builds on that by encouraging periodic, objective evaluation of your security arrangements as a whole.
Independent review in Annex A.5.35 sits alongside these and emphasises objective evaluation of your whole security approach, not just specific incidents or documents.
This distinction matters because auditors and clients will often ask both "How do you monitor security?" and "How do you independently review whether your security management is still effective?" You can answer the first with tooling and processes: security monitoring dashboards, remote management policies, change workflows. You answer the second with your independent review or internal audit programme. The most efficient MSPs design these elements so they reinforce each other: monitoring feeds evidence into audits, and independent reviews test whether monitoring and other controls are actually working as intended.
A simple comparison helps you position each activity:
| Activity type | Primary focus | Typical frequency |
|---|---|---|
| BAU monitoring | Detect and respond to issues in real time | Continuous or daily |
| Internal ISMS audit | Check ISMS conforms to ISO 27001 and your own requirements | Annual programme with cycles |
| Independent review (A.5.35) | Assess whether security approach remains suitable and effective | At planned intervals and after major change |
Having this picture ready makes it much easier to explain to auditors and clients how your different layers of assurance fit together, and where Annex A.5.35 lives in that picture. Visual: stacked diagram showing BAU monitoring, internal audit and independent review as three assurance layers.
Designing Independence in a Small or Mid‑Sized MSP
Designing independence in a small or mid‑sized MSP means separating review decisions from day‑to‑day control operation, even when you have limited people. Independence is easy to imagine in a large enterprise with an internal audit department and a separate chief information security officer. It is much harder when you are running a twenty‑person MSP where the same senior engineer designs controls, operates them, and answers security questionnaires. The good news is that Annex A.5.35 and typical auditor expectations allow proportionate independence: you can design structures that fit a 10‑, 50‑ or 150‑person MSP by separating roles and decision rights rather than hoping to hire an internal audit team overnight.
Independence patterns for different MSP sizes
The right independence pattern for your MSP depends on size, but the principle that no one signs off their own work remains constant. For a very small MSP, independence might mean that the managing director or operations lead commissions and signs off reviews, while a trusted colleague from a different function performs tests using agreed procedures. The person looking at backup controls should not be the same person who built the backup platform; however, they can still request and inspect evidence from that engineer. For a mid‑sized MSP, you can designate a security and compliance manager as the coordinator of internal audits and independent reviews, with reviewers drawn from finance, HR, operations or other teams who do not own the controls under review.
In larger MSPs, you might move closer to a classic three‑lines‑of‑defence model: service teams operate controls, a central risk and compliance function designs the framework, and an internal audit or quality assurance team performs independent testing and reports to senior leadership or the board. Whatever your size, the principle remains the same: reviewers must be able to form an objective view, escalate concerns without fear, and avoid signing off their own work. Documenting these patterns in an independence policy or section of your internal audit procedure will reassure auditors that you have thought this through and can scale the model as you grow.
Governance structures that demonstrate independence
Governance is what turns independence from an informal promise into something visible and testable. A simple, effective pattern is to make sure that the person responsible for the review programme reports, at least for this purpose, to someone other than the head of service delivery or the technical lead. For example, your independent review procedure might state that the review coordinator reports their findings directly to the managing director or a risk committee, even if they sit in the security team day‑to‑day. Management review minutes can then show that those findings were discussed, challenged and acted upon.
You can reinforce this with a clear RACI (Responsible, Accountable, Consulted, Informed) matrix. Control owners are responsible for operating controls; reviewers are responsible for testing and reporting; senior management is accountable for ensuring reviews happen and that findings are addressed. Staff who are consulted or informed should not be able to veto or edit findings to protect their own area. When your RACI and reporting lines make that separation obvious, auditors are more likely to be comfortable that your reviews are truly independent within the constraints of your size. Visual: simple RACI diagram showing separation between control owners, reviewers and leadership.
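The RACI separation described above can be sketched as a small data structure. This is purely illustrative: the activity names, roles and assignments are assumptions for one possible MSP setup, not anything prescribed by Annex A.5.35.

```python
# Illustrative RACI matrix for an independent review programme.
# Roles and activities are assumed examples, not mandated by the standard.
RACI = {
    "operate_controls": {"control_owner": "R", "reviewer": "I", "leadership": "A"},
    "test_and_report":  {"control_owner": "C", "reviewer": "R", "leadership": "A"},
    "approve_response": {"control_owner": "I", "reviewer": "C", "leadership": "R"},
}

def can_record_findings(role: str) -> bool:
    """Only the role responsible for testing records findings;
    control owners are consulted, never editors of their own results."""
    return RACI["test_and_report"][role] == "R"
```

A check like `can_record_findings("control_owner")` returning false is exactly the separation an auditor wants to see made explicit.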
Blending internal and external reviewers without outsourcing accountability
Blending internal and external reviewers lets you strengthen independence without losing control of decisions. Many MSPs are tempted to rely entirely on an external consultant once a year to tick the independence box. External expertise is extremely useful, especially for initial design, high‑risk areas or validating objectivity. However, if you only bring someone in annually and do nothing internally between visits, your review programme will be fragile and may miss important changes. The strongest pattern is usually a blend: you run a risk‑based internal review cycle through the year, then invite an external specialist to sample and challenge a subset or to focus on particularly sensitive services.
Critically, you cannot outsource accountability. Even when an external party performs tests, your organisation remains responsible for deciding which findings to accept, what actions to take, and how quickly to address them. Make that explicit in your governance: external reviewers provide input and assurance, but your management review decides and owns the response. When clients or certification bodies ask about Annex A.5.35, you can then explain that you have a standing internal programme with periodic independent challenge, rather than a once‑a‑year consultant visit. That sets you up to discuss how you prioritise work, which leads naturally into the question of risk‑based planning.
A Risk‑Based Internal Audit Programme That Doesn’t Swamp Engineers
A risk‑based internal audit programme lets you meet Annex A.5.35 without swamping engineers in never‑ending checks. The core idea is simple: focus review effort where failure would hurt you and your customers most, and sample the rest over time. Annex A.5.35 expects you to plan independent reviews at sensible intervals and after big changes; ISO 27001 clause 9.2 expects an internal audit programme for the ISMS. Commentary on these requirements notes that both the independent review control and the internal audit clause refer to planned intervals and coverage guided by risk, rather than rigid fixed schedules, so you have flexibility in how you design the programme.
Roughly 41% of surveyed organisations said maintaining digital resilience and adapting to cyber disruptions was one of their main information‑security challenges.
Audits feel lighter when they follow your risk map, not your inbox.
Start with a simple MSP risk model
A simple risk model for your services and controls is enough to drive a sensible programme. List your major service lines (managed networks, endpoint management, backup and recovery, identity and access management, managed security monitoring, cloud hosting) and, for each, rate the potential impact of a failure on confidentiality, integrity and availability for your clients. Consider factors like the sensitivity of data processed, regulatory exposure, contractual commitments and past incident history. You do not need a complex scoring system; high, medium and low can be enough as long as they are applied consistently.
Once you have this view, map security controls to those services and decide how often each combination needs independent review. For example, many organisations choose to look at higher‑risk areas quarterly or semi‑annually, medium‑risk areas annually, and lower‑risk areas on a rolling multi‑year cycle or opportunistic sampling. The aim is not to enforce a particular cadence, but to use risk to justify where you invest time. When auditors ask why you audit some things more often than others, you can point to this risk model instead of shrugging, and you can build your annual calendar on top of it.
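The risk-to-cadence mapping above can be captured in a few lines. A minimal sketch, assuming illustrative service names and the example cadences mentioned (quarterly for high risk, annual for medium, rolling samples for low); your own ratings and frequencies will differ.

```python
# Illustrative risk ratings per service line (names are assumptions).
RISK_RATINGS = {
    "identity and access management": "high",
    "backup and recovery": "high",
    "endpoint management": "medium",
    "managed networks": "medium",
    "internal corporate systems": "low",
}

# Example cadences from the text; pick values you can justify to auditors.
REVIEW_CADENCE = {
    "high": "quarterly",
    "medium": "annual",
    "low": "rolling multi-year sample",
}

def review_frequency(service: str) -> str:
    """Look up how often a service line gets an independent review."""
    return REVIEW_CADENCE[RISK_RATINGS[service]]
```

When an auditor asks why backup is reviewed quarterly but internal systems are sampled over several years, this table is the answer.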
Build an audit calendar that respects delivery work
An audit calendar that respects delivery work makes reviews feel like part of the job, not a disruption. With your risk model in hand, translate it into an annual or multi‑year audit calendar. For example, you might decide that in the first quarter you will review privileged access management for internal systems and key client platforms; in the second, you will look at patch management and vulnerability handling; in the third, your incident response process; and in the fourth, a cross‑cutting review of your ISMS documentation and management review process. Within each quarter, schedule specific weeks or days when evidence collection and interviews will occur, taking into account busy periods, major releases and known change freezes.
Involve operations and engineering leaders in this planning so they can flag potential clashes. If your development team has a major platform upgrade in a particular month, moving audit testing for that area to a quieter period will reduce friction without reducing assurance. Timebox activities: define how many hours reviewers and control owners are expected to spend during a cycle, and stick to it unless you find something serious. This discipline helps you avoid open‑ended audits that expand to fill all available time. It also shows auditors that you treat assurance as a planned process rather than a last‑minute scramble. Visual: simple quarterly audit calendar with risk tiers and indicative effort blocks.
Define a straightforward audit procedure your team can follow
A straightforward, written procedure turns good intentions into repeatable practice that any reviewer can follow. At minimum, your internal audit or independent review procedure should describe how scopes are selected, how criteria are defined, how sampling works, and how evidence and findings are recorded. For each review, the lead should produce a simple plan that states the objectives, the scope (systems, teams, time period), the criteria (policies, procedures, standards), and the methods (interviews, ticket sampling, log inspection, configuration checks). These seven steps capture the typical pattern for an independent review cycle.
Step 1 – Confirm scope and objectives
Agree what will be reviewed, why it matters, and which policies, services and time period are in scope.
Step 2 – Identify relevant policies, procedures and records
Collect the documents and records that describe how the control should operate before you start testing.
Step 3 – Define sampling criteria and sample sizes
Decide which tickets, logs or configurations you will test, and how many items you need for a fair sample.
Step 4 – Collect and test evidence against criteria
Pull the selected items from your systems and compare them to your documented procedures and standards.
Step 5 – Record observations, nonconformities and improvements
Write down what you saw, what did not match expectations, and where you noticed opportunities to improve.
Step 6 – Agree and log corrective actions with owners
Discuss findings with control owners, agree actions with due dates, and log them in a central register.
Step 7 – Report results into management review and risk register
Summarise the review for leadership and feed relevant findings into the risk register and management review agenda.
When everyone involved understands this pattern, reviews feel less like mysterious investigations and more like a known, bounded activity. It also makes it easier to explain your approach to auditors and clients, and to show how you keep the workload under control while still meeting A.5.35. You should not need to reinvent the process for every review; this procedure keeps expectations clear for both reviewers and engineers.
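The seven steps above can be sketched as a single review-cycle record, so every reviewer fills in the same fields. This is a hypothetical shape, not a mandated format; the field names and example values are assumptions.

```python
# Minimal sketch of one review cycle following the seven steps above.
from dataclasses import dataclass, field

@dataclass
class ReviewCycle:
    scope: str                    # Step 1: what is reviewed and why
    criteria: list                # Step 2: policies/procedures tested against
    sample_rule: str              # Step 3: how items are selected
    findings: list = field(default_factory=list)  # Step 5: observations
    actions: list = field(default_factory=list)   # Step 6: agreed corrections

    def summary(self) -> str:
        """Step 7: a one-line result for management review."""
        return f"{self.scope}: {len(self.findings)} findings, {len(self.actions)} actions"
```

A reviewer creates one `ReviewCycle` per scope, appends findings and actions as they test, and the `summary()` line feeds straight into the management review agenda.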
Low‑Friction Evidence: Using Logs, Tickets and Dashboards Instead of Interviews
You make independent reviews far less painful when you rely on logs, tickets and dashboards instead of constant interviews. Modern MSPs are rich in machine‑generated evidence, and Annex A.5.35 does not spell out particular methods for gathering it. Its focus, echoed by ISO/IEC 27002:2022 guidance for control 5.35, is on ensuring that reviews are objective and effective, which means you can lean heavily on data from your existing systems rather than defaulting to long meetings. Your MSP already generates tickets, change records, monitoring dashboards, configuration baselines, backup reports, authentication logs and more, and A.5.35 simply expects you to confirm, objectively, that your security arrangements are in place and working. When reviewers pull reports and samples from those systems first and only ask people when necessary, everyone saves time, disruption drops and the evidence is more convincing than reconstructed memories.
Use your tooling stack as the primary evidence source
Your tooling stack should be the primary evidence source for most independent review tests. Begin by listing the systems that already describe how your controls operate: your service desk and IT service management tools, your remote monitoring and management platform, log management or security information and event management, backup dashboards, identity platforms, and change management systems. For each key control area (access management, change control, patching, incident handling, backup and restore, vendor management) identify which system records the relevant events and decisions. During a review, your first step should be to pull reports or queries from these systems rather than asking engineers to dig through inboxes.
For example, to test whether incidents are classified and closed properly, you might sample a set of incident tickets from the last quarter and verify that each has a category, impact level, root‑cause analysis and closure notes. To review change management, you could examine change records for evidence of risk assessment, approvals and post‑implementation review. For backup, you might review summary reports showing success rates and test restores. These data‑driven checks are faster, less subjective and more convincing to auditors than informal explanations. They also let your engineers focus on fixing any gaps rather than repeatedly answering the same questions about past activity.
Build a reusable ticket and log query library
A reusable query library turns ad‑hoc evidence hunting into a repeatable routine. Capture the queries and filters you use for each control into a simple library. For instance, you might define saved searches like "all high‑impact incidents in the last three months", "changes affecting core client platforms", or "new privileged accounts created this quarter". During each review cycle, reviewers can run these saved queries, select a sample, and record their tests and conclusions. This avoids reinventing the wheel and reduces variability between reviewers. It also makes it easier to delegate evidence collection to someone outside the technical team under clear instructions.
Over time, you will find that some queries are useful not only for formal reviews but also for regular operational health checks. That is ideal: the more closely your independent review evidence aligns with the way you already manage services, the less it will feel like a separate burden. Remember to document any sampling rules (for example, always select at least ten items, a certain percentage of total activity, or at least one example per key customer segment) so that reviewers are not accused of cherry‑picking. Clear criteria support both fairness and perceived independence.
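A query library with an explicit sampling rule might look like the sketch below. The query strings are hypothetical placeholders for whatever syntax your service desk uses; the sampling thresholds mirror the "at least ten items or a percentage of activity" example, and the fixed seed makes the selection reproducible rather than cherry-picked.

```python
import random

# Saved searches per control area; the query syntax is an assumed placeholder
# for your actual service-desk or SIEM query language.
SAVED_QUERIES = {
    "high_impact_incidents": "type:incident impact:high created:last-90-days",
    "core_platform_changes": "type:change target:core-client-platforms",
    "new_privileged_accounts": "type:account privilege:admin created:this-quarter",
}

def select_sample(items: list, minimum: int = 10, fraction: float = 0.1, seed: int = 1) -> list:
    """Pick at least `minimum` items or `fraction` of total activity,
    whichever is larger, with a fixed seed so the sample is reproducible."""
    size = min(len(items), max(minimum, int(len(items) * fraction)))
    return random.Random(seed).sample(items, size)
```

Because the seed and the rule are written down, two different reviewers running the same cycle pull the same sample, which is exactly what perceived independence needs.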
Handling sensitive evidence safely in audit files
Handling sensitive evidence safely is part of running credible independent reviews. Independent reviews inevitably touch information that is sensitive: production logs, incident narratives, screenshots of configurations, or lists of privileged accounts. You must handle this material with the same care you apply to customer data in normal operations. That means limiting who can access audit working papers, storing them in controlled repositories, and thinking carefully about what is included in formal reports that might be shared more widely with clients or external auditors.
As a rule of thumb, keep detailed, potentially identifying evidence in internal working papers and summarise it in higher‑level reports using counts, patterns and redacted examples. If a ticket contains personal data or confidential customer information, remove or mask those elements before including it in an attachment. When in doubt, aggregate: stating that “ten out of twelve sampled incidents had complete root‑cause analysis” is usually enough for assurance without exposing names or specifics. A structured ISMS workspace or audit module can enforce access controls and retention rules for these records, helping you balance thorough testing with privacy and contractual obligations.
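The aggregate-rather-than-expose rule above is easy to make mechanical. A minimal sketch, assuming sampled tickets are represented as dictionaries with an illustrative `root_cause_analysis` flag; real records would carry more fields, all of which stay in the working papers.

```python
# Summarise sampled evidence as counts so formal reports stay free of
# names, ticket IDs and customer details. Field names are illustrative.
def summarise(sampled: list, check: str) -> str:
    passed = sum(1 for item in sampled if item.get(check))
    return f"{passed} out of {len(sampled)} sampled incidents had {check.replace('_', ' ')}"

# Hypothetical sample: ten complete incidents, two incomplete.
incidents = [{"root_cause_analysis": True}] * 10 + [{"root_cause_analysis": False}] * 2
```

Running `summarise(incidents, "root_cause_analysis")` yields the kind of statement the text recommends, with nothing identifying left in it.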
Turning Internal Audits into a Client‑Facing Evidence Engine
You get much more value from A.5.35 when internal audits double as a client‑facing evidence engine. If you treat independent reviews purely as an internal requirement, you will miss a significant part of their value. For MSPs, Annex A.5.35 can be the engine that powers smoother client audits, faster security questionnaires, stronger renewal conversations and even better margins. The key is to design your internal audit outputs so they can be partially reused, in a controlled way, as external assurance and become part of how you demonstrate reliability, not just how you satisfy an auditor. Enterprise customers increasingly expect evidence that their suppliers are actively testing controls, not just maintaining policies. Guidance on answering security questionnaires, such as articles aimed at CISOs and vendor managers, underscores how often customers now ask for examples of testing, internal audit findings and remediation activities rather than being satisfied with a policy excerpt alone.
If you can show that your MSP runs regular independent reviews, records findings, and follows through on improvements, you offer visible proof that your security is managed, not assumed.
Managing third‑party risk and tracking supplier compliance was cited as a top challenge by about 41% of organisations in the 2025 ISMS.online survey.
Design internal reports that are easy to reuse with clients
Internal reports that mirror typical client questions are far easier to reuse in sales and assurance conversations. When you write an independent review report, aim for a structure that mirrors typical client questions. State the control or topic, the objective of the test, the method used, the period covered, the sample characteristics, the result and any corrective actions. For example: “Objective: verify that quarterly access reviews are performed for administrator accounts. Method: sampled ten accounts across three core systems for the last two quarters; compared evidence of review and approval to the access management procedure. Result: eight of ten had complete evidence; two lacked sign‑off; corrective actions raised.”
If your reports follow this pattern, you can extract sections for client due‑diligence questionnaires or attach redacted summaries to demonstrate that you are actively testing controls. You do not need to share every detail; often a one‑ or two‑page summary per area, plus a statement of how many findings were raised and how many are still open, is enough. The more consistent your reporting format, the easier it is for account managers and security leads to answer external questions quickly and confidently.
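If you keep review reports in a structured form, extracting client‑safe summaries becomes trivial. As a minimal sketch of that pattern (the record type and all field names here are invented for illustration, not a prescribed schema):

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the report structure described above.
@dataclass
class ReviewRecord:
    control: str
    objective: str
    method: str
    period: str
    result: str
    corrective_actions: list[str] = field(default_factory=list)

record = ReviewRecord(
    control="Administrator access reviews",
    objective="Verify quarterly access reviews are performed for admin accounts",
    method="Sampled 10 accounts across 3 core systems; compared evidence to procedure",
    period="Last two quarters",
    result="8 of 10 had complete evidence; 2 lacked sign-off",
    corrective_actions=["Raise sign-off gaps with service owners"],
)

# A consistent structure makes redacted one- or two-page summaries easy to
# extract later, e.g. result plus open-action count, without the working papers.
print(record.result)
```

The same structure works just as well as a spreadsheet row or a form in an ISMS platform; the point is that every report answers the same questions in the same order.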
Map tests to frameworks and questionnaires your clients care about
Mapping your tests to client frameworks lets one review answer many different questionnaires. Most enterprise clients think in terms of their own frameworks: ISO 27001, SOC 2, widely used security frameworks, sector‑specific regulations or in‑house control catalogues. Framework comparison material, such as guidance that contrasts ISO 27001 with SOC 2 or explains how sector regulations map to control sets, shows how frequently organisations anchor supplier assurance in these structures before translating them into bespoke questionnaires. If you align your internal audit checklist with a unified control catalogue that maps your tests to these frameworks, you can answer a wide range of external requests with the same evidence. For instance, a single test of privileged access reviews might support Annex A requirements, commonly requested service‑organisation criteria and widely recognised identity management functions.
Maintaining this mapping in a central register, whether in a spreadsheet or, more effectively, in an ISMS platform, lets you look up which internal audit reports and evidence relate to each client question. When a vendor questionnaire arrives asking “How do you ensure timely patching?” you can point directly to your recent independent review of patch management, rather than assembling a new answer from scratch. Over time, this approach shortens response times, improves consistency between answers, and demonstrates to clients that you have a mature assurance model grounded in A.5.35.
Talking about findings without undermining confidence
Talking openly about findings, and what you did about them, builds more trust than pretending everything is perfect. Many MSPs worry that sharing anything related to internal findings will scare clients. In practice, sophisticated customers understand that any serious security programme will uncover weaknesses; what matters is how you respond. When you explain your independent review programme, frame it as a cycle of testing and improvement. For example: “We perform quarterly independent checks on our backup service. In the last cycle we identified gaps in test restore documentation, agreed corrective actions, and can show that those actions are now complete.”
This kind of narrative builds trust because it shows that you are willing to look critically at yourself and act on what you find. Avoid hiding issues; instead, put them in context, explain how you assessed risk, and describe the improvements you made. Your ability to show that Annex A.5.35 leads to tangible change (updated procedures, better monitoring, improved service levels) will often matter more to clients than a perfectly clean report. It also reinforces the idea that independent review is part of your value proposition, not just a box ticked for certification.
Manage all your compliance, all in one place
ISMS.online supports over 100 standards and regulations, giving you a single platform for all your compliance needs.
Governance, Metrics, KPIs and Typical Annex A.5.35 Gaps in MSPs
Governance, metrics and KPIs turn A.5.35 from a paperwork exercise into a living part of your ISMS. Independent review is not just an activity; it is part of your governance machinery. Without basic metrics and clear oversight, reviews can drift into a compliance ritual that nobody takes seriously. With the right metrics and rhythms, they become a consistent source of insight into how well your security arrangements are working. At the same time, many MSPs share similar gaps in their implementation of Annex A.5.35, which you can treat as design issues rather than personal failures.
KPIs that show your review programme is working
A small, focused set of KPIs can show whether your review programme is healthy without drowning you in numbers. You do not need dozens of indicators to manage Annex A.5.35 effectively. A short list that leadership understands is usually enough. Useful examples include:
In the 2025 survey, only around 29% of organisations reported receiving no fines for data‑protection failures, meaning a clear majority had been fined, with some penalties exceeding £250,000.
- Planned reviews completed on schedule: percentage delivered against your annual calendar.
- Findings per review: number and severity, to see whether you are still learning.
- Average time to close findings: how quickly you act on what you discover.
- Repeat findings: issues that reappear, signalling weak follow‑through.
- Coverage of high‑risk services: proportion of critical services independently reviewed in the last 12–18 months.
Tracking these over time helps you spot trends: are you consistently slipping review dates, are the same issues resurfacing, are high‑risk areas neglected? Present these metrics in management review meetings alongside commentary, not just as raw numbers. If you see a spike in findings around access management, you might decide to invest in additional tooling or training. If time to close findings is growing, that may signal resource constraints or unclear ownership that need attention.
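Most of these KPIs can be computed directly from a basic findings log. A minimal sketch, assuming a log with opened/closed dates and a repeat flag (all field names and data are invented for the illustration):

```python
from datetime import date

# Illustrative findings log; in practice this would come from your ISMS
# platform or a shared tracker, not a hard-coded list.
findings = [
    {"review": "Access mgmt Q1", "severity": "high",
     "opened": date(2025, 1, 10), "closed": date(2025, 2, 1), "repeat": False},
    {"review": "Backup Q1", "severity": "medium",
     "opened": date(2025, 2, 5), "closed": date(2025, 3, 20), "repeat": True},
    {"review": "Patch mgmt Q2", "severity": "low",
     "opened": date(2025, 4, 2), "closed": None, "repeat": False},
]

closed = [f for f in findings if f["closed"] is not None]

# Average time to close, in days, over resolved findings only.
avg_days_to_close = sum((f["closed"] - f["opened"]).days for f in closed) / len(closed)

# Repeat findings signal weak follow-through on earlier corrective actions.
repeat_rate = sum(f["repeat"] for f in findings) / len(findings)

print(f"Average days to close: {avg_days_to_close:.1f}")
print(f"Repeat finding rate: {repeat_rate:.0%}")
```

Presenting these figures with a sentence of commentary, as the paragraph above suggests, is what turns them from raw numbers into management insight.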
Common Annex A.5.35 gaps MSPs fall into
Many MSPs make similar mistakes when they first try to implement A.5.35, and recognising them early helps you avoid surprises. Across different organisations, recurring weaknesses show up in independent review programmes:
- No documented procedure: reviews are ad‑hoc and inconsistent.
- Weak independence: the same person designs, runs and “reviews” controls.
- Sporadic cadence: reviews cluster before audits instead of following a plan.
- Thin documentation: unclear scope, little evidence of tests, weak tracking of actions.
These gaps matter because they undermine both confidence and compliance. A certification auditor may raise nonconformities if they cannot see a structured, independent process. A client might question your maturity if you cannot produce recent review reports. Internal stakeholders lose trust if findings vanish into email threads. Treating these as common design issues makes it easier to address them constructively rather than defensively.
Quick wins you can deliver in the next 60–90 days
A focused 60–90‑day push can deliver visible progress and move your A.5.35 implementation towards a more credible footing. You do not need to solve every possible gap immediately. Start by writing or updating an independent review or internal audit procedure that defines purpose, scope, independence criteria, planning, execution and reporting. Next, create a simple twelve‑month review plan that lists which areas you will assess and when, linked to your risk model. Then, set up a basic log for findings and corrective actions with owners and due dates, ideally in a shared system rather than a personal spreadsheet.
Finally, run a pilot review using the new procedure, targeting a high‑value area such as access management, backup, or incident response. Use this cycle to refine your checklists, sampling approach and reporting format. Capture lessons learned and feed them into your governance process. If you are using an ISMS platform such as ISMS.online, configure its internal audit or review module to support this pattern, so future cycles are easier to plan and repeat. When auditors or clients ask how you handle independent review, you will then be able to describe a live, evolving programme rather than an aspiration.
Book a Demo With ISMS.online Today
ISMS.online helps you turn Annex A.5.35 from an onerous obligation into a structured, repeatable assurance cycle that fits how your MSP really works. Instead of juggling spreadsheets, email threads and scattered evidence, you can manage your entire review programme in one workspace: plan scopes and schedules, assign reviewers with appropriate separation from control owners, reference evidence from your existing tools, track findings, and demonstrate closure. A short, guided session is often the easiest way to see whether this approach matches your own A.5.35 ambitions.
See Annex A.5.35 working inside a structured ISMS
Seeing Annex A.5.35 modelled inside ISMS.online makes the control far easier to explain to colleagues, auditors and clients. You can explore built‑in templates for internal audits and independent reviews, map them to ISO 27001 clause 9.2 and Annex A controls, and tailor them for your service lines and customer commitments. Role‑based access and workflows help you show independence by clearly separating who operates controls from who reviews them. ISMS.online’s own internal‑audit guidance highlights how role‑based access, structured workflows and evidence registers support that separation in practice, making it easier to demonstrate objectivity when auditors ask who checks your controls.
Dashboards give leadership an immediate view of review status, open findings and remediation progress, supporting stronger management reviews and board updates.
Choose the next step that fits your role
The right next step depends on your role, and an initial session should feel like a practical exploration rather than a sales event. If you are a founder or operations leader, you can focus on how a structured review programme protects revenue, smooths client audits and reduces last‑minute firefighting. If you are a security or compliance lead, you can dive deeper into audit planning, evidence management and mapping to other frameworks such as SOC 2 or widely used security frameworks. Consultants and virtual CISOs can explore how to standardise Annex A.5.35 programmes across multiple MSP clients in separate workspaces.
You can see these patterns in practice in a short demo and then decide whether this environment is right for your MSP. Choose ISMS.online when you want Annex A.5.35 to support both assurance and growth, not just certification. If you value structured evidence, auditor‑friendly reporting and lower audit stress for your engineers, ISMS.online is ready to help your MSP build an independent review programme that works in real life as well as on paper.
Book a demo
Frequently Asked Questions
What does ISO 27001:2022 Annex A.5.35 actually require from an MSP?
Annex A.5.35 expects your MSP to run planned, documented and objective reviews of how you manage information security, not just a one‑off check before certification. You define what is in scope, how often it is reviewed, who reviews it, which criteria they use, and how you record and act on the results.
What does “independent review” look like for a managed service provider?
For most MSPs, Annex A.5.35 becomes real when you:
- Write a short procedure that explains how independent reviews or internal ISMS audits are planned, performed and reported.
- Build a calendar of reviews linked to your services, risks and major changes rather than relying on a single annual inspection.
- Appoint reviewers who are not responsible for operating the controls they are testing, so they can give an objective view.
- Capture review plans, samples, findings and corrective actions in a way you can show to auditors and customers.
That structure turns A.5.35 from a vague label into a concrete, repeatable assurance activity that fits your size, client profile and service catalogue.
How is Annex A.5.35 different from clause 9.2 internal audit?
Clause 9.2 is about auditing your ISMS against ISO 27001 and your own requirements, while Annex A.5.35 focuses on having your overall security arrangements reviewed independently to confirm they remain suitable, adequate and effective. Most MSPs sensibly cover both by running a single internal audit programme that:
- Tests whether your ISMS meets ISO 27001 and your policies (clause 9.2), and
- Includes regular, risk‑based checks that your controls actually work in practice (A.5.35).
Auditors care that reviews are planned, objective and lead to visible improvement, not just a once‑a‑year paperwork exercise.
How does ISMS.online help you evidence Annex A.5.35?
ISMS.online gives you a single workspace to:
- Store your independent review or internal audit procedure.
- Build an annual and multi‑year review plan linked to risks and services.
- Assign reviewers with roles separated from control owners.
- Reference evidence from ticketing, monitoring, backup and identity tools.
- Track findings, corrective actions and retests to closure.
When a certification body or enterprise customer asks “Show me your last independent review”, you can open the relevant item in ISMS.online, walk through the plan, samples and actions, and export a concise summary instead of hunting across folders and email threads.
If you want Annex A.5.35 to feel like a controlled assurance process rather than a vague requirement, centralising it in an ISMS.online workspace is usually the cleanest next step.
How can a small MSP demonstrate “independent” review with a tiny security team?
A small MSP can demonstrate independence by separating roles and reporting lines, even if you only have one or two security specialists. Independence here means the people who analyse and sign off the review are not the same people who design and operate the controls being tested.
What practical options exist when you have very few people?
In a 10–50 person MSP, independence often looks like:
- A senior operations, finance or managing director commissioning and owning the review.
- Someone outside day‑to‑day security (service delivery, finance, HR or an external advisor) following a checklist, inspecting evidence and writing the report.
- The security lead providing logs, tickets and explanations, but not “marking their own homework”.
You can strengthen this by:
- Writing down simple conflict‑of‑interest rules so a control owner cannot review their own area.
- Documenting who the reviewers report to and how their conclusions are escalated.
- Discussing findings in management review meetings where security is one of several perspectives.
- Using an external consultant occasionally for high‑risk topics or to validate your overall approach.
Auditors and clients mainly want to hear a clear story: who reviews what, why they are independent of the work under test, and how leadership uses the results.
How does ISMS.online support independence without extra headcount?
In ISMS.online you can:
- Assign different roles to control owners and reviewers.
- Control access to audit records so reviewers retain objectivity.
- Show reporting lines and review outcomes through Management Review records.
- Attach conflicts‑of‑interest statements and reviewer profiles to the relevant activities.
This makes your independence model under Annex A.5.35 much easier to explain and evidence, even when you do not have a formal internal audit department.
If you want to move from “trust us, we check things” to a documented independence model you can show on screen in a few clicks, ISMS.online gives you that structure without forcing you to grow your team.
How should an MSP design a risk‑based internal audit programme that doesn’t swamp engineers?
You keep reviews manageable by focusing effort where failure would hurt most and sampling everything else over time. That means using risk to drive your audit calendar instead of trying to inspect every control in depth every year.
How do you decide what to review and how often?
A practical pattern is to:
- Map core services (managed networks, backup, identity, monitoring, incident response) to confidentiality, integrity and availability impact.
- Rate services and control areas high, medium or low using data sensitivity, regulatory exposure and past incidents.
- Plan your review calendar so high‑risk topics (privileged access, patching, restore tests, incident handling) get more frequent reviews and slightly deeper sampling.
- Rotate lower‑risk areas on a longer cycle, rather than ignoring them.
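The cadence logic above is easy to make explicit and defensible. A minimal sketch of deriving review frequencies from risk ratings (the service names, ratings and intervals are illustrative assumptions, not recommendations):

```python
# Hypothetical mapping from risk rating to review interval, in months.
REVIEW_INTERVAL_MONTHS = {"high": 3, "medium": 6, "low": 18}

# Illustrative service ratings; in practice these come from your risk register.
services = {
    "privileged access": "high",
    "backup and restore": "high",
    "patch deployment": "medium",
    "asset inventory": "low",
}

def reviews_per_year(rating: str) -> float:
    """How many reviews a year a given risk rating implies."""
    return 12 / REVIEW_INTERVAL_MONTHS[rating]

# A simple calendar summary: high-risk areas get quarterly attention,
# low-risk areas rotate on a longer cycle instead of being ignored.
for service, rating in sorted(services.items()):
    print(f"{service}: every {REVIEW_INTERVAL_MONTHS[rating]} months "
          f"({reviews_per_year(rating):.1f}/year)")
```

Writing the rating‑to‑interval rule down, even this crudely, is what lets you answer “Why this frequency?” with a model rather than a shrug.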
Each review can follow a lightweight, repeatable process:
- Confirm scope and objectives in a short plan.
- Identify criteria: policies, contract commitments, any external standards.
- Define samples: tickets, change records, logs, reports.
- Test the samples and record evidence.
- Capture findings, root causes and agreed actions with owners and dates.
By time‑boxing how many hours reviewers and engineers are expected to spend, and aligning reviews with existing rhythms (sprints, CAB meetings, maintenance windows), you avoid the “audit that eats the quarter”. Engineers know when reviews are coming, what will be asked and how long it will take, so Annex A.5.35 feels like part of normal work rather than a disruptive side‑project.
How does ISMS.online make a risk‑based programme easier to run?
ISMS.online helps you:
- Build a risk‑based audit schedule linked to services, assets and ISO 27001 controls.
- Reuse templates for audit plans, checklists and reports so each review follows the same simple pattern.
- Assign and track actions, deadlines and retests in one place.
- See at a glance which areas have been reviewed, which are due and where repeat findings appear.
That structure keeps the programme lean but effective. If you want to show that you take a risk‑based approach without turning audits into a full‑time job, using ISMS.online as the hub for your Annex A.5.35 reviews is an obvious move.
What evidence should an MSP collect to prove Annex A.5.35 is implemented effectively?
To satisfy Annex A.5.35 you need to show that independent reviews are happening and that they test real control operation rather than just confirming documents exist. A small, consistent evidence set usually gives auditors and customers the confidence they expect.
Which documents and artefacts do auditors commonly look for?
Typical evidence includes:
- A short, documented procedure for internal ISMS audits or independent reviews.
- An annual or multi‑year plan setting out what will be reviewed, when and by whom.
- Individual review scopes or plans describing objectives, criteria and samples.
- Working papers or evidence lists showing sampled tickets, changes, backup reports, access reviews, incident logs and similar records.
- Clear records of findings, root causes and opportunities for improvement.
- A corrective‑action log with owners, due dates and closure evidence.
- Management review minutes where results and decisions are visible to leadership.
Most of the raw material already exists in your toolset. Service desk tickets, change records and monitoring dashboards can all serve as independent review evidence if you choose representative samples and tie them to specific tests and conclusions. You do not need to hoard every log; you need enough to show that someone looked at real activity and made an objective judgement.
Over a few cycles, you will naturally assemble an “assurance pack” that becomes invaluable for vendor questionnaires, customer audits and recertification.
How does ISMS.online help you organise and retrieve that evidence?
With ISMS.online you can:
- Link each review to the relevant controls, risks and services.
- Attach or reference evidence from operational tools without duplicating everything.
- Maintain a single register of findings and corrective actions across all reviews.
- Generate exports or summaries tailored to auditors or customers.
Instead of scrambling through emails, screenshots and shared drives when someone says “Prove this control was independently reviewed”, you can show the review, samples and actions from a single ISMS.online screen. That makes Annex A.5.35 much less stressful for your team and more convincing for outsiders.
How often should an MSP run independent reviews under Annex A.5.35, and how do you justify your schedule?
Annex A.5.35 says reviews must take place at planned intervals and after significant changes, but it leaves the exact frequency to your risk‑based judgement. The key is that your schedule makes sense when you explain it against your services, contracts and incident history.
What does a sensible review cadence look like for MSPs?
Many MSPs use a structure like:
- One formal, full‑scope independent review each year covering the ISMS and core services.
- Quarterly or six‑monthly, narrower reviews for high‑risk topics such as privileged access, patch deployment, backup restore success or incident handling.
You can then justify your choices by:
- Linking frequencies to your risk register and service catalogue, for example reviewing services that handle regulated data or large contracts more often.
- Triggering extra reviews after major platform changes, large customer onboardings or serious incidents.
- Adjusting cadence using trend data: controls that consistently perform well may move to a slightly longer cycle, while repeated issues tighten the schedule.
When auditors or customers ask “Why this frequency?”, being able to point to a written risk model and change history is far stronger than quoting a rule of thumb.
How does ISMS.online help you defend and adapt your cadence?
In ISMS.online you can:
- Record the rationale for each review’s frequency against specific services, controls and risks.
- See upcoming, in‑progress and overdue reviews in one place.
- Tie reviews to incidents and changes so you can show when additional checks were triggered.
- Give leadership a simple view of assurance coverage and trends over time.
If you want Annex A.5.35 to feel like a live, risk‑driven process you can explain in plain language, capturing your schedule and rationale in ISMS.online is an efficient way to get there.
How can MSPs turn Annex A.5.35 internal audits into a client‑facing assurance asset?
You can turn your internal reviews into a commercial asset by designing them to answer the questions your customers ask during due‑diligence and renewals. When Annex A.5.35 testing is built with clients in mind, it becomes material for stronger security assurances instead of just an internal control.
How do you shape reviews so they support sales and renewals?
A simple pattern that works well is to document each review so you can easily re‑use parts in customer conversations:
- State the control objective in language a customer will recognise, such as “Backups can be restored within agreed times.”
- Describe the test performed: the sample size, period and methods used.
- Summarise results and key metrics, including any issues found.
- Record corrective actions and whether they have been completed.
From there, you can maintain a standard assurance pack that combines:
- An overview of your Annex A.5.35 review programme and scope.
- Recent high‑level results and trend metrics, like time to resolve findings.
- Confirmation that no unresolved critical issues remain.
- Carefully redacted examples of specific tests where appropriate.
When a prospect asks “How do you know backups work?” or “How often do you re‑check privileged access?”, having a recent independent review summary to share, rather than just a policy line, sends a far stronger signal about how you run your MSP.
How does ISMS.online help you reuse internal audit outputs with clients?
ISMS.online allows you to:
- Tag review findings and reports against specific services and controls that customers care about.
- Export concise summaries or evidence lists that align with common questionnaires and frameworks.
- Maintain a controlled set of customer‑safe extracts while keeping detailed working papers private.
That makes it much easier to build and maintain a repeatable assurance pack that supports new deals, renewals and vendor due‑diligence checks, while keeping Annex A.5.35 firmly grounded in how you actually run your services.
If you want internal audits to protect revenue as well as reduce risk, using ISMS.online to shape and share your A.5.35 outputs is a practical way to start.
How does ISMS.online make Annex A.5.35 easier to implement and sustain for MSPs?
ISMS.online gives your MSP a structured home for the entire Annex A.5.35 lifecycle, from planning and independence through to evidence, corrective actions and management review. That turns independent reviews into a predictable part of your ISMS rather than an annual scramble.
What does Annex A.5.35 look like inside ISMS.online?
Within one ISMS.online environment you can:
- Create and maintain a risk‑based internal audit or independent review schedule.
- Assign reviewers, separate their roles from control owners and manage conflicts of interest.
- Link each review to relevant ISO 27001 controls, services, risks, incidents and changes.
- Attach or reference evidence from ticketing, monitoring, backup and identity systems.
- Log findings, corrective actions and retests, and track status through dashboards and Management Review records.
For founders and operations leaders, that means Annex A.5.35 becomes part of how you protect monthly recurring revenue and reassure enterprise customers, rather than a last‑minute compliance task.
For security and compliance leads, it means you can show certification auditors exactly how your independent review control works and answer “show me” questions with live data rather than static documents.
For consultants and virtual CISOs, ISMS.online provides a repeatable pattern for Annex A.5.35 you can roll out across multiple MSP clients, using consistent plans, templates and reporting while tailoring scope to each environment.
If you want independent reviews to support both assurance and growth, not just tick a box against a control, seeing Annex A.5.35 running in ISMS.online is often the clearest way to decide how it should sit inside your ISMS and any Annex L‑style integrated management system you are building.