
What’s Behind the “Right to Explanation” in the EU AI Act, and Why It Changes How Your Team Must Operate

Compliance is no longer about hiding complexity inside a technical appendix or impressing auditors with policy prose. Article 86 of the EU AI Act moves the spotlight: can your team or platform explain an individual AI decision-clearly, immediately, and in plain language-to a person it impacts? That’s the new compliance minimum. If a user can’t get a straight answer, your certifications and security reports aren’t worth the paper they’re printed on.

If your team can’t justify an AI decision to an ordinary user, your policies and technical paperwork mean nothing.

Article 86 cuts through PR noise. It demands not just a snapshot of how AI works in theory, but a step-by-step rationale for any decision that directly affects legal rights, employment, access to loans, insurance, healthcare, or similar societal lifelines. It isn’t interested in a white paper or PowerPoint. Explanations must be available, understandable, and concrete-and the expectation is that both lay users and regulators can interrogate a decision’s origin, rationale, impact, and recourse.

The effect is immediate. With Article 86, regulatory and legal teams are no longer the sole audience. Now, compliance, security, product, and engineering leaders share responsibility: can you trace a decision to its source, show what triggered it, explain the real-world consequence, and allow for redress or challenge? Anything less is, by design, non-compliance under EU law (European Commission, EU AI Act 2024).

Failure isn’t trivial. If just a handful of users or staff can’t understand or access the logic behind a major AI decision, you create regulatory exposure, reputational damage, and rapidly erode trust. You lose the ability to defend the business if the press, users, or authorities come calling. It only takes one broken explanation chain for the whole operation to look suspect.


ISO 42001-The Backbone for Real, Defensible Article 86 Compliance

Article 86 sets a strict requirement-but leaves “how” excruciatingly open-ended. That’s risk in itself. ISO 42001 steps in as the pragmatic backbone: it spells out the artefacts, operational processes, and checks needed to turn explanation rights from theory into actionable, provable reality.

When you follow ISO 42001, you build a system that is audit-ready from the ground up. No more scrambling for emails, version histories, or back-channel technical write-ups. Here’s what it looks like in practice:

  • Clause 6.1.4: Every potentially high-risk AI outcome must be mapped to a *documented impact assessment*. This connects system outputs directly to human consequences-making the stakes traceable.
  • Annex A.5.2: All explanations and transparency artefacts must be *logged and retrievable*-not just passively written into static documentation.
  • Clauses 8.2, A.8.2, and A.8.4: Every user explanation request must trigger a reviewable, comprehensible answer in accessible language; and users must know exactly *how* to challenge or appeal a decision.
  • Clauses 10.1 & 10.2: The entire process-every request, gap, and update-must feed continuous improvement, with triggers forcing root-cause analysis if things fall short.

Below, see how Article 86 requirements map directly to ISO 42001 controls and practical artefacts:

Article 86 Requirement | ISO 42001 Clause/Annex | Tangible Artefact Example
Explain logic/core factors | A.5.2, A.6.2.7, A.8.2 | User-facing explanation letter or dashboard
Detail personal consequences | 6.1.4, A.5.4, 8.4 | Consequence summary given to user
Use plain, accessible language | 7.3, 8.2, A.8.2, A.8.4 | Jargon-free notification copy
Track/respond to requests | 8.1, 7.4, A.8.5, 10.1/10.2 | Request log, dashboard export
Prove with evidence trails | 9.1, 10.2, A.5.27 | Log extracts, audit review reports

ISO 42001 moves your right to explanation from theory to evidence-making transparency a daily practice, not an aspiration. (ISMS.online, ISO 42001 and Explainability)

The result: audit readiness isn’t paperwork for inspection-it’s living, accessible proof. You reduce risk, show ongoing discipline, and have the tools to earn user trust on demand.




Everything you need for ISO 42001, in ISMS.online

Structured content, mapped risks and built-in workflows to help you govern AI responsibly and with confidence.




“Audit-Ready” Article 86 Explanations: What Regulators and Users Actually Expect

Bluffing or hiding behind “trade secret” language is over. Both users and regulators can-and will-ask you to prove the origin, rationale, and consequence of any significant AI decision. Here’s what an audit-proof, user-facing explanation looks like in practice:

Example: Real-World User Decision Explanation

```
Dear [User Name],
Our AI system has reached the following decision:

  • Decision: Denied
  • Key Reasons: (a) Insufficient employment record; (b) Declared income below necessary minimum; (c) Late payment in last quarter.
  • Policy Reference: Applications are refused for >1 late payment in a six-month period.
  • Next Steps: You have 30 days to appeal or provide new documentation via our secure portal. Support available.
```

The explanation must allow the average user-or any external advisor-to understand the ruling, trace the factors, and see their immediate rights to challenge.
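A letter like the one above can be generated from structured decision data rather than written by hand, which keeps wording consistent across cases. The following is a minimal sketch only; the field names (`user_name`, `reasons`, `policy_ref`, `appeal_days`) are illustrative assumptions, not names prescribed by Article 86 or ISO 42001.

```python
from string import Template

# Hypothetical letter template; placeholders are illustrative assumptions.
LETTER = Template(
    "Dear $user_name,\n"
    "Our AI system has reached the following decision:\n"
    "  Decision: $decision\n"
    "  Key Reasons: $reasons\n"
    "  Policy Reference: $policy_ref\n"
    "  Next Steps: You have $appeal_days days to appeal or provide new "
    "documentation via our secure portal.\n"
)

def render_explanation(user_name, decision, reasons, policy_ref, appeal_days=30):
    """Fill the template from structured decision data, joining the reasons
    into one plain-language line."""
    return LETTER.substitute(
        user_name=user_name,
        decision=decision,
        reasons="; ".join(reasons),
        policy_ref=policy_ref,
        appeal_days=appeal_days,
    )

letter = render_explanation(
    "A. Person",
    "Denied",
    ["Insufficient employment record", "Declared income below necessary minimum"],
    "Refusal for >1 late payment in a six-month period",
)
print(letter)
```

Because the inputs are structured, the same data can also feed the request log and the audit trail, so the explanation and its evidence never drift apart.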

Request Tracking & Audit Chain Example

Request ID | Date | User | Channel | Decision | Response Time | Status | Notes
20034 | 2024-06-19 | A.P. | Web form | Denied | 36h | Closed | Email sent
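A tracking record like the row above is easy to keep as an append-only CSV log. This is a sketch under assumptions: the column names mirror the table, but the storage format and helper name are illustrative, not mandated by the standard.

```python
import csv
import io

# Columns mirror the request-tracking table; CSV storage is an assumption.
FIELDS = ["request_id", "date", "user", "channel", "decision",
          "response_time_h", "status", "notes"]

def log_request(stream, row: dict) -> None:
    """Append one request record to an open log stream."""
    csv.DictWriter(stream, fieldnames=FIELDS).writerow(row)

# In-memory buffer stands in for the real log file.
buf = io.StringIO()
csv.DictWriter(buf, fieldnames=FIELDS).writeheader()
log_request(buf, {
    "request_id": 20034,
    "date": "2024-06-19",
    "user": "A.P.",
    "channel": "Web form",
    "decision": "Denied",
    "response_time_h": 36,
    "status": "Closed",
    "notes": "Email sent",
})
print(buf.getvalue())
```

An append-only file with fixed columns is trivially exportable for auditors, which is exactly the "dashboard export" artefact the mapping table calls for.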

Model Impact Assessment (For Auditors)

Model: CreditEligibilityAI v2.2
Key Criteria: Employment verified, minimum income met, payment on record.
Bias Checks: Audited quarterly for protected attributes.
Recent updates: Templates updated June 2024 as per ISMS.online best practice.

Your test: Can a user, supervisor, or outside regulator independently trace the path from system trigger to outcome, then drill down to rationale and recourse, with artefacts that survive legal and press scrutiny? If not, the explanation fails Article 86.




How to Build a Robust Article 86 Explanation Process-And Prove It on Demand

Talk is cheap; compliance is about robust, retrievable process control. Your team must deliver explanations-with chain of custody, timestamps, decision rationale, and proof of user receipt-on demand, every time. ISO 42001 makes these controls enforceable and mandatory, not “nice to have”.

Core Workflow for Article 86 Compliance

  1. User makes a request-via portal, email, or chat.
  2. Automated log-system assigns unique ID, captures channel and timestamp.
  3. User acknowledgement-documented confirmation, estimated timeline provided.
  4. Staff gather evidence-decision logs, input data, and model context.
  5. Draft explanation-clear, jargon-free, context-specific, with legal review if necessary.
  6. Deliver to user-via requested channel, with full traceability.
  7. Close and review-status flagged, archived, included in the next periodic process audit.
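The seven steps above amount to an auditable event trail: a unique ID assigned at intake, then a timestamped record for each subsequent step. A minimal sketch, with class, method, and step names as assumptions rather than anything the standard prescribes:

```python
import uuid
from datetime import datetime, timezone

# Step names loosely follow the workflow above; all identifiers are illustrative.
ALLOWED_STEPS = ["intake", "logged", "acknowledged", "evidence_gathered",
                 "drafted", "delivered", "closed"]

class ExplanationRequest:
    """One user request, recorded as an append-only list of timestamped events."""

    def __init__(self, channel: str):
        self.request_id = uuid.uuid4().hex[:8]  # unique ID assigned at intake
        self.events = []
        self.record("intake", channel=channel)

    def record(self, step: str, **details):
        if step not in ALLOWED_STEPS:
            raise ValueError(f"unknown step: {step}")
        # Each event carries a UTC timestamp, forming the chain of custody.
        self.events.append({
            "step": step,
            "at": datetime.now(timezone.utc).isoformat(),
            **details,
        })

req = ExplanationRequest(channel="portal")
req.record("acknowledged", eta_days=5)
req.record("delivered", channel="email")
req.record("closed")
print(req.request_id, [e["step"] for e in req.events])
```

Rejecting unknown step names at write time is one cheap way to stop the log drifting from the documented process, which is precisely the gap Clauses 10.1/10.2 exist to catch.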

ISO 42001 operationalises each step-missing logs, vague answer templates, or excess delay each trigger an improvement cycle (Clause 10.1/10.2), closing the audit gap before it becomes a crisis.

The only defensible missing artefact is one logged as a recognised gap, under review, and tracked to closure with evidence.

This system is your frontline defence-not just against fines, but against loss of user and market trust.




ISMS.online supports over 100 standards and regulations, giving you a single platform for all your compliance needs.





What Makes a User Explanation “Compliant” and Trustworthy Instead of Just Legally Safe?

There’s a significant difference between legal formality and real-world, user-trusted explanation. True compliance does not hide behind complexity or legalese. Here’s what sets trustworthy explanations apart:

The Five Pillars of Compliant Explanations

  • Plain, direct language: “Application declined due to late payment,” not “violated system anomaly thresholds”.
  • Clear rationale, not mystery: Explicitly list every input or factor that drove the result.
  • Actionable recourse: Crystal-clear next moves for appeals, correction, or support.
  • Direct feedback channel: Users can request clarification or challenge decisions instantly.
  • Repeatable, but tailored: Templates guarantee consistency, but always reflect the user’s unique circumstances.

Audit these explanations with real user queries and third-party reviews each quarter. Version templates rigorously. Compliance isn’t static: every new model, policy, or outcome type triggers immediate review and update. With ISMS.online, templates and logic are always current-and mapped directly to system events.

Explanations only ‘count’ when a real user or outsider can quickly grasp them and meaningfully respond. (ISMS.online, Sample Artefact Pack)

Your best evidence against regulatory overreach or major complaints is a mapped artefact pack-each policy, event type, and process with a live template and audit trail.




Maintaining Article 86 Audit-Readiness Through Continuous Proof and Process Evolution

Regulators and users expect more than “annual compliance.” They expect living, evolving evidence-that every process, template, and artefact stays up to date, real-world tested, and never drifts from actual practice.

How To Stay Continuously Audit-Ready

  • Clause 9.1: Track every request, closure, and performance outcome in detail.
  • Clauses 10.1/10.2: Failures become fuel for root-cause investigation and targeted process redesign.
  • Quarterly tabletop audits: Simulate live regulatory challenge, test real explanations, and stress-test evidence chains.
  • External validation cycles: Bring in outside compliance partners, like ISMS.online, to audit process integrity and artefact freshness.

The gap between what’s on paper and what’s practised is the leading cause of regulatory penalties in explainability audits. (ISMS.online, Explainability in Practice)

Modern automation tools-feature-complete on platforms like ISMS.online-version-control your templates, archive every explanation, and provide full traceability for every artefact and process change. No more scramble when the call comes.





Embed, expand and scale your compliance, without the mess. ISMS.online gives you the resilience and confidence to grow securely.




Access the Article 86 Compliance Artefact Pack-Transform Uncertainty Into Confidence

Building in compliance beats retrofitting under pressure. Practical, sample-rooted artefacts make every part of your explanation process robust-at speed, and at scale.

What your team gets:

  • User explanation templates (email, dashboard, letter)
  • Logging/tracking forms (CSV, workflow-ready)
  • Impact/consequence statement sheets, matched to decision type
  • Quarterly, audit-driven process checklists for ISO 42001 alignment

Ready-to-implement artefacts have changed our position from hoping we were compliant to confidently passing regulators’ checks, zero findings. (ISMS.online, Download Compliance Pack)

Link your artefact pack directly to your ISMS.online dashboard for seamless updates, reminder prompts, and one-click retrieval. Audit panic is replaced by confident, transparent delivery-every time, with evidence.




Secure Your Audit-Readiness with ISMS.online-No Gaps, No Spin, Just Proof

No regulator or board cares about what you meant to do, or how strong your internal intentions were. They care about evidence: traceable explanations, version-controlled artefacts, robust logs.

ISMS.online ensures you’re not guessing when it matters. Instead, your compliance and security leaders move fast:

  • Simulate real audits, step through hands-on decision reviews, and close process gaps before they turn into risks.
  • Use ISO 42001-aligned checklists, versioned templates, and a unified artefact dashboard to transform audit prep from a scramble to a routine.
  • Protect every artefact with streamlined access controls, so security and compliance integrity are never second-guessed.

With ISMS.online, right to explanation isn’t a scramble, it’s embedded. Artefact management, user comms, and chain-of-evidence tracking are features-not afterthoughts. (ISMS.online, Audit-Ready Platform)

When a challenge arrives, your only answer is proof-ready, real, and respected by every regulator and stakeholder. No excuses. Build your system for explainability, and you win trust, resilience, and freedom to operate on your own terms.



Frequently Asked Questions

What makes the “right to explanation” in Article 86 of the EU AI Act a disruptive shift from standard transparency obligations?

The “right to explanation” in Article 86 forces your organisation to give every affected individual a concrete, plain-language account of how high-risk AI influenced their outcome-not a generic statement or privacy notice, but a personalised walkthrough that exposes the logic behind the decision and what it means for that user. In short: the law expects you to put yourself in the person’s shoes and defend every automated judgement, with a route for them to question or challenge the result.

A compliance badge is worthless if your user is left guessing-Article 86 makes the explanation their power, not just your paperwork.

Most transparency regimes churn out high-level system info, privacy policies, or broad algorithmic sketches. Article 86 sweeps all that into the background and insists the actual user gets a step-by-step breakdown, tied to the specifics of their case. It’s no longer enough to broadcast technical documentation and hope for the best. The risk flips: failure to supply clear, relevant explanations turns every automated action into a potential investigation, a complaint, or even a fine.

What sets Article 86’s explanation demands apart?

  • Explanations must be specific, actionable, and tailored per affected person, not one size fits all.
  • You’re judged on your ability to show decision logic, contributing factors, and individual impact-not just handwaving at “AI was used.”
  • Each explanation must expose a path for review or appeal, signalling genuine accountability.

Traditional transparency buries users in policy; Article 86 wants them to walk away with real answers, ready to hold you to account if those answers don’t add up.

At a glance: Standard Transparency vs. Article 86 Right to Explanation

Requirement | Standard Transparency | Article 86 Explanation
Audience | General public | Directly impacted user
Content Depth | Generic info, data use | Case-specific logic, outcomes
Format | Policy, docs, help pages | Personalised, lay-language
Proof | Published statement | On-demand, logged artefact
User Empowerment | Informed on process | Can review, challenge, appeal

How does ISO 42001 turn the “right to explanation” from an obligation into an operational system you can defend?

ISO 42001 doesn’t just align with Article 86-it hardwires explainability into your daily business. Instead of ad hoc responses or panic-written templates, you get a backbone of processes: every explanation request, draft, log, and delivery is mapped, versioned, and tested. Where the AI Act gives users leverage, the standard lets your team show receipts at every step, resisting scrutiny from regulators, partners, or anyone else.

  • Full Decision Logging (Annex A.5.2, A.6.2.7): Every AI outcome is documented-no explanations “lost in the system.”
  • User-Centric Templates (A.8.2, A.8.4, Clause 7.3): Explanations are drafted for humans, not just lawyers. Templates are managed, not left to rot.
  • Request & Delivery Tracking (Clauses 8.1, 10.1, 10.2): Every user inquiry, response, challenge, and escalation is stored, timestamped, and traceable.
  • Continuous Validation (Clauses 9.1, 10.2): Your system spots bottlenecks and shortfalls-then tracks and verifies every fix, not just papering over mistakes.

Relying on memory or scattered files is an open invitation for failure; ISO 42001 backs every explanation with proof you can show on demand.

The result: your organisation transforms explainability from a blunt compliance stick into a robust, defensible system. When someone asks for an answer-or for proof you gave it correctly-you deliver with confidence, and a clear trail to back you up.

Key proof points: Article 86 demands vs ISO 42001 evidence

AI Act Demand | ISO 42001 Control(s) | Real-World Evidence
Personalised explanation | 7.3, A.8.2 | User-ready explanation file
Specific decision and logic breakdown | A.5.2, A.6.2.7 | Model rationale snapshot
Impact and corrections described | 6.1.4, A.5.4, 8.4 | Individualised impact note
Timeliness and traceability | 8.1, 7.4, 10.1, A.8.5 | Complete audit log
Correction/appeal enablement | A.8.4, 10.2 | Request/closure records

Which documentation and artefacts actually satisfy Article 86, and how should you maintain them?

Talk is just liability. Article 86 compliance stands or fails based on your ability to cough up proof: artefacts that show, for any request or decision, what you did, when you did it, and why it holds water. Passing an audit, defending against a regulator, or just keeping partnerships live all comes down to ready, indexed documentation-not assurance vapour.

  • Living template library: Create, review, and version actual explanation texts-mapped to every risk scenario, business line, and user group you touch.
  • End-to-end request logs: Take, timestamp, and track every explanation request, assigning handlers and confirming outcomes.
  • Model/logic artefacts: For any challenged decision, reconstruct the actual path: what data went in, how the AI did its work, and what factors mattered.
  • Staff manuals/playbooks: Lock in training and update routines for every staffer in the chain-impossible for responsibility to get lost in a reorg.
  • Improvement journals and audit trails: Keep snapshots of every exception, fix, audit, and process change-showing the system doesn’t just exist, but actually gets better.

Every artefact’s value increases when it’s versioned, reviewed, and coupled with an “owner.” If a GDPR inspector walked in today, could you trace any user’s request from signal to closure, proving every claim? That’s the standard.

The must-have suite for ongoing Article 86 defence

Artefact | What It Demonstrates
Explanation templates | User comprehension and consistency
Logs (request/fulfilment/closure) | Process reliability and compliance
Model/impact documentation | Technical explainability, accuracy
Training/playbooks | Human factor resilience
Audit/fix logs | Proactive learning, not box-ticking

How do you structure a practical, scalable Article 86/ISO 42001 explanation workflow?

Scalability isn’t theoretical-it means holding up under user demand, system change, and probe-level scrutiny without breaking down. The smart move is to build an atomic, role-driven chain: every request logged, each explanation tracked from draft to delivery, with real humans able to review, escalate, and prove it worked.

Reference architecture-operational steps that work

  1. Request intake: Catch all user requests wherever they come from-web, email, phone. Assign a unique ID and confirmation on the spot.
  2. Immediate decision logging: Securely snapshot the exact AI output, input, logic, and person assigned to answer, without delay.
  3. Explanation drafting: Use up-to-date templates, customised to the incident and individual; avoid stale, one-size-fits-most language.
  4. Human review: A real expert reviews before sending-every explanation should stand up in front of a regulator, partner, or auditor.
  5. Delivery and engagement: Return the answer using the person’s preferred channel; confirm receipt, and spot misunderstandings quickly.
  6. Closure and performance tracking: Mark the case closed, update logs, and use every outcome to scan for errors, escalations, or policy gaps.
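Step 2 above, immediate decision logging, is the part most teams get wrong: the snapshot has to be taken at decision time, not reconstructed later. A hedged sketch of what such a snapshot might contain; the function name, field names, and hashing choice are all illustrative assumptions:

```python
import hashlib
import json
from datetime import datetime, timezone

def snapshot_decision(model_version, inputs, output, handler):
    """Capture the exact inputs, output, model version, and assigned handler
    at decision time, with a content hash so later tampering is detectable."""
    record = {
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "handler": handler,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    # Canonical JSON (sorted keys) makes the hash reproducible.
    payload = json.dumps(record, sort_keys=True).encode()
    record["sha256"] = hashlib.sha256(payload).hexdigest()
    return record

snap = snapshot_decision(
    model_version="CreditEligibilityAI v2.2",   # illustrative model name
    inputs={"income_verified": False, "late_payments_6m": 2},
    output="Denied",
    handler="case-officer-17",
)
print(snap["output"], snap["sha256"][:12])
```

Freezing the record and hashing it means the later explanation can be checked against what the system actually saw, which is what makes the explanation stand up in front of a regulator.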

A workflow that breaks under real traffic, or loses a case in transition, only guarantees more work-usually the crisis kind.

Table: Built-for-complexity, audit-ready workflow snapshot

Step | What Happens | ISO 42001 Reference
Intake | Log each user request, confirm | 8.1, 7.4
Log | Record outcome, input, factors, staff | A.5.2, 10.1
Draft | Tailor explanation to context | A.8.2, 7.3
Review | Legal, logic, and rights check | A.8.4, 10.2
Deliver | Send to user, confirm reading | 8.2, 10.2
Closure | Case marked, outcome tracked | 9.1, 10.2

What’s the real risk if your Article 86 process is just surface-level, even with an ISO 42001 badge?

Certifications lull executives into thinking all liability is managed; in truth, a weak explanation process is a sandbag in a flood. If explanations are slow, patchy, unclear, or missing-especially when a user or investigator is looking-you’re left exposed:

  • Regulatory blowback: The AI Act brings GDPR-tough fines; late, bungled, or missing explanations mean seven-figure risk and, if systemic, court-ordered AI suspension.
  • Public trust goes to zero: User stories go viral fast-media loves a stonewalled patient, denied applicant, or job seeker. Salvaging trust is infinitely harder than protecting it.
  • Audit deadlock: Auditors or partners may freeze deals or require endless remediation-not because your intentions are bad, but because your logs have gaps and your explanations don’t stick.
  • Internal friction: The less traceable your system, the more crisis cycles build-good teams quit when forced to fight fires with broken hoses.

Surface-level compliance is functionally an invitation for rigorous, possibly very expensive, investigation.

Weak explanation = acute exposure to…

  • Penalties, suspensions, or forced system changes
  • High-profile complaints escalating quickly
  • Broken partnerships and extended buying cycles
  • Escalating cost from recurring internal errors and repairs

Which steps actually future-proof your “right to explanation” function?

The difference between teams who fly through audits and those who burn: the first set up for resilience, not just regulatory minimums. These teams:

  • Track every metric and outcome (Clause 9.1): Find slowdowns, spikes, and errors before they become headlines.
  • Automate root-cause to fix (Clause 10.2): Every glitch, delay, or user complaint drives system improvement that’s written down and validated.
  • Test the system, not just the process: Run internal “mystery user” and red-team challenges-simulate outages, spikes, and hostile reviews.
  • Live-version all artefacts: Every template and decision log is dated; nothing is static or left to rot.
  • Balance automation and human judgement: Use bots for recordkeeping and template fill, but always allow the hard cases a path to a real expert.
  • Implement platforms like ISMS.online: Centralise and manage all artefacts, logs, dashboards, and improvement actions in one place.
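Live-versioning artefacts, from the list above, comes down to an append-only history where nothing is overwritten and the current version is always resolvable. A minimal sketch under assumptions: the class and method names (`TemplateRegistry`, `publish`, `current`) are illustrative, not features of any particular platform.

```python
from datetime import date

class TemplateRegistry:
    """Append-only store of dated template versions; nothing is overwritten."""

    def __init__(self):
        self._versions = {}  # template name -> list of (version, date, text)

    def publish(self, name: str, text: str, on: date) -> int:
        history = self._versions.setdefault(name, [])
        version = len(history) + 1
        history.append((version, on.isoformat(), text))  # append, never replace
        return version

    def current(self, name: str):
        """Latest (version, date, text) for a template."""
        return self._versions[name][-1]

    def history(self, name: str):
        """Full dated history, for audit review."""
        return list(self._versions[name])

reg = TemplateRegistry()
reg.publish("loan-denial", "Dear {user}, your application was declined...",
            date(2024, 6, 1))
v = reg.publish("loan-denial", "Dear {user}, we declined your application "
                               "because...", date(2024, 6, 20))
print(v, reg.current("loan-denial")[1])
```

Because every explanation ever sent can be matched to the template version in force on that date, a reviewer can verify not just what was said, but that it followed the approved wording of the time.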

Winning organisations turn the agony of compliance into a demonstration of operational strength-laying down traceable proof, automating basics, and arming staff against surprises.

To lock in Article 86 compliance, couple documented, tested processes with full-spectrum visibility and a platform like ISMS.online-so every user’s request is handled, explained, and proved, no matter when the question is asked.

An organisation that can demonstrate, on any day, how it explains, audits, and improves its high-risk AI work isn’t just less exposed-it earns the trust that turns regulation into a competitive weapon.



Mark Sharron

Mark Sharron leads Search & Generative AI Strategy at ISMS.online. His focus is communicating how ISO 27001, ISO 42001 and SOC 2 work in practice - tying risk to controls, policies and evidence with audit-ready traceability. Mark partners with product and customer teams so this logic is embedded in workflows and web content - helping organisations understand, prove security, privacy and AI governance with confidence.
