What does control A.1.3.11 require?
The organisation shall identify obligations, including legal obligations, to the PII principals resulting from decisions made by the organisation which are related to the PII principal based solely on automated processing of PII, and be able to demonstrate how it addresses these obligations.
This control falls within the Obligations to PII principals objective (A.1.3), which ensures that organisations provide appropriate transparency and rights to individuals whose data they process. Automated decision making is an area of increasing regulatory and public concern, and this control requires organisations to proactively identify where such decisions occur and implement meaningful safeguards.
What does the implementation guidance say?
Annex B (section B.1.3.11) provides the following guidance:
- Identify automated decisions — Determine where decisions are made based solely on automated processing that produce legal effects or similarly significant effects on PII principals (e.g. credit scoring, automated recruitment screening, insurance pricing)
- Provide meaningful information — Give PII principals meaningful information about the logic involved in the automated decision, the significance of the processing and the envisaged consequences for the individual
- Implement safeguards — Put in place measures including the right to obtain human intervention from a qualified person, the ability for the PII principal to express their point of view, and the ability for the PII principal to contest the decision
- Document and demonstrate — Record which processing activities involve solely automated decision making and what safeguards are in place for each
- See also A.1.3.2 Obligations to PII principals and A.1.3.6 Object to PII processing for related requirements
The emphasis is on ensuring that individuals are not subject to consequential decisions made entirely by machines without recourse. This aligns closely with the transparency obligations in A.1.3.3 and A.1.3.4.
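The documentation step above can be sketched as a simple register entry. This is an illustrative data model only; the field names and the `safeguards_complete` check are assumptions, not prescribed by the standard, which requires the three safeguards (human intervention, expressing a view, contesting the decision) but not any particular record format.

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecisionRecord:
    """One entry in an automated-decision register (illustrative fields only)."""
    activity: str                    # e.g. "credit scoring"
    decision_effect: str             # legal or similarly significant effect produced
    logic_summary: str               # plain-language description of the logic involved
    reviewer_role: str               # qualified person authorised to intervene
    principal_can_express_view: bool # PII principal can express their point of view
    principal_can_contest: bool      # PII principal can contest the decision

    def safeguards_complete(self) -> bool:
        """True only when all three B.1.3.11 safeguards are documented."""
        return (bool(self.reviewer_role)
                and self.principal_can_express_view
                and self.principal_can_contest)

record = AutomatedDecisionRecord(
    activity="credit scoring",
    decision_effect="loan application refusal",
    logic_summary="Score derived from income, repayment history and existing debt.",
    reviewer_role="Senior credit officer",
    principal_can_express_view=True,
    principal_can_contest=True,
)
print(record.safeguards_complete())  # True
```

A register built from entries like this makes it straightforward to demonstrate, per activity, how each obligation is addressed.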
How does this map to GDPR?
Control A.1.3.11 maps to several GDPR provisions:
- Articles 13(2)(f) and 14(2)(g) — Require organisations to inform data subjects about the existence of automated decision making, including profiling, and provide meaningful information about the logic involved, the significance and the envisaged consequences
- Article 22(1) — Gives data subjects the right not to be subject to a decision based solely on automated processing which produces legal effects or similarly significantly affects them
- Article 22(3) — Requires the data controller to implement suitable measures to safeguard the data subject’s rights, including the right to obtain human intervention, express their point of view and contest the decision
For the full GDPR-to-ISO 27701 mapping, see GDPR Compliance Guide.
How does this relate to ISO 29100 privacy principles?
This control supports the ISO 29100 principle of Purpose legitimacy and specification. Automated decisions must be made within the scope of the originally specified and legitimate purposes. Where automated processing is used to make consequential decisions, the organisation must demonstrate that this use is consistent with the purposes communicated to PII principals and that appropriate safeguards are in place.
What evidence do auditors expect?
When assessing compliance with A.1.3.11, auditors will typically look for:
- Automated decision inventory — A register of all processing activities that involve solely automated decision making, including the types of decisions made and their effects on PII principals
- Logic documentation — Meaningful descriptions of the algorithms, models or rules used to make decisions, written in terms that a non-technical person can understand
- Safeguard procedures — Documented processes for human intervention, including who is authorised to review automated decisions, how PII principals can request a review and the timescales for response
- Privacy notices — Evidence that PII principals are informed about automated decision making before it takes place, including through privacy notices or specific notifications
- Records of challenges — Logs of any requests for human review, the outcomes and how the organisation responded
- Impact assessments — Privacy impact assessments or data protection impact assessments covering the automated decision making systems
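As a minimal sketch of how an organisation might track readiness against the evidence list above, the following compares collected evidence with the six expected categories. The category names are assumptions chosen to mirror the bullets, not labels defined by the standard or by auditors.

```python
# Evidence categories auditors typically expect for A.1.3.11
# (names are illustrative, mirroring the list above)
REQUIRED_EVIDENCE = {
    "automated_decision_inventory",
    "logic_documentation",
    "safeguard_procedures",
    "privacy_notices",
    "challenge_records",
    "impact_assessments",
}

def missing_evidence(collected: set) -> set:
    """Return the evidence categories not yet gathered for an audit."""
    return REQUIRED_EVIDENCE - collected

gaps = missing_evidence({"automated_decision_inventory",
                         "privacy_notices",
                         "impact_assessments"})
print(sorted(gaps))
```

Running the gap check before an audit highlights which documentation still needs to be produced.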
What are the related controls?
| Control | Relationship |
|---|---|
| A.1.3.3 Determining information for PII principals | Transparency requirements that include informing individuals about automated decision making |
| A.1.3.4 Providing information to PII principals | The mechanism for communicating automated decision making information |
| A.1.2.2 Identify and document purpose | Automated decision making must be within documented purposes |
| A.1.2.3 Identify lawful basis | A valid lawful basis is required for the automated processing |
| A.1.4.3 Limit processing | Automated processing must remain proportionate to the identified purpose |
| A.1.4.4 Accuracy and quality | Input data accuracy is critical for fair automated decisions |
What changed from ISO 27701:2019?
In the 2019 edition, this requirement was covered under Clause 7.3.10 (automated decision making). The 2025 edition has separated automated decision making into its own dedicated control (A.1.3.11), reflecting the growing regulatory emphasis on algorithmic transparency and accountability. The substance of the requirement is similar, but the explicit call-out of human intervention rights, the ability to express views and the ability to contest decisions is more prominent. See the Annex F correspondence table for the full mapping, and the Transition from 2019 to 2025 for a step-by-step approach.
Why choose ISMS.online for managing automated decision making compliance?
ISMS.online provides the tools you need to govern automated decision making within your privacy management system:
- Automated decision register — Catalogue every automated decision making process, the data it uses, the decisions it produces and the safeguards in place
- Impact assessment workflows — Run privacy impact assessments for automated decision making systems with built-in templates and approval workflows
- Human review tracking — Log and track requests for human intervention, ensuring responses are timely and documented
- Policy management — Maintain version-controlled policies on algorithmic accountability, linked to your processing records
- Cross-control mapping — See how automated decision making requirements connect to transparency, lawful basis and data quality controls in a single view
- Audit-ready evidence packs — Export complete documentation of your automated decision making governance for certification audits
FAQs
Does this control apply to all automated processing?
No. A.1.3.11 specifically targets decisions based solely on automated processing that produce legal effects or similarly significant effects on PII principals. Automated processing that supports but does not solely determine a decision (for example, a system that flags applications for human review) is less likely to fall within scope, though transparency obligations still apply.
What counts as a “similarly significant effect”?
Beyond legal effects (such as denial of a credit application), similarly significant effects include decisions that substantially affect someone’s circumstances, behaviour or choices. Examples include automated rejection from a job application, insurance premium calculations, or denial of access to services. The threshold is whether the decision has a meaningful impact on the individual’s life.
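The two-part scope test from the answers above (solely automated, plus a legal or similarly significant effect) can be expressed as a small sketch. The enum values and function name are illustrative assumptions, not terminology from the standard.

```python
from enum import Enum

class DecisionEffect(Enum):
    NONE = "no material effect"
    LEGAL = "legal effect"                                   # e.g. credit denial
    SIMILARLY_SIGNIFICANT = "similarly significant effect"   # e.g. automated job rejection

def control_applies(solely_automated: bool, effect: DecisionEffect) -> bool:
    """A.1.3.11 applies only when the decision is based solely on
    automated processing AND produces a legal or similarly
    significant effect on the PII principal."""
    return solely_automated and effect is not DecisionEffect.NONE

# Automated recruitment screening that rejects candidates outright: in scope
print(control_applies(True, DecisionEffect.SIMILARLY_SIGNIFICANT))  # True
# A system that merely flags applications for human review: out of scope
print(control_applies(False, DecisionEffect.SIMILARLY_SIGNIFICANT))  # False
```

Borderline cases still warrant a documented assessment, since "similarly significant" turns on the decision's real-world impact rather than a mechanical rule.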
How should organisations explain algorithmic logic to individuals?
The standard requires “meaningful information about the logic involved” rather than a full technical disclosure of the algorithm. In practice, this means explaining what data is used, the general factors considered, how they influence the outcome and what the possible consequences are. The explanation should be in plain language that the PII principal can reasonably understand.