How does ISO 27701:2025 apply to AI systems that process personal data?
Artificial intelligence systems frequently process large volumes of personally identifiable information (PII), whether for training machine learning models, making automated decisions about individuals, or analysing behavioural patterns. ISO 27701:2025 does not contain AI-specific controls, but its privacy framework applies directly to any system that processes PII, including AI.
The following Annex A controls are particularly relevant to AI privacy:
| Control | Title | AI relevance |
|---|---|---|
| A.1.2.2 | Identify and document purpose | Requires clear documentation of why PII is being collected for AI training datasets, model inputs and outputs |
| A.1.2.3 | Identify lawful basis | AI processing must have a valid lawful basis, which is particularly important when scraping or repurposing data for model training |
| A.1.3.11 | Automated decision making | Directly addresses AI-driven decisions about individuals, requiring safeguards, transparency and the ability to contest outcomes |
| A.1.4.2 | Limit collection | Data minimisation for training sets: collect only the PII genuinely needed for the AI purpose |
| A.1.4.5 | PII minimisation objectives | Requires measurable objectives for minimising PII, directly applicable to reducing unnecessary personal data in training datasets |
| A.1.4.6 | De-identification and deletion | Anonymisation and pseudonymisation techniques for AI training data to reduce privacy risk |
| A.3.27 | Secure development life cycle | Privacy by design in AI system development, ensuring privacy considerations are embedded from the start |
Automated decision making under control A.1.3.11
Control A.1.3.11 (automated decision making) is one of the most significant controls for AI governance. It requires organisations to identify processing activities that involve automated decision making, ensure PII principals are informed when decisions affecting them are made by automated systems, and provide mechanisms for individuals to contest those decisions. This aligns closely with GDPR Article 22 and positions ISO 27701:2025 as a practical framework for managing AI-related privacy obligations.
Data minimisation for AI training sets
One of the greatest privacy challenges in AI is the tendency to collect and retain vast quantities of personal data for model training. Controls A.1.4.2 (limit collection) and A.1.4.5 (PII minimisation objectives) require organisations to collect only the PII genuinely needed for the documented purpose and to set measurable objectives for reducing PII use. In practice, this means evaluating whether synthetic data, anonymised datasets or federated learning approaches can achieve the same outcome with less privacy risk.
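The collection-limitation idea can be sketched in a few lines. This is illustrative only: the field names and the allow-list are assumptions, not something defined by the standard.

```python
# Sketch of A.1.4.2-style collection limitation for an AI training set.
# The allow-list would come from the documented purpose (A.1.2.2); these
# field names are illustrative assumptions.

ALLOWED_FIELDS = {"age_band", "region", "purchase_category"}

def minimise_record(record: dict) -> dict:
    """Drop every field that is not on the documented allow-list."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "Alice Example",       # direct identifier, not needed by the model
    "email": "alice@example.com",  # direct identifier, not needed by the model
    "age_band": "30-39",
    "region": "UK-South",
    "purchase_category": "books",
}

training_row = minimise_record(raw)
print(training_row)  # direct identifiers are gone before data reaches the pipeline
```

The useful property is that minimisation happens before ingestion, so the training pipeline never holds PII it cannot justify.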
Control A.1.4.6 (de-identification and deletion) is equally important. Organisations should implement pseudonymisation or anonymisation techniques for training data wherever possible, and delete source PII once the model has been trained if it is no longer required for its original purpose.
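One common pseudonymisation technique is keyed hashing of direct identifiers. A minimal sketch, assuming the pseudonymisation key is held in a secrets manager rather than hard-coded as it is here:

```python
import hashlib
import hmac

# Sketch of keyed pseudonymisation for training-data identifiers (A.1.4.6).
# In practice the key lives in a secrets manager; this value is illustrative.
PSEUDONYM_KEY = b"replace-with-managed-secret"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed pseudonym.

    Records remain joinable across the dataset, but re-identification
    requires the key, so destroying the key effectively de-identifies
    the retained data.
    """
    return hmac.new(PSEUDONYM_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

token = pseudonymise("alice@example.com")
assert token == pseudonymise("alice@example.com")  # stable: records still join
assert token != pseudonymise("bob@example.com")    # distinct subjects stay distinct
```

Keyed hashing (rather than a plain unsalted hash) matters because identifiers such as email addresses are guessable, and an attacker with a plain hash table could reverse them by brute force.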
Organisations using AI systems should also consider ISO 42001, the dedicated AI management system standard. While ISO 27701:2025 addresses the privacy aspects of AI processing, ISO 42001 provides a broader governance framework for responsible AI development and deployment. The two standards complement each other effectively.
What controls address IoT device privacy?
The Internet of Things introduces unique privacy challenges. Connected devices such as health wearables, smart home assistants, industrial sensors and vehicle telematics systems collect PII continuously, often from environments where individuals may not be fully aware of the data collection taking place.
Key privacy risks with IoT devices include:
- Ambient collection — Devices may capture PII from bystanders or household members who have not consented
- Data volume — Continuous sensor data can create detailed profiles of behaviour, location and health status
- Limited interfaces — Many IoT devices have no screen or user interface, making it difficult to provide privacy notices or obtain consent
- Transmission risks — Data transmitted from devices to cloud services may traverse insecure networks
- Retention — The always-on nature of IoT devices can lead to indefinite data retention if not properly managed
Unlike traditional IT systems, IoT devices often operate in uncontrolled environments where physical access cannot be restricted, software updates are difficult to deploy, and the sheer number of devices creates a large attack surface. A compromised IoT device can expose PII at scale with minimal indication to the data subjects affected.
ISO 27701:2025 addresses these risks through several controls:
| Control | Title | IoT application |
|---|---|---|
| A.3.22 | User endpoint devices | Security requirements for IoT devices as endpoints, including configuration, access control and lifecycle management |
| A.3.26 | Use of cryptography | Encryption of PII in transit from IoT devices and at rest on device storage |
| A.1.4.10 | PII transmission controls | Securing data transmitted between IoT devices and processing infrastructure |
| A.1.4.2 | Limit collection | Configuring IoT devices to collect only the PII necessary for their stated purpose |
| A.1.4.7 | Temporary files | Managing cached or buffered PII on device storage |
| A.3.19 | Clear desk and clear screen | Display devices and kiosks that may show PII in shared or public spaces |
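The transmission controls in the table translate into concrete connection policy on the device side. A minimal sketch of what "secure by default" can mean for a device-to-cloud link, using Python's standard `ssl` module (endpoint names and version floors are deployment choices, not requirements from the standard):

```python
import ssl

# Sketch of A.1.4.10-style transmission controls for a device-to-cloud link:
# validate the server certificate and refuse legacy protocol versions before
# any PII leaves the device.
def build_device_tls_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # enables certificate validation
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject downgrades to old TLS/SSL
    ctx.check_hostname = True                     # reject mismatched endpoints
    return ctx

ctx = build_device_tls_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED  # unauthenticated servers are refused
```

Centralising this in one factory function makes the policy auditable: an assessor can check a single place to confirm that no code path opens an unvalidated connection.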
How does ISO 27701:2025 address biometric data privacy?
Biometric data, including fingerprints, facial recognition templates, iris scans, voiceprints and behavioural biometrics, represents some of the most sensitive PII an organisation can process. Under GDPR, biometric data processed for identification purposes is classified as special category data under Article 9, requiring additional safeguards.
ISO 27701:2025 does not have a dedicated “biometric data” control, but the framework provides comprehensive coverage through controls that address each stage of biometric data processing:
Collection and consent
Biometric data collection requires particularly careful attention to purpose documentation and consent. Control A.1.2.2 Identify and Document Purpose requires organisations to clearly document why biometric data is being collected. Controls A.1.2.4 Determine Consent and A.1.2.5 Obtain and Record Consent establish requirements for determining when consent is needed and recording it appropriately. For biometric data, explicit consent is almost always required.
Minimisation and storage
The data minimisation controls are critical for biometric data. Control A.1.4.2 (limit collection) ensures organisations collect only the biometric data genuinely needed. Control A.1.4.5 (PII minimisation objectives) requires setting specific objectives for reducing biometric data where possible, for example storing derived templates rather than raw biometric images.
De-identification and security
Control A.1.4.6 (de-identification and deletion) covers de-identification techniques that are particularly relevant to biometric data. Organisations should consider whether biometric templates can be used instead of raw data, whether one-way hashing can replace stored biometric images, and when biometric data should be permanently deleted. Control A.3.26 (use of cryptography) addresses the encryption of biometric data both in transit and at rest, which is essential given the sensitivity of this data type.
Biometric data and GDPR
For organisations subject to GDPR, Annex D maps ISO 27701 controls to GDPR articles, including Article 9 on special categories of data. This mapping provides a structured approach to demonstrating that biometric data processing complies with European data protection requirements.
Common biometric data scenarios
Organisations should consider how ISO 27701:2025 applies to their specific biometric use cases:
- Employee access control — Fingerprint or facial recognition for building entry requires purpose documentation (A.1.2.2 Identify and Document Purpose), a lawful basis (A.1.2.3 Identify Lawful Basis, often legitimate interest or contract), and encryption of stored templates (A.3.26 Use of Cryptography)
- Customer authentication — Voice or face recognition for banking or payment services requires explicit consent (A.1.2.4 Determine Consent, A.1.2.5 Obtain and Record Consent), privacy impact assessment (A.1.2.6 Privacy Impact Assessment) and robust de-identification measures (A.1.4.6 De-identification and Deletion)
- Healthcare biometrics — Retinal scans, gait analysis or physiological monitoring in clinical settings requires strict collection limitation (A.1.4.2 Limit Collection), secure transmission (A.1.4.10 PII Transmission Controls) and clear retention policies (A.1.4.9 Disposal)
- Law enforcement cooperation — Organisations that receive requests for biometric data from authorities must have processes under A.1.5.x to manage such requests lawfully
Practical implementation across AI, IoT and biometrics
While the specific privacy risks differ across AI, IoT and biometric technologies, the ISO 27701:2025 framework provides a consistent approach to managing them. Organisations should:
- Map processing activities — Document every activity where AI, IoT or biometric systems process PII, including the purpose, lawful basis, data categories and recipients
- Conduct privacy impact assessments — Control A.1.2.6 requires a privacy impact assessment for processing activities that present elevated risk; AI, IoT and biometric processing will almost always qualify
- Apply privacy by design — Use control A.3.27 Secure Development Life Cycle to embed privacy considerations into the design and development of AI models, IoT architectures and biometric systems from the outset
- Establish data subject rights processes — The A.1.3.x controls ensure individuals can access, correct and delete their data, which presents particular technical challenges for data embedded in AI training sets or distributed across IoT networks. Consider how you will honour erasure requests when PII has been used to train a model, or how you will provide access to data collected passively by IoT sensors
- Implement technical safeguards — The A.3.x shared controls provide a baseline of security measures including cryptography, access control, logging and incident management
- Monitor and review — Emerging technologies evolve rapidly. Use the Clause 9 performance evaluation requirements to regularly review whether your privacy controls remain effective as AI models are retrained, IoT device firmware is updated, or biometric systems are upgraded
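The erasure challenge noted for data subject rights implies some form of lineage tracking: knowing which subjects' data contributed to which model versions. A minimal sketch of that idea (model names, subject identifiers and the in-memory structure are all illustrative assumptions):

```python
# Sketch of training-data lineage for honouring erasure requests: record which
# pseudonymised subjects contributed to each model version, so a request can
# identify the models that must be retrained or suppressed.
model_lineage: dict[str, set[str]] = {
    "credit-model-v3": {"subj-001", "subj-002"},
    "credit-model-v4": {"subj-002", "subj-003"},
}

def models_affected_by_erasure(subject_id: str) -> list[str]:
    """List every model version whose training set included this subject."""
    return sorted(m for m, subjects in model_lineage.items() if subject_id in subjects)

print(models_affected_by_erasure("subj-002"))  # both versions need review
```

Whether erasure then requires retraining, machine unlearning or suppression of the model is a legal and technical judgement; the lineage record is what makes that judgement possible at all.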
Building a cross-technology privacy register
Organisations that process PII through multiple emerging technologies should maintain a unified processing activity register that captures each technology type, the PII categories processed, applicable controls and risk ratings. This provides a single view of privacy obligations across AI, IoT and biometric systems, making it easier to identify gaps and demonstrate compliance during audits.
The register should document: the specific technology and processing activity; the categories of PII involved; the lawful basis and purpose; applicable Annex A controls; the privacy impact assessment status; and any residual risks after control implementation. ISMS.online provides a centralised register that links processing activities directly to the relevant controls, risks and evidence.
This approach is especially valuable during certification audits, where auditors expect to see a clear, documented relationship between processing activities, risk assessments and the controls applied to manage those risks. A unified register demonstrates that your organisation manages emerging technology privacy risks systematically rather than in isolation.
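The register fields listed above map naturally onto a simple record type. A sketch of one possible shape (the field names mirror the items described in this section and are not a schema defined by ISO 27701):

```python
from dataclasses import dataclass, field

# Illustrative shape for a unified processing-activity register entry.
@dataclass
class ProcessingActivity:
    technology: str            # "AI", "IoT" or "Biometric"
    activity: str              # what the processing does
    pii_categories: list[str]  # categories of PII involved
    lawful_basis: str
    purpose: str
    annex_a_controls: list[str] = field(default_factory=list)
    pia_completed: bool = False
    residual_risks: list[str] = field(default_factory=list)

register = [
    ProcessingActivity(
        technology="Biometric",
        activity="Facial recognition for building entry",
        pii_categories=["facial templates", "employee ID"],
        lawful_basis="legitimate interest",
        purpose="Site access control",
        annex_a_controls=["A.1.2.2", "A.1.2.3", "A.3.26"],
        pia_completed=True,
    ),
]

# The kind of gap check an auditor looks for: activities without a completed PIA
gaps = [a.activity for a in register if not a.pia_completed]
assert gaps == []
```

Keeping the register in a structured form, rather than a free-text document, is what makes gap queries like the one above trivial to run before an audit.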
Why choose ISMS.online for managing AI, IoT and biometrics privacy?
ISMS.online provides the tools and structure to manage privacy across complex processing environments:
- Processing activity register — Document and maintain a complete inventory of AI, IoT and biometric processing activities with purpose, lawful basis and risk ratings
- Privacy impact assessment workflows — Structured PIA templates that guide you through the assessment process for high-risk processing activities
- Control mapping — See exactly which Annex A controls apply to your AI, IoT and biometric processing and track implementation status
- Integrated risk management — Assess and treat privacy risks specific to emerging technologies within a single, centralised risk register
- Multi-framework support — Manage ISO 27701:2025 alongside ISO 42001 (AI management) and ISO 27001 (information security) in one platform, with shared controls and evidence
FAQs
Does ISO 27701:2025 cover AI specifically?
ISO 27701:2025 is a technology-neutral standard that applies to all forms of PII processing, including AI. While it does not contain AI-specific controls, its framework covers the key privacy concerns that arise from AI systems: purpose limitation, lawful basis, automated decision making, data minimisation and transparency. For AI-specific governance, organisations should also consider ISO 42001, which addresses responsible AI management. The two standards work well together, with ISO 27701 covering the privacy dimension and ISO 42001 covering broader AI governance.
How does ISO 27701 relate to ISO 42001?
ISO 27701:2025 and ISO 42001 are complementary standards. ISO 27701 provides a privacy information management system for protecting PII across all processing activities, while ISO 42001 provides a management system for the responsible development and use of AI. Organisations using AI to process personal data benefit from implementing both: ISO 42001 for overall AI governance (including ethical considerations, bias, transparency and risk) and ISO 27701 for the specific privacy obligations around how that AI handles personal data. ISMS.online supports both standards in a single platform.
Is biometric data treated as special category data under ISO 27701?
ISO 27701:2025 itself does not use the term “special category data” as that is GDPR-specific terminology. However, the standard recognises that certain types of PII carry higher risk and require additional safeguards. Biometric data falls into this category. When implementing ISO 27701, organisations processing biometric data should apply heightened controls for collection limitation, consent, de-identification and security. If you are also subject to GDPR, Annex D provides the mapping between ISO 27701 controls and GDPR articles, including Article 9 which governs special category data.
For a comprehensive framework on AI-specific privacy risks, see our AI Privacy Governance guide.
SaaS platforms processing PII face unique challenges — our guide for SaaS platforms covers these in detail.