
How an ISMS Can Help Developers Follow the NCSC’s New Secure AI Guidelines

The EU is undoubtedly the current global leader when it comes to AI regulation. However flawed its new AI Act may be, it represents a significant achievement. The UK is taking a more hands-off approach, despite signposting its ambition to be a responsible global actor by convening the AI Safety Summit in November last year. Soon after, its National Cyber Security Centre (NCSC) produced a new set of guidelines for secure AI system development.

Hailed by the US Cybersecurity and Infrastructure Security Agency (CISA) as a “key milestone”, the guidelines are a great first step in helping developers build security-by-design principles into their work. The even better news is that they can use existing best practices to help them.

What do the NCSC guidelines entail?

The new document is relevant to providers of any systems containing AI – whether they’ve been built from scratch or on top of existing services. The guidelines have also been endorsed by Amazon, Google, Microsoft and OpenAI, as well as multiple other countries including all of the G7, plus Chile, Czechia, Estonia, Israel, Nigeria, Norway, Poland, Singapore and South Korea. They are split into four sections:

Secure design

Considerations at the first stage of AI development include:

  • Raise awareness of threats and risks among data scientists, developers, senior leaders and system owners. Developers will need to be trained in secure coding techniques and secure and responsible AI practices
  • Apply a “holistic process” to assess threats to the AI system, and understand the potential impact to the system and users/society if the AI is compromised or behaves “unexpectedly”
  • Design the system for security as well as functionality and performance. This will require supply chain risk management, and the integration of AI software system development into existing secure practices
  • Understand security trade-offs as well as benefits when choosing an AI model. Choice of model architecture, configuration, training data, training algorithm and hyperparameters should be made according to the organisation’s threat model and regularly reassessed

Secure development

At the development stage, consider:

  • Assessment and monitoring of supply chain security across the AI system’s lifecycle. Suppliers should adhere to the same standards as the organisation applies to other software
  • Identifying, tracking and protecting AI-related assets, including models, data, prompts, software, documentation, logs and assessments (see the sketch after this list)
  • Documenting the creation, operation, and lifecycle management of models, datasets and meta prompts
  • Managing and tracking technical debt through the AI model lifecycle
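
To make the asset-tracking point concrete, here is a minimal sketch of an AI asset inventory in Python. It assumes a simple in-process registry; the field names and the `register` helper are illustrative, not taken from the NCSC guidelines.

```python
# A minimal sketch of an AI asset inventory; the registry, field names and
# helper functions are illustrative assumptions, not from the NCSC text.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAsset:
    name: str        # e.g. "sentiment-model-v3" or "training-set-2024q1"
    kind: str        # "model", "dataset", "prompt", "log", ...
    owner: str       # accountable team or individual
    sha256: str      # integrity fingerprint of the artefact
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def fingerprint(path: str) -> str:
    """Hash an artefact so later tampering or drift is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

registry: dict[str, AIAsset] = {}

def register(name: str, kind: str, owner: str, path: str) -> AIAsset:
    """Record an asset and its integrity hash in the inventory."""
    asset = AIAsset(name, kind, owner, fingerprint(path))
    registry[name] = asset
    return asset
```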

Secure deployment

At the deployment stage, look at:

  • Securing infrastructure according to best practice principles, such as access controls for APIs, models and data, and segregation of environments holding sensitive code
  • Continuous best practice protection of the model from direct and indirect access
  • Developing incident management procedures
  • Releasing models, applications or systems only after security evaluation such as red team exercises
  • Implementing secure configuration by default, so it’s easier for users to do the right things (a brief sketch follows this list)
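
As a rough illustration of “secure by default”, the hypothetical Python settings object below (the field names are ours, not the NCSC’s) defaults every security-relevant option to the safe choice, so users must consciously opt out of protection rather than opt in:

```python
# A minimal sketch of secure-by-default deployment settings; all names
# here are illustrative assumptions, not a real framework's API.
import hmac
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentConfig:
    require_auth: bool = True            # API callers authenticate unless explicitly disabled
    allow_model_download: bool = False   # direct access to model weights is off by default
    log_requests: bool = True            # audit trail is on by default
    max_requests_per_minute: int = 60    # conservative rate limit out of the box

def authorised(presented_key: str, expected_key: str) -> bool:
    """Constant-time key comparison, avoiding timing side channels."""
    return hmac.compare_digest(presented_key.encode(), expected_key.encode())

config = DeploymentConfig()  # safe without any tuning
```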

Secure operation and maintenance

At the operational stage, the NCSC suggests that organisations:

  • Monitor the system’s behaviour, measuring outputs and performance so that sudden and gradual changes affecting security can be spotted
  • Monitor the system’s inputs, such as inference requests and prompts, to support compliance obligations, audit, investigation and remediation
  • Follow a secure-by-design approach to updates, including automated updates by default and well-tested release procedures
  • Collect and share lessons learned, for example through information-sharing communities and responsible disclosure processes

How an ISMS can help

An information security management system (ISMS) can go a long way to ensuring an organisation’s AI systems and usage are secure, resilient and trustworthy, according to ISMS.online CTO Sam Peters. He argues that ISO 27001 compliance delivers a “scalable, top-down information security culture” built on “risk-driven, process-based security”, which can help developers looking to follow the NCSC guidelines.

“The beauty of ISO 27001 is that it frames infosec as an organisational governance issue,” Peters tells ISMS.online.

“By taking this governance approach for AI security strategy, organisations can be confident that they can scale security sustainably instead of just playing catch up. Teams also have clarity on baseline expectations. Ultimately, this reduces risk, even as AI systems and usage get exponentially more complex.”

Peters sees seven key areas of crossover between ISO 27001 and the NCSC guidelines:

Risk assessments:

ISO 27001 mandates regular infosec risk assessments, which can help uncover vulnerabilities, threats and attack vectors in AI systems.
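
In practice, those assessments usually score likelihood against impact. As a rough sketch (the scales and example threats below are our own, not mandated by the standard), an AI-specific risk register might start like this:

```python
# A minimal sketch of likelihood x impact risk scoring; the scales and
# example threats are illustrative, not prescribed by ISO 27001.
from dataclasses import dataclass

@dataclass
class Risk:
    threat: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    impact: int      # 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("Training data poisoning", likelihood=2, impact=5),
    Risk("Prompt injection via user input", likelihood=4, impact=3),
    Risk("Theft of model artefacts", likelihood=2, impact=4),
]

# Treat the highest-scoring risks first, as a risk assessment would direct.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.threat}: {r.score}")
```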

Policies and procedures:

An ISMS requires comprehensive policies and processes for managing security. These could be tailored for AI systems and aligned to the NCSC guidelines.

Access controls:

Role-based access controls and privilege management are required by ISO 27001 and could also help restrict access to sensitive AI assets like datasets and models.
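
As a minimal sketch of what that could look like for AI assets (the roles and permissions below are hypothetical), a static role-to-permission map is enough to enforce least privilege:

```python
# A minimal sketch of role-based access control over AI assets; the roles
# and permissions are hypothetical examples, not taken from ISO 27001.
ROLE_PERMISSIONS: dict[str, set[str]] = {
    "data-scientist": {"read:dataset", "read:model"},
    "ml-engineer":    {"read:dataset", "read:model", "write:model"},
    "auditor":        {"read:logs"},
}

def can(role: str, action: str, asset_kind: str) -> bool:
    """True if the role may perform the action on that kind of asset."""
    return f"{action}:{asset_kind}" in ROLE_PERMISSIONS.get(role, set())

assert can("ml-engineer", "write", "model")
assert not can("data-scientist", "write", "model")  # least privilege holds
```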

Supplier management:

ISO 27001 requirements for audits and contractual agreements can help manage risks in third-party relationships with AI vendors.

Incident management:

The ISO standard also features incident management requirements, which organisations can use to respond to security incidents affecting AI systems.

Security monitoring:

The logging, monitoring and alerting controls required for ISO 27001 compliance can help organisations detect anomalous AI system behaviour and respond to incidents.
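
One simple way to put that into practice is a rolling statistical baseline: log a behavioural metric from the AI system and alert when it drifts. The sketch below is illustrative; the metric, window size and threshold are assumptions rather than anything prescribed by ISO 27001.

```python
# A minimal sketch of alerting on anomalous AI system behaviour; the
# metric, window size and threshold are assumptions, not ISO 27001 rules.
import logging
from collections import deque
from statistics import mean, stdev

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("ai-monitor")

baseline: deque[float] = deque(maxlen=100)  # rolling window of recent values

def observe(value: float, threshold: float = 3.0) -> None:
    """Record a metric (e.g. refusal rate, latency) and alert on drift."""
    if len(baseline) >= 10:  # wait for a usable baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(value - mu) > threshold * sigma:
            log.warning("Anomalous reading %.3f (baseline %.3f ± %.3f)", value, mu, sigma)
    baseline.append(value)
```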

Security awareness and training:

ISO 27001 requirements in this area can be extended to ensure key stakeholders understand the unique security challenges of AI systems and stay informed about the latest threats and best practices.

The next steps

“As AI technologies evolve and become more embedded in everyday processes, the overlap between AI security and information security will likely grow in areas like data security, model robustness, explainability and confidentiality – which all directly build on the foundations of information security,” Peters concludes.

“This will require a comprehensive approach to security that considers both traditional information security principles and the unique challenges posed by AI technologies.”

Will the NCSC guidelines gain widespread adoption? Given that they’re voluntary, the jury is still out on that one. But for any organisation developing AI systems, they are highly recommended. Better to put in the time and effort now to architect secure-by-design systems than to risk a serious breach in the future. Such an incident could cost the organisation many times more to remediate, and more again in reputational damage.

ISMS.online now supports ISO 42001 – the world’s first AI management system standard.