NIST recently announced plans to refresh its Privacy Framework, making it a more organic and less static offering. Does this benefit practitioners? Dan Raywood looks at the reasons for the change.
Last year saw the introduction of the second version of the NIST Cybersecurity Framework, an update of its 2014 version to broaden the use of the framework, improve guidance on implementation and emphasise the importance of governance.
Naturally, cybersecurity and privacy go hand in hand, and 12 months on from the revised Cybersecurity Framework, NIST announced a two-month review period in April 2025 for the Privacy Framework to consider new additions and revisions.
Manage Privacy Risks
Last year saw the first announcement of the revisions for version 1.1, with a concept paper released in June 2024 ahead of the Initial Public Draft published this April.
NIST says that changes to the Privacy Framework are needed because of the relationship to its Cybersecurity Framework: the two frameworks have the same high-level structure to make them easy to use together.
Julie Chua, director of NIST’s Applied Cybersecurity Division, called the update “modest but significant.” She said: “The Privacy Framework can be used on its own to manage privacy risks, but we have also maintained its compatibility with Cybersecurity Framework 2.0 so that organisations can use them together to manage the full spectrum of privacy and cybersecurity risks.”
Minor Update
Meghan Anderson, a privacy risk strategist with the Privacy Engineering Program at NIST, explains that rather than a major overhaul, this is “a very light, minor update.”
Speaking to ISMS.online, Anderson says that the privacy framework “is a living tool that is meant to evolve to meet our stakeholders’ needs.” In the five years since the first Privacy Framework was published, those stakeholders have been able to identify areas where targeted improvements could be made, as well as relevant changes in technology.
She played down the importance of the revisions in version 1.1, calling them “just minor revisions or restructuring of the categories and subcategories.”
However, she did acknowledge that after five years of the original version, it was time for a change. “It was like ‘this is a milestone, let’s update it’,” she says.
Specifically, Anderson says that as the privacy framework was modelled after the cybersecurity framework, there needs to be a connection maintained between the two frameworks. “I think the one thing that’s really great about the privacy framework is it’s very flexible, so a lot of organisations or stakeholders that utilise the framework have the ability to mould it to what they need for their organisations, privacy outcomes and goals.”
New Elements
One of the most significant changes in this revision is the creation of an online version of the framework. More than simply putting it on the website, this means NIST can publish timely and relevant updates in response to user needs. Anderson says that section three, which hosts guidance on how to use the Privacy Framework, has been moved online.
“Our hope is that way, it’s a little bit more interactive, and we can update it a bit more frequently versus having it in a PDF document that stays still,” she says. “This way, we can provide that on the website more instantly versus in the PDF, which takes time to update, revise, and get republished.”
She also says that feedback on new trends – such as AI – was common, so additional guidelines on the relationship between AI and privacy risk management were added, and it is now a new section in the privacy framework’s initial public draft.
The initial public draft claims that the revised framework “can assist organisations with identifying and managing privacy risks that can arise from data processing within AI systems throughout the AI lifecycle.” This includes privacy risks that arise when AI systems are trained on data collected without individuals’ consent or have missing or inadequate privacy safeguards.
In some cases, AI technology “may be the key enabler of privacy risk” and may create privacy problems for individuals and groups. AI may impact “the privacy of individuals and groups, leading to significant organisational impacts, ranging from revenue losses to reputational harms.”
Therefore, organisations can use the new framework to “effectively manage AI privacy risks and ensure that organisational privacy values are reflected in the development and use of AI systems.”
Essential Evolution
It’s not a complete solution, but it’s definitely a step forward. What about from the practitioner’s perspective? Is this enough to meet modern challenges?
Speaking to ISMS.online, Tarun Samtani, advisory board member at the IAPP, says that the proposed revision “represents essential evolution” and praises its alignment with last year’s Cybersecurity Framework.
He said the revision “bridges critical operational gaps between security and privacy – a pain point I’ve witnessed repeatedly.”
In particular, Samtani claims that the current framework offers solid theory but struggles in practical application. Looking at the draft for v1.1, he praises the introduction of addressing emerging AI risks but says, “It lacks practical implementation pathways for resource-constrained organisations.”
Insufficient, Not Obsolete
From a practitioner’s perspective, does he feel this revision is needed, and was the 2020 version’s guidance especially outdated? He says it’s not obsolete, but it is increasingly insufficient. “Since 1.0’s 2020 release, we’ve seen explosive growth in the use of AI systems and automated decision-making, creating novel privacy risks,” he says.
“The proposed PFW 1.1 wisely incorporates emerging AI considerations while reflecting lessons from maturing regulatory regimes. This timely update acknowledges that privacy risk management now extends beyond traditional data processing into algorithmic transparency.”
On the other hand, Samtani did not offer unqualified praise for the proposed revisions, arguing that the draft needs clearer metrics beyond maturity tiers and more prescriptive approaches for smaller enterprises navigating today’s challenging data and AI landscape.
He says: “PFW 1.1’s structural improvements might address some usability concerns, but without structured implementation guidance, compliance might remain elusive, particularly for organisations lacking mature privacy programmes.”
He claims that the proposed structural alignment between PFW 1.1 and CSF 2.0 addresses operational friction he has witnessed while advising multinational organisations. He recommends three practical additions to improve practitioner usability:
- First, integrated implementation playbooks demonstrating simultaneous operationalisation.
- Second, standardised cross-framework metrics for consistent reporting.
- Third, technology-specific profiles for common scenarios like AI deployments.
“These enhancements would transform frameworks from reference documents into operational tools driving measurable privacy governance improvements,” he concludes. In response, Anderson says that all comments on the revision of version 1.1 are welcome.
Revising something like this comes with challenges, and the technological changes of the last five years make this revision necessary. From a compliance and adherence perspective, the new version should not prove too troublesome: the changes are relatively minor and should enable businesses to bridge the gap between the cybersecurity and privacy frameworks.
For many, these changes will be welcome, but smaller or less-resourced organisations may struggle without clearer implementation pathways. That’s where the shift to a more dynamic, online version of the framework could prove most valuable, allowing NIST to respond faster to emerging issues and offer more practical, evolving guidance. While the framework won’t meet everyone’s needs perfectly the first time, this move toward a living, responsive resource marks a meaningful step forward.