Organizations fret about security and privacy risk. And more recently, they’ve paid attention to AI risk. But how often do they think of all three in the same conversation?
Increasingly, it’s becoming clear that they should. Laws covering data protection, cybersecurity, and AI have quadrupled since 2016 across the U.S., EU, UK, and China.
The SEC has already shown it is serious about cybersecurity. Its cybersecurity rules, effective December 2023, are reshaping how public companies handle breach disclosure. Form 8-K Item 1.05 now requires companies to disclose material cybersecurity incidents within four business days of determining materiality, not of discovering the incident. Form 10-K Item 106 mandates annual disclosure of risk management processes and board oversight structures.
The Commission isn’t afraid to punish companies that it believes have downplayed security incidents. Just over a year ago, in October 2024, the SEC settled enforcement actions against four public companies (Unisys, Avaya, Check Point, and Mimecast) for misleading investors about the impact of the 2020 SolarWinds cyberattack. The combined penalties approached $7 million. Unisys alone paid $4 million for describing cyber risks as “hypothetical” in its filings while internal teams knew of actual intrusions.
Between December 2023 and January 2025, 55 cybersecurity incidents were reported via Form 8-K filings. Beyond the SolarWinds-related actions, Flagstar paid $3.55 million in December 2024 for describing a breach affecting 1.5 million people as mere “access” when data had actually been exfiltrated.
These penalties demonstrate the need to connect cybersecurity disclosure with broader enterprise risk management. The SEC’s formation of the Cyber and Emerging Technologies Unit (CETU) in February 2025, replacing the Crypto Assets and Cyber Unit, signals that this scrutiny will continue. CETU’s mandate also hints at the importance of factoring AI into these risks: it specifically covers both AI and cybersecurity practices.
Fragmented Governance Creates Compounding Exposure
American companies with European operations face additional pressure from the EU AI Act, which took effect in August 2024. The law, whose compliance deadlines are staggered through 2027, applies extraterritorially: U.S. businesses placing AI systems on the EU market, or deploying AI whose outputs affect EU users, must comply.
The stakes are substantial. Penalties for prohibited AI practices reach €35 million or 7 percent of global annual revenue, whichever is higher. High-risk categories, covering AI used for employment decisions, credit scoring, and healthcare diagnostics, require conformity assessments, technical documentation, and human oversight mechanisms. Prohibitions on unacceptable-risk AI systems took effect in February 2025.
AI Is Showing Up In Disclosure Documents
Investor expectations are shifting as these risks evolve. Regulators and shareholders are making it clear that the old model of separate teams managing cybersecurity, privacy, and AI as distinct domains no longer works.
AI has migrated from boardroom opportunity discussions to the risk factors section of annual reports with remarkable speed. Seventy-two percent of S&P 500 companies now disclose material AI risks, up from just 12 percent in 2023. The concerns they cite most frequently are reputational damage (38 percent of disclosing companies), cybersecurity implications, and regulatory uncertainty.
Board oversight has followed. According to ISS-Corporate, 31.6 percent of S&P 500 companies disclosed board oversight of AI in their 2024 proxy statements. That’s an 84 percent year-over-year increase.
Last year, Glass Lewis, a proxy advisory firm that advises institutional shareholders on how to vote, issued new benchmark guidelines directly addressing AI governance. Boards that fail to provide such oversight, and whose shareholders suffer material harm as a result, risk negative vote recommendations.
The trouble with managing cybersecurity, privacy, and AI separately is that incidents in any one of these domains bleed into the others. A single breach can simultaneously trigger SEC disclosure obligations, GDPR notification requirements, state privacy laws, and (if personal data trained an AI system) emerging AI regulations.
The time has come, then, to consider these risk areas together, but none of this is easy. According to the National Association of Corporate Directors’ July 2025 governance outlook, AI is now a routine topic for 61 percent of boards, yet few have properly integrated it into their governance structures.
Why? Cultural friction is one reason. Security, privacy, and AI teams have historically operated with different vocabularies, risk frameworks, and reporting structures.
Technology integration adds another layer of difficulty; siloed GRC tools create fragmented approaches to risk assessment, audit documentation, and evidence collection. Budget constraints force painful tradeoffs between building integrated infrastructure and meeting immediate compliance deadlines.
Standards Frameworks Offer A Path Forward
The good news: major standards bodies anticipated this convergence. ISO’s High-Level Structure means that ISO 27001 (information security), ISO 27701 (privacy), and the newer ISO 42001 (AI management systems) share compatible architectures, enabling organizations to build unified management systems rather than parallel bureaucracies.
Practical integration typically starts with cross-functional steering committees that include privacy, cybersecurity, legal, and AI representatives. From there, organizations develop shared risk taxonomies and (where budgets allow) unified GRC platforms that eliminate redundant assessments. Role boundaries are already blurring: according to an IAPP and EY survey, 69 percent of chief privacy officers have acquired AI governance responsibilities.
Organizations that don’t evolve their practices along these lines risk regulatory exposure. For those that do, lower regulatory friction, reduced audit burden, and stronger investor confidence await.