
How the World’s Digital Giants Are Rooting Out EU Election Disinformation

As the European Union prepares for upcoming elections, it faces the daunting challenge of safeguarding its democratic processes against the pervasive threats of disinformation and digital manipulation. Its robust response, backed by the Digital Services Act (DSA), has been to release strict new guidelines for very large online platforms (VLOPs) designed to prevent falsehoods from going viral.

The DSA itself not only underscores the EU’s commitment to protecting electoral integrity but also sets a precedent for how democracies can respond to the digital challenges of our time, ensuring that the pillars of democratic engagement remain strong and uncorrupted.

The Problem with Disinformation

Disinformation in the digital era presents a profound threat not just to the sanctity of elections but to the fabric of society and business integrity. It erodes public trust in institutions, fuels political instability, and can lead to significant financial and reputational damage for companies caught in the crossfire. The business impact is particularly severe; dis- and misinformation can skew consumer behaviour, disrupt markets, and lead to costly regulatory scrutiny.

Recent incidents illustrate the far-reaching consequences of unchecked digital falsehoods. During the 2016 US Presidential election, for instance, widespread disinformation campaigns were deployed, notably involving fabricated news stories shared across social media platforms to influence voter perceptions. The infamous Cambridge Analytica scandal highlighted how personal data could be exploited to influence voter decisions, demonstrating the potential of digital platforms to sway political outcomes. Then, during the COVID-19 pandemic, fake news about the virus and vaccines led to public health crises and disrupted response efforts in multiple countries.

In this volatile landscape, businesses and governments alike are grappling with the challenge of distinguishing truth from falsehood, while striving to maintain a level of transparency and reliability in their communications. If left unchecked, online falsehoods not only threaten democratic processes but could also compromise the competitive landscape by undermining fair business practices and consumer trust.

The EU’s Regulatory Approach

The Digital Services Act (DSA) represents the EU’s strategic framework to regulate the digital space, with a strong emphasis on combating mis- and disinformation and securing democratic processes. This legislative move targets VLOPs, which are crucial conduits for information during election times.

The DSA defines VLOPs as online platforms that reach at least 45 million monthly active users within the bloc, roughly 10% of the EU’s population. This categorisation targets the platforms with the broadest reach and influence, which warrant stricter regulatory oversight to mitigate the risk of misinformation spreading at scale. By focusing on these platforms, the DSA ensures that the primary sources of digital content and interaction are held to higher standards of accountability and transparency.
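
To make the threshold concrete, here is a minimal sketch of the VLOP designation test in Python. It is purely illustrative: the population figure is approximate and the function is our own shorthand, not anything defined in the DSA itself.

```python
# Illustrative sketch of the DSA's VLOP designation threshold, not an official calculation.
# Figures are approximate.

EU_POPULATION = 450_000_000      # approximate EU population
VLOP_THRESHOLD = 45_000_000      # DSA threshold: 45 million monthly active users in the EU

def is_vlop(monthly_active_eu_users: int) -> bool:
    """Return True if a platform meets the DSA's user-count threshold for VLOP status."""
    return monthly_active_eu_users >= VLOP_THRESHOLD

print(is_vlop(50_000_000))                       # True: a 50M-user platform would be designated a VLOP
print(round(VLOP_THRESHOLD / EU_POPULATION, 2))  # 0.1 -> roughly 10% of the EU population
```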

The DSA mandates that these platforms implement more rigorous measures to manage and moderate content, enhance transparency around algorithmic processes, and ensure that political advertising is clearly labelled to prevent hidden influences.

New regulations under the DSA specifically geared towards securing elections require these platforms to rapidly address mis- and disinformation threats and to be more accountable in their content moderation practices. Platforms are also expected to provide users with greater control over the content they see, potentially curbing the spread of fake news.

VLOPs will need to ensure that political ads and AI deepfakes on their platforms are clearly labelled, and to set up dedicated teams to monitor potential fake news narratives in some EU countries. The European Commission has also reportedly said that algorithms must promote diverse content and that emergency processes should be in place for when deepfakes of political leaders appear. Non-compliance could lead to fines of up to 6% of global annual turnover.
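
To put that penalty ceiling in perspective, a quick back-of-the-envelope calculation; the turnover figure used here is hypothetical, not drawn from any real platform.

```python
# Illustrative only: the DSA caps fines for non-compliance at 6% of a provider's
# total worldwide annual turnover. The turnover figure below is hypothetical.

FINE_CAP = 0.06  # 6% of global annual turnover

def max_dsa_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a DSA fine for a given worldwide annual turnover (EUR)."""
    return FINE_CAP * global_annual_turnover_eur

# A platform turning over EUR 100 billion a year could face a fine of up to EUR 6 billion.
print(f"EUR {max_dsa_fine(100e9):,.0f}")  # EUR 6,000,000,000
```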

Thomas Richards, principal security consultant at Synopsys Software Integrity Group, believes that the DSA could significantly reduce misinformation if effectively implemented by VLOPs.

“If the VLOPs can implement the DSA guidance well, I think the prevalence of misinformation will decrease. It will create a cat-and-mouse game…but overall, I think people will see less of it,” he tells ISMS.online.

Adam Marrè, CISO at Arctic Wolf, adds that the DSA’s focus on transparency and accountability could fundamentally alter how misinformation is managed.

“The guidelines set forth robust mechanisms for accountability … If executed effectively, this strong enforcement mechanism is crucial for ensuring that platforms adhere to their commitments and responsibilities,” he tells ISMS.online.

Challenges and Enforcement

Enforcing the DSA poses several practical challenges, notably in the realms of detection, accuracy and scale. The vast amount of data managed by VLOPs necessitates the use of automated systems, primarily AI, to monitor and control misinformation. However, as Richards points out, “identifying and removing content, with accuracy, will be the largest hurdle to overcome.” The reliance on AI for detection can lead to inaccuracies and inadvertent censorship, potentially allowing disinformation to slip through or, conversely, wrongfully penalising legitimate content.
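
As a rough illustration of the trade-off Richards describes, automated moderation pipelines typically route content on a classifier's confidence score. The sketch below is a hypothetical triage policy, not any platform's actual system; the thresholds and labels are assumptions chosen to show how a cut-off set too low suppresses legitimate content while one set too high lets disinformation through.

```python
# Hypothetical triage policy for an automated moderation pipeline.
# The classifier score, thresholds and action names are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str    # "remove", "human_review" or "allow"
    score: float   # classifier confidence that the content is disinformation

REMOVE_THRESHOLD = 0.95   # very high confidence -> automatic removal
REVIEW_THRESHOLD = 0.60   # moderate confidence -> escalate to human moderators

def triage(disinfo_score: float) -> ModerationDecision:
    """Route a piece of content based on a (hypothetical) disinformation score."""
    if disinfo_score >= REMOVE_THRESHOLD:
        return ModerationDecision("remove", disinfo_score)
    if disinfo_score >= REVIEW_THRESHOLD:
        return ModerationDecision("human_review", disinfo_score)
    return ModerationDecision("allow", disinfo_score)

print(triage(0.97))  # ModerationDecision(action='remove', score=0.97)
print(triage(0.72))  # ModerationDecision(action='human_review', score=0.72)
print(triage(0.10))  # ModerationDecision(action='allow', score=0.1)
```

Lowering REMOVE_THRESHOLD catches more disinformation automatically but at the cost of more wrongful takedowns, which is precisely the accuracy problem human review queues are meant to absorb.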

Moreover, the dynamic nature of disinformation tactics means that as soon as new detection methods are developed, malicious actors devise new strategies to bypass them. Richards argues that those spreading disinformation are constantly evolving techniques to “bypass or trick the detection mechanisms into thinking the content should not be removed.”

Hannah Baumgaertner, head of intelligence research at Silobreaker, notes that elections already held this year, such as those in Taiwan, have been marred by disinformation campaigns.

“Moreover, the ongoing Russia-linked Doppelganger campaign is likely to pivot its focus towards the UK,” she tells ISMS.online.

During the Brexit referendum, various platforms saw a surge in misleading articles and manipulated statistics designed to influence public opinion. Similarly, automated bot accounts have been utilised to amplify divisive content, demonstrating how disinformation strategies are continually refined to exploit new technological advances. These examples highlight the relentless evolution of fake news tactics, making enforcement a complex and dynamic challenge.

Broader Business Implications

The rise of deepfakes and digital deception introduces new vulnerabilities that businesses must address in their risk management strategies. As these technologies become more sophisticated, the potential for misuse in manipulating public perception and damaging reputational integrity increases.

Drawing from the broader principles outlined in the DSA, businesses can enhance their resilience by implementing rigorous verification processes and transparency measures. By integrating advanced detection technologies and establishing clear protocols for managing digital threats, companies can better safeguard against the destabilising effects of disinformation. Further, by promoting transparency and accountability, businesses can not only comply with emerging regulations but also build trust with their stakeholders, thereby reinforcing their market position.

Setting a Global Benchmark

The DSA marks a robust commitment to combating mis- and disinformation, setting a global benchmark for how democracies can protect electoral integrity in the digital age. By enforcing stringent guidelines on major online platforms, the EU not only aims to diminish the spread of false information but also to fortify the foundational principles of democracy.

These efforts are critical for sustaining the health of the digital economy, where trust and transparency play pivotal roles. As we look to the future, the continuous adaptation and enforcement of such regulations will be key to preserving both democratic processes and the integrity of the digital marketplace.

In this context, businesses face their own battles with deepfakes and digital deception. These technologies pose serious threats to corporate integrity and security, necessitating robust information security management systems (ISMS) to enhance awareness, establish detection protocols, and develop effective incident response strategies. As deepfakes become more sophisticated, staying ahead of these threats is essential for maintaining corporate operations and public trust.
