In today’s rapidly evolving digital landscape, disinformation presents one of the greatest threats to democratic societies, according to the World Economic Forum’s Global Risks Report 2024. Disinformation, often spread via social media and digital platforms, can distort public discourse and erode institutional credibility.
This issue has recently become even more pressing with Meta announcing the end of its third-party fact-checking program for both Facebook and Instagram. With this decision, the company is following in the footsteps of Elon Musk’s X, which abandoned its fact-checking program in favour of a feature called “Community Notes”.
In our recently published research paper “Countering Disinformation With a Focus on Fact-Checking and AI”, we explored how Switzerland can leverage its digital ecosystem to combat disinformation, particularly through the use of Artificial Intelligence (AI) and automated fact-checking.
Disinformation and Switzerland
In Switzerland, as in many other countries, there is a growing reliance on digital platforms for news consumption, especially among the younger population. Social media and online channels are becoming the primary sources of information. However, anyone can create and disseminate content online without verification or editorial oversight.
This lack of oversight amplifies the challenge of distinguishing legitimate news from fake news. The shift away from traditional news outlets has increased the need for effective mechanisms to manage and mitigate the risks of false information, ensuring the public has access to reliable, accurate reporting.
What is Disinformation?
Disinformation involves the deliberate dissemination of false information intended to deceive the public. While disinformation poses global risks, Switzerland’s decentralised governance and reliance on public participation make it particularly vulnerable.
AI offers significant potential to detect and combat disinformation, but it is not without its limitations. Public scepticism regarding AI’s transparency, combined with a lack of AI-powered fact-checking tools, highlights the need for a holistic and multi-faceted approach.
AI and Disinformation Detection
AI has the potential to revolutionise fact-checking through Natural Language Processing (NLP), Machine Learning (ML), and even blockchain technology. These tools can process vast amounts of data, identify false information, and assist human fact-checkers in verifying claims faster and more accurately.
Such AI-driven systems are already proving effective in identifying harmful content that could otherwise spread unchecked. However, human oversight remains critical. While AI systems are good at sifting through large datasets, they struggle with nuanced content, including cultural differences, where human judgement is essential.
Without a human touch, automated fact-checking could risk false positives or fail to catch subtler forms of misinformation and disinformation.
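To make this division of labour concrete, a screening pipeline can score content automatically and route only borderline cases to human fact-checkers. The sketch below is a deliberately simplified, hypothetical illustration: the phrases, weights, and thresholds are invented, and a real system would use a trained NLP model rather than keyword matching.

```python
# Toy sketch of an automated screening step that defers uncertain
# cases to human fact-checkers. Phrase weights and thresholds are
# illustrative placeholders, not a real detection model.

SUSPICIOUS_TERMS = {
    "miracle cure": 0.6,
    "they don't want you to know": 0.7,
    "100% proven": 0.5,
}

def suspicion_score(text: str) -> float:
    """Sum the weights of suspicious phrases found in the text, capped at 1."""
    text = text.lower()
    score = sum(w for phrase, w in SUSPICIOUS_TERMS.items() if phrase in text)
    return min(score, 1.0)

def triage(text: str, auto_flag: float = 0.8, needs_review: float = 0.4) -> str:
    """Route content: flag it, send it to a human fact-checker, or pass it."""
    score = suspicion_score(text)
    if score >= auto_flag:
        return "flagged"
    if score >= needs_review:
        return "human review"
    return "pass"
```

The interesting design choice is the middle band: content the model is unsure about is exactly where the human judgement on nuance and cultural context described above matters most.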
The Role of Explainable AI (XAI)
One of the key recommendations outlined in our research paper is the adoption of Explainable AI (XAI)—a framework that ensures AI systems provide transparent, understandable explanations for their conclusions. This transparency is vital in building trust among users and the broader public.
As seen in the German DeFakts project, combining AI detection with human expertise helps explain why certain content is flagged as disinformation, increasing accuracy and fostering greater public trust.
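A minimal way to picture what XAI adds: instead of returning only a verdict, the system returns the evidence behind it. In this hypothetical sketch, a toy linear classifier reports which (invented) phrase features contributed to a flag, loosely analogous to the kind of explanation a human reviewer would see alongside a flagged item.

```python
# Illustrative sketch of an explanation step for a toy linear text
# classifier: alongside the verdict, report which features matched
# and how much each contributed. Feature weights are invented.

WEIGHTS = {
    "unnamed sources": 0.3,
    "shocking truth": 0.5,
    "mainstream media hides": 0.6,
}
THRESHOLD = 0.7

def classify_with_explanation(text: str):
    """Return a verdict plus the per-feature contributions behind it."""
    text = text.lower()
    contributions = {f: w for f, w in WEIGHTS.items() if f in text}
    score = sum(contributions.values())
    verdict = "flagged" if score >= THRESHOLD else "not flagged"
    return verdict, contributions  # the explanation travels with the verdict

verdict, why = classify_with_explanation(
    "The shocking truth the mainstream media hides from you"
)
# `why` lists each matched feature with its weight, i.e. which
# phrases pushed the score over the threshold.
```

Even in this toy form, the pattern shows why explanations build trust: a reviewer can check each contributing feature rather than accept an opaque score.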
Collaboration and Public Awareness
The fight against disinformation requires more than technological solutions; it demands broad collaboration across sectors. The paper emphasises the importance of involving academia, media organisations, private companies, and government institutions to create comprehensive strategies.
Public awareness and education also play a central role in Switzerland’s efforts to build resilience against disinformation. By improving digital and media literacy, Switzerland can empower its population to critically evaluate the information they consume.
The paper recommends integrating digital literacy programs into school curricula and organising public workshops to raise awareness about disinformation.
Prebunking and the Importance of Education
A proactive approach known as prebunking—educating the public about common disinformation tactics before they encounter false information—can significantly reduce the impact of misleading content.
Programs like SRG SSR’s “Newstest” and campaigns such as #UseTheNews are prime examples of initiatives that aim to enhance media literacy and help the public distinguish fact from fiction.
A Holistic Response: Education, Technology, and Regulation
To effectively counter disinformation, Switzerland must adopt a unified approach that combines AI-driven fact-checking, human oversight, public education, and cross-sector collaboration. As AI technologies continue to evolve, it is crucial to establish ethical guidelines and regulatory frameworks that safeguard transparency and prevent misuse.
One of the major concerns surrounding AI is its potential misuse, including biases in algorithmic decision-making and the risk of over-reliance on opaque systems.
To address these concerns, regulations should be put in place to ensure algorithmic transparency, data privacy, and accountability. For example, AI systems should be required to provide clear and understandable explanations for their decisions—such as those found in Explainable AI—so that users and regulators alike can scrutinise how conclusions were reached.
Building Resilience Through Digital Regulation
This is particularly important in disinformation detection, where false positives—wrongly labelling truthful content as disinformation—could have serious consequences for freedom of speech and the credibility of media outlets. Regulatory frameworks should also create safeguards for AI systems to prevent these risks.
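The false-positive risk is ultimately a threshold choice. The toy example below, using invented scores and labels, shows the trade-off that operators and regulators face: raising the flagging threshold reduces truthful content wrongly flagged, but lets more genuine disinformation through.

```python
# Sketch of the trade-off behind choosing a flagging threshold.
# Scores and labels are invented toy data; True means the item
# actually is disinformation.

ITEMS = [  # (model score, is actually disinformation)
    (0.95, True), (0.85, True), (0.75, False),
    (0.65, True), (0.55, False), (0.30, False),
]

def false_positives(threshold: float) -> int:
    """Truthful items wrongly flagged at this threshold."""
    return sum(1 for score, is_disinfo in ITEMS
               if score >= threshold and not is_disinfo)

def missed(threshold: float) -> int:
    """Genuine disinformation that slips through at this threshold."""
    return sum(1 for score, is_disinfo in ITEMS
               if score < threshold and is_disinfo)

# With these toy numbers, raising the threshold from 0.5 to 0.8
# eliminates both false positives but misses one true case.
```

No threshold removes both error types at once, which is why the safeguards discussed above pair automated detection with human review and regulatory oversight.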
Countries are moving toward more accountable digital spaces. In Switzerland, the Federal Office of Communications (OFCOM) is preparing a consultation on draft legislation for early 2025: the New Federal Law on Communication Platforms and Search Engines (KomPG/LPCom).
The law seeks to give the Swiss population more rights vis-à-vis the significant communication platforms and enable them to demand transparency. Similarly, the EU Digital Services Act provides a model for holding platforms responsible for preventing the spread of disinformation through transparent and robust content moderation.
To build resilience and safeguard democratic values in the face of evolving disinformation threats, Switzerland can enhance digital literacy among its population and foster multi-stakeholder collaboration among business, academia, civil society, and government.
AI, when combined with human oversight and transparent regulatory measures, will be an essential part of this unified approach.