The threat from cybercrime remains consistently high, but responses are sluggish
As in the previous year, one in twenty-five companies (4%) reported having been affected by a cyberattack in the past three years. 5% had been blackmailed, and 4% lost money due to fraudulent emails. In total, 88% of SMEs regard cybercrime as a serious problem. Nevertheless, only 24% of executives perceive incentives or expectations from their professional environment to invest more in IT security – many decision-makers simply do not realize the urgency.
Weak resilience, yet still low priority
Confidence in their own defenses has fallen significantly: only 42% of companies consider their protection sufficient in the event of an attack – a clear drop from 55% in the previous year. Overall IT-security confidence has also declined slightly: 52% of companies feel secure (2024: 57%), while 9% feel insecure. Despite this, cybersecurity continues to lose importance: in 28% of SMEs, the topic is no longer a business priority – a sharp increase compared with 18% in the previous year.
«Either companies underestimate the consequences of cyberattacks, or they lack the know-how or resources to prioritize this issue. Politics, business, and academia must work together to raise awareness,» says Franziska Barmettler, CEO of digitalswitzerland.
Organizational measures lag behind
While more than two-thirds of companies implement technical measures such as firewalls or software updates, organizational measures remain underdeveloped: only 30% of SMEs have an IT security concept, conduct training, or maintain an emergency plan. Regular IT security audits are carried out by only one in five companies.
IT service providers see room for improvement – but willingness to invest continues to decline
IT service providers also assess the situation as critical: only 39% consider their SME customers secure, while 14% regard their protection as insufficient. Accordingly, 84% expect rising demand for security solutions, while SMEs’ investment readiness continues to decline. Only 40% now plan to increase their cybersecurity measures over the next one to three years (2024: 48%).
Resilience as the key to digital security
«The results of the study make it clear: resilience is the key to protecting Swiss SMEs against the growing threats of cybercrime. It’s not enough just to feel secure – companies must be actively prepared. As an insurance partner, we see it as our responsibility not only to provide financial protection but also to strengthen our customers’ digital resilience – ideally through a combination of technology, organization and awareness,» says Simon Seebeck, Head of the Cyber Risk Competence Center at the Mobiliar.
Plea from the study partners
«The study partners call on SMEs to treat cybersecurity as a strategic issue. Greater awareness, targeted investment, and collaboration with certified IT service providers are required. The Alliance Digital Security Switzerland ADSS particularly recommends working with CyberSeal-certified partners,» says Andreas W. Kaelin, Co-Founder and Managing Director of ADSS.
Marc K. Peter from the FHNW School of Business and HES-SO Valais-Wallis School of Management recommends treating cybersecurity as a success factor in digital transformation: «Comparable to other digital topics such as AI and new work, cybersecurity belongs on the agenda of every board member and business executive.»
More information at cyberstudie.ch
In today’s rapidly evolving digital landscape, disinformation presents one of the greatest threats to our democratic societies, according to the Global Risks Report 2024 published by the World Economic Forum (WEF). Disinformation, often spread via social media and digital platforms, has the potential to distort public discourse and erode institutional credibility.
This issue has recently become even more pressing with Meta announcing the end of its third-party fact-checking program for both Facebook and Instagram. With this decision, the company is following in the footsteps of Elon Musk’s X, which ditched its fact-checking program in favour of a feature called “Community Notes”.
In our recently published research paper “Countering Disinformation With a Focus on Fact-Checking and AI”, we explored how Switzerland can leverage its digital ecosystem to combat disinformation, particularly through the use of Artificial Intelligence (AI) and automated fact-checking.
Disinformation and Switzerland
In Switzerland, as in many other countries, there is a growing reliance on digital platforms for news consumption, especially among the younger population. Social media and online channels are becoming the primary sources of information. However, anyone can create and disseminate content online without verification or editorial oversight.
This amplifies the challenge of distinguishing legitimate news from fake news. The shift away from traditional news outlets has increased the need for effective mechanisms to manage and mitigate the risks of false information, ensuring the public has access to reliable and accurate data.
What is Disinformation?
Disinformation involves the deliberate dissemination of false information intended to deceive the public. While disinformation poses global risks, Switzerland’s decentralised governance and reliance on public participation make it particularly vulnerable.
AI offers significant potential to detect and combat disinformation, but it is not without its limitations. Public scepticism regarding AI’s transparency, combined with a lack of AI-powered fact-checking tools, highlights the need for a holistic and multi-faceted approach.
AI and Disinformation Detection
AI has the potential to revolutionise fact-checking through Natural Language Processing (NLP), Machine Learning (ML), and even blockchain technology. These tools can process vast amounts of data, identify false information, and assist human fact-checkers in verifying claims faster and more accurately.
Such AI-driven systems are already proving effective in identifying harmful content that could otherwise spread unchecked. However, human oversight remains critical. While AI systems are good at sifting through large datasets, they struggle with nuanced content, including cultural differences, where human judgement is essential.
Without a human touch, automated fact-checking could risk false positives or fail to catch subtler forms of misinformation and disinformation.
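To make the ML side of this concrete, here is a toy sketch of the kind of statistical claim classifier such systems build on: a tiny multinomial Naive Bayes model with Laplace smoothing. It is not from the study; the training examples, labels, and keywords are invented for illustration, and a real system would use far richer features and far more data.

```python
# Toy claim classifier: multinomial Naive Bayes with Laplace smoothing,
# built only on the Python standard library.
# All training examples and labels below are invented for illustration.
import math
from collections import Counter, defaultdict


def tokenize(text):
    return text.lower().split()


def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-class word counts, per-class document counts, and the vocabulary."""
    word_counts = defaultdict(Counter)
    class_counts = Counter()
    vocab = set()
    for text, label in examples:
        tokens = tokenize(text)
        word_counts[label].update(tokens)
        class_counts[label] += 1
        vocab.update(tokens)
    return word_counts, class_counts, vocab


def classify(text, word_counts, class_counts, vocab):
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        # Log prior for the class.
        score = math.log(class_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for token in tokenize(text):
            # Laplace smoothing: unseen words don't zero out the probability.
            score += math.log((word_counts[label][token] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label


examples = [
    ("official statistics show unemployment fell last quarter", "credible"),
    ("the health ministry published its vaccination schedule", "credible"),
    ("secret lab hides miracle cure from the public", "suspect"),
    ("shocking truth they do not want you to know", "suspect"),
]
model = train(examples)
label = classify("hidden miracle cure from a secret lab", *model)
```

Even this toy illustrates the limitation the paragraph above describes: the model only counts word overlap, so nuanced or culturally specific claims with unfamiliar wording would need a human reviewer.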
The Role of Explainable AI (XAI)
One of the key recommendations outlined in our research paper is the adoption of Explainable AI (XAI)—a framework that ensures AI systems provide transparent, understandable explanations for their conclusions. This transparency is vital in building trust among users and the broader public.
As seen in the German DeFakts project, combining AI detection with human expertise helps explain why certain content is flagged as disinformation, increasing accuracy and fostering greater public trust.
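As a minimal sketch of what "explainable" flagging means in practice (not the DeFakts system; the keyword weights and threshold below are invented), a detector can return the evidence behind a verdict rather than a bare label:

```python
# Sketch of explainable flagging: a hand-weighted linear scorer that returns
# a verdict together with the evidence that produced it, so a human reviewer
# can see why content was flagged. Weights and threshold are invented.

SUSPICION_WEIGHTS = {
    "miracle": 2.0, "secret": 1.5, "shocking": 1.5,
    "hoax": 2.0, "cover-up": 2.5,
}
THRESHOLD = 2.0


def flag_with_explanation(text):
    tokens = text.lower().split()
    # Record each matched keyword's contribution to the score.
    contributions = {t: SUSPICION_WEIGHTS[t] for t in tokens if t in SUSPICION_WEIGHTS}
    score = sum(contributions.values())
    verdict = "flagged" if score >= THRESHOLD else "not flagged"
    # Sort the evidence so the strongest signals come first.
    evidence = sorted(contributions.items(), key=lambda kv: -kv[1])
    return verdict, score, evidence


verdict, score, evidence = flag_with_explanation(
    "Shocking secret miracle cure revealed")
```

The returned `evidence` list is the "explanation": instead of an opaque verdict, a reviewer (or an end user) sees which terms drove the decision and by how much, which is the kind of transparency XAI calls for.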
Collaboration and Public Awareness
The fight against disinformation requires more than technological solutions; broad collaboration across sectors is important. The paper emphasises the importance of involving academia, media organisations, private companies, and government institutions to create comprehensive strategies.
Public awareness and education also play a central role in Switzerland’s efforts to build resilience against disinformation. By improving digital and media literacy, Switzerland can empower its population to critically evaluate the information they consume.
The paper recommends integrating digital literacy programs into school curricula and organising public workshops to raise awareness about disinformation.
Prebunking and the Importance of Education
A proactive approach known as prebunking—educating the public about common disinformation tactics before they encounter false information—can significantly reduce the impact of misleading content.
Programs like SRG SSR’s “Newstest” and campaigns such as #UseTheNews are prime examples of initiatives that aim to enhance media literacy and help the public distinguish fact from fiction.
A Holistic Response: Education, Technology, and Regulation
To effectively counter disinformation, Switzerland must adopt a unified approach that combines AI-driven fact-checking, human oversight, public education, and cross-sector collaboration. As AI technologies continue to evolve, it is crucial to establish ethical guidelines and regulatory frameworks that safeguard transparency and prevent misuse.
One of the major concerns surrounding AI is its potential misuse, including biases in algorithmic decision-making and the risk of over-reliance on opaque systems.
To address these concerns, regulations should be put in place to ensure algorithmic transparency, data privacy, and accountability. For example, AI systems should be required to provide clear and understandable explanations for their decisions—such as those found in Explainable AI—so that users and regulators alike can scrutinise how conclusions were reached.
Building Resilience Through Digital Regulation
This is particularly important in disinformation detection, where false positives—wrongly labelling truthful content as disinformation—could have serious consequences for freedom of speech and the credibility of media outlets. Regulatory frameworks should also create safeguards for AI systems to prevent these risks.
Countries are moving toward more accountable digital spaces. In Switzerland, the Federal Office of Communications (OFCOM) is preparing a consultation on draft legislation for early 2025: the new Federal Law on Communication Platforms and Search Engines (KomPG/LPCom).
The law seeks to give the Swiss population more rights vis-à-vis the significant communication platforms and enable them to demand transparency. Similarly, the EU Digital Services Act provides a model for holding platforms responsible for preventing the spread of disinformation through transparent and robust content moderation.
To build resilience and safeguard democratic values in the face of evolving disinformation threats, Switzerland can enhance digital literacy among its population and foster multi-stakeholder collaboration among business, academia, civil society, and government.
AI, when combined with human oversight and transparent regulatory measures, will be an essential part of this unified approach.
Disinformation poses a growing threat to public trust and democracy in today’s digital age. Our latest study, “Countering Disinformation With a Focus on Fact-Checking and AI” explores innovative strategies to tackle this challenge, highlighting Switzerland’s potential to lead with its digital expertise and commitment to collaboration.
This comprehensive research examines different uses of artificial intelligence in detecting and debunking false information, while emphasising the vital role of human oversight, transparent AI, and public education. It also features actionable recommendations for governments, academia, and private sectors to unite efforts against disinformation.
Whether you’re interested in the ethical implications, regulatory frameworks, or the latest AI-powered solutions, the study “Countering Disinformation With a Focus on Fact-Checking and AI” provides a roadmap to strengthen digital resilience and restore trust in public discourse. Check out the full report to uncover how Switzerland—and the world—can effectively combat the spread of falsehoods.
Oliver Wyman study “Switzerland’s Digital DNA”
Confidence in the Swiss population’s own digital competence is growing only slowly. More than a fifth of all people still feel unable to keep up with the pace of technological progress. The benefits of digitalisation are nevertheless considered high in all areas of life. The willingness to disclose personal data for digital services is growing, despite increased awareness of cyber risks. At the same time, satisfaction with digital services varies. These are findings from the sixth edition of the study “Switzerland’s Digital DNA”, published jointly by the international strategy consultancy Oliver Wyman and digitalswitzerland as part of Swiss Digital Days 2022.
Selected highlights:
- 75 percent of the population consider the internet and technology to be an opportunity for Switzerland.
- Asked about their personal digital skills, respondents feel they lack knowledge in areas such as programming (44 percent) and the use of new technologies such as smartphones or VR glasses (18 percent).
- When it comes to sharing data, banks (64) and universities (61) are trusted more than government and public offices (53).
- 30 percent of respondents said they had already been the victim of a cybercrime or corresponding attack.
An infographic with further key findings is available in German.
The full press release is available in German, French and Italian.