Risk of manipulation through adversarial AI (AAI): Cyber criminals can deliberately manipulate AI models, for example through hidden backdoors or adversarial inputs that provoke incorrect decisions. Examples include deepfakes, automated cyberattacks and the misinterpretation of traffic data by autonomous vehicles.
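The adversarial-input risk above can be illustrated with a minimal sketch of the fast gradient sign method (FGSM): a small, targeted perturbation of the input, aligned with the gradient of the loss, flips a classifier's decision. The linear classifier, its weights, and the perturbation budget `eps` below are hypothetical and chosen purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier (weights chosen for illustration only).
w = np.array([2.0, -1.0, 0.5])
b = 0.0

def predict(x):
    # Probability that x belongs to the positive class.
    return sigmoid(w @ x + b)

# A benign input that the model classifies as positive (score > 0.5).
x = np.array([1.0, 0.2, 0.3])
y = 1.0  # true label

# FGSM: step in the input-space direction that increases the loss.
# For logistic loss, d(loss)/dx = (sigmoid(w.x + b) - y) * w.
grad_x = (predict(x) - y) * w
eps = 0.9  # perturbation budget (illustrative)
x_adv = x + eps * np.sign(grad_x)

print(predict(x))      # confidently positive
print(predict(x_adv))  # pushed below 0.5: the decision flips
```

The point of the sketch is that the perturbation is computed from the model itself, not guessed: robustness against such gradient-guided inputs is exactly the quality criterion the recommendations below call for.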
Lack of transparency and traceability: As AI models grow more complex, especially in deep learning, human controllability and the traceability of decisions decrease.
Dependencies and critical infrastructures: AI is playing an increasingly important role in critical infrastructures. Malfunctions or manipulation can have far-reaching consequences.
Skills shortage and knowledge transfer: Building up AI expertise in Switzerland remains key to ensuring technological competence and cyber security.
Security integration in critical infrastructures: AI risks must become part of the risk assessment of critical infrastructures.
Strengthen research and education: Promote training programmes and talent pipelines in AI and cybersecurity; incentives from SERI are needed to counter the shortage of skilled workers.
Increase resilience to AAI: Develop new protective mechanisms to defend against adversarial AI and establish the robustness of AI models as a quality criterion.
Ensure ethics and transparency: Create clear rules for the use of AI to address ethical issues and strengthen society's trust.
Umberto Annino, Microsoft | Stefan Frei, ETH Zurich | Martin Leuthold, Switch
Endre Bangerter, BFH | Alain Beuchat, Banque Lombard Odier & Cie SA | Matthias Bossardt, KPMG | Dani Caduff, AWS | Adolf Doerig, Doerig & Partner | Roger Halbheer, Microsoft | Katja Dörlemann, Switch | Pascal Lamia, BACS | Hannes Lubich, Board of Directors and Consultant | Luka Malisa, SIX Digital Exchange | Adrian Perrig, ETH Zurich | Raphael Reischuk, Zühlke Engineering AG | Ruedi Rytz, BACS | Riccardo Sibilia, DDPS | Bernhard Tellenbach, armasuisse | Daniel Walther, Swatch Group Services | Andreas Wespi, IBM Research