(Adversarial) Artificial Intelligence

AI applications are penetrating ever more areas of the economy and society, where their strengths, such as predicting outcomes from data sets, are being put to use. Numerous applications, whether in medicine, transport or agriculture, are based on AI. At the same time, AI brings new risks, in particular through Adversarial Artificial Intelligence (AAI), i.e. hostile AI. Hostile AI exploits weaknesses in other AI systems to manipulate media such as voice and video or to compromise existing AI models. To meet these challenges, AI risks must be included in the risk assessment of critical infrastructures, and suitable protection mechanisms must be developed. It is also essential to strengthen the resilience of AI models against manipulation.

The challenges 

  • Risk of manipulation through AAI: Cyber criminals can manipulate AI models in a targeted manner, for example through hidden backdoors or adversarial input data that provokes incorrect decisions. Examples include deepfakes, automated cyberattacks and the misinterpretation of traffic data by autonomous vehicles. 

  • Lack of transparency and traceability: With the increasing complexity of AI models, especially in deep learning, the human controllability and traceability of decisions decrease. 

  • Dependencies and critical infrastructures: AI is playing an increasingly important role in critical infrastructures. Malfunctions or manipulation can have far-reaching consequences. 

  • Skills shortage and knowledge transfer: Building up AI expertise in Switzerland remains key to ensuring technological competence and cyber security. 
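The adversarial-input risk described above can be illustrated with a minimal sketch. Everything below is invented for illustration (the weights, input and perturbation budget do not come from this report, and real attacks target far larger models): the fast gradient sign method (FGSM) perturbs an input within a small budget so that a classifier's decision flips even though the input barely changes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fixed linear classifier (illustrative weights)
w = np.array([0.5, -1.2, 0.8, 0.3])
b = 0.0

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    # for a logistic model: dL/dx = (sigmoid(w.x + b) - y) * w
    grad = (predict(x) - y) * w
    # Move each feature by at most eps in the loss-increasing direction
    return x + eps * np.sign(grad)

eps = 0.25
x = np.array([0.2, -0.2, 0.3, 0.1])   # classified as class 1
x_adv = fgsm(x, y=1.0, eps=eps)       # small L-infinity perturbation

# The clean input scores above 0.5, the perturbed one below it,
# even though no feature changed by more than eps
print(predict(x), predict(x_adv))
```

The same principle scales to deep networks, where such perturbations can be imperceptible to humans, which is what makes the "misinterpretation of traffic data" scenario above plausible.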

Recommendations for politics, business and society 

  • Security integration in critical infrastructures: AI risks must become part of the risk assessment of critical infrastructures.  

  • Strengthen research and education: Promote training programmes and talent pipelines in AI and cybersecurity. Incentives from SERI are needed to combat the shortage of skilled labour. 

  • Increase resilience to AAI: Develop new protective mechanisms to defend against AAI and establish the robustness of AI models as a quality criterion. 

  • Ensure ethics and transparency: Create clear rules for the use of AI to address ethical issues and strengthen society's trust. 
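One established way to make robustness a measurable quality criterion, as recommended above, is adversarial training: the model is trained on adversarially perturbed inputs, and accuracy under attack is reported alongside clean accuracy. The following is a minimal, self-contained sketch on invented toy data (none of the data, parameters or thresholds come from this report):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy 2-D data: the true boundary is x1 + x2 = 0, and we keep only
# points with a margin of at least 1 around it (purely illustrative)
X = rng.uniform(-3, 3, size=(400, 2))
X = X[np.abs(X.sum(axis=1)) > 1.0]
y = (X.sum(axis=1) > 0).astype(float)

eps, lr = 0.2, 0.5          # attack budget and learning rate
w, b = np.zeros(2), 0.0

for _ in range(200):
    # FGSM: perturb each training input to increase the loss,
    # within an L-infinity budget of eps per feature
    grad_x = (sigmoid(X @ w + b) - y)[:, None] * w
    X_adv = X + eps * np.sign(grad_x)
    # Standard logistic-regression gradient step on adversarial inputs
    err = sigmoid(X_adv @ w + b) - y
    w -= lr * (X_adv.T @ err) / len(y)
    b -= lr * err.mean()

# Robust accuracy: fraction of points still classified correctly
# under a fresh FGSM attack against the trained model
grad_x = (sigmoid(X @ w + b) - y)[:, None] * w
X_adv = X + eps * np.sign(grad_x)
robust_acc = ((sigmoid(X_adv @ w + b) > 0.5) == (y == 1)).mean()
print(f"robust accuracy under attack: {robust_acc:.2f}")
```

Reporting a figure like this robust accuracy, rather than clean accuracy alone, is one concrete way the robustness of an AI model could be treated as a quality criterion in procurement or certification.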

Authors and topic responsibility:

Umberto Annino, Microsoft | Stefan Frei, ETH Zurich | Martin Leuthold, Switch

Review Board:

Endre Bangerter, BFH | Alain Beuchat, Banque Lombard Odier & Cie SA | Matthias Bossardt, KPMG | Dani Caduff, AWS | Adolf Doerig, Doerig & Partner | Roger Halbheer, Microsoft | Katja Dörlemann, Switch | Pascal Lamia, BACS | Hannes Lubich, Board of Directors and Consultant | Luka Malisa, SIX Digital Exchange | Adrian Perrig, ETH Zurich | Raphael Reischuk, Zühlke Engineering AG | Ruedi Rytz, BACS | Riccardo Sibilia, DDPS | Bernhard Tellenbach, armasuisse | Daniel Walther, Swatch Group Services | Andreas Wespi, IBM Research