(Adversarial) artificial intelligence

Current situation

No one could dispute the growing importance of artificial intelligence (AI) and automation in the digital society. They improve both the efficiency and quality of decisions, for example in medicine. However, AI also brings with it security threats from automated cyber attacks or the use of adversarial artificial intelligence (AAI) to circumvent security mechanisms and compromise systems. 

AI and the related subfield of machine learning (ML) cover a large research area within computer science that is closely related to operations research. At its core, AI is a method by which machines imitate intelligent human behaviour by recognising and organising information in their input data. This intelligence can be based on explicitly programmed processes or generated by ML. ML comprises mathematical models and algorithms for predicting events based on patterns recognised in existing data sets. The possibilities that AI and the automation of decisions or actions present are becoming ever more important to the digital society and industry. We increasingly rely on AI’s capacity for independent assessment and action in critical decision-making and in automating decision-making and other processes. This capability allows machines to learn from a large volume of past experience, draw conclusions, uncover relationships and classify complex data.
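As a toy illustration of this prediction-from-patterns idea, the sketch below fits a classifier to a handful of hypothetical maintenance records; it assumes scikit-learn, and the data set and feature meanings are invented for illustration only.

```python
# Toy ML sketch, assuming scikit-learn; the data set and its meaning
# (machine usage vs. failure within a week) are invented placeholders.
from sklearn.tree import DecisionTreeClassifier

# Features: [hours of use, error count]; label: failed within a week?
X = [[10, 0], [200, 5], [15, 1], [250, 7], [30, 0], [300, 9]]
y = [0, 1, 0, 1, 0, 1]

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[220, 6]]))  # -> [1]: pattern suggests likely failure
```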

Apart from holding out the possibility of automating processes and delivering associated efficiency gains, AI also offers a way of improving the quality of decisions. For example, medical decisions may be of better quality and result in increased life expectancy compared with an entirely manual approach. 

In cybersecurity, AI makes it easier to identify attacks thanks to its ability to process large volumes of data and check them for unusual patterns. However, the same technology also enables attackers to recognise patterns and weaknesses in complex systems and to exploit them through automation.
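As a simple illustration of this pattern-based detection, the sketch below trains an anomaly detector on normal traffic and flags deviations; it assumes scikit-learn, and the synthetic features (bytes sent, bytes received, connection duration) are hypothetical placeholders for real telemetry.

```python
# Minimal anomaly-detection sketch, assuming scikit-learn; the traffic
# features and values are synthetic placeholders, not real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: bytes sent, bytes received, connection duration (illustrative).
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500.0, 800.0, 1.0],
                            scale=[50.0, 80.0, 0.2], size=(1000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

suspicious = np.array([[50_000.0, 100.0, 30.0]])  # unusually large upload
print(detector.predict(suspicious))  # -1 flags the connection as anomalous
```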

The increasing permeation of digitalisation into all areas of life and the wider availability of data are driving the growing significance of AI. For this reason, the technology is extremely important for Switzerland. Because the discipline is still in the very early stages of its expected development – at present we are mainly using the machine learning subfield, for example for automated recognition of image content – its impact on various sectors of technology could yet increase appreciably as its capabilities improve. In addition to its huge potential, AI also presents complex ethical issues that it will be imperative to address and which will require accompanying measures. 

The tremendous spread of AI is also giving rise to the phenomenon of adversarial artificial intelligence, where attackers exploit AI methods and tools to:

  1. Compromise operational AI models 

  2. Scale and automate elements of attacks – such as deepfakes and advanced persistent threats (APTs) – that were previously impossible (deepfakes) or heavily reliant on manual processes.

In the first scenario above, AAI causes the machine learning model under attack to misinterpret the information it receives and to behave in a way that benefits the attackers. To compromise the model’s behaviour, the attackers create “adversarial data” – inputs that often closely resemble the expected data but act as a kind of backdoor, impairing the model’s performance or altering its output in a way intended by the attackers. AI models then classify this adversarial data contrary to expectations and supply incorrect answers with high confidence. A wide range of AI models, including state-of-the-art neural networks, are susceptible to such adversarial data.
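To make this concrete, below is a minimal sketch of the fast gradient sign method (FGSM) described in the first reference, assuming a differentiable PyTorch image classifier; the model, image and label names are illustrative placeholders, not part of any specific system.

```python
# Minimal FGSM sketch (cf. “Explaining and Harnessing Adversarial
# Examples” in the references). Assumes a differentiable PyTorch
# classifier; model, image and label are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial copy of `image` that looks almost identical
    to a human observer but is crafted to be misclassified by `model`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that maximises the loss, then clip to the
    # valid pixel range so the result is still a plausible image.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

A perturbation of this kind is typically imperceptible to humans, which is precisely what makes adversarial data hard to filter out by inspection.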

Challenges

Cheaper computing power and the growing abundance of collected data are giving rise to ever more complex AI models whose behaviour is in many respects beyond human understanding and – by the time we get to deep learning, if not sooner – increasingly unverifiable and outside human control. A recent research paper¹ shows, for instance, that it is possible to incorporate backdoors into trained models in such a way that they cannot be detected from the model’s standard behaviour, yet exhibit separately specified behaviour when deliberately activated. This would allow the originators to use specifically manipulated inputs to obtain whatever decision they want from a machine learning model, while any number of tests would still fail to uncover the backdoor.
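As a simplified illustration of the backdoor idea, the sketch below uses classic data poisoning (in the style of the “BadNets” literature) rather than the cryptographically undetectable construction from the cited paper; the names, the trigger pattern and the poisoning rate are all hypothetical.

```python
# Hypothetical training-time backdoor via data poisoning; a sketch of
# the general idea only, not the undetectable scheme cited above.
import numpy as np

def poison(images, labels, target_class, rate=0.05, seed=0):
    """Stamp a small white square onto a fraction of the training images
    (assumed shape N x H x W, pixel values in [0, 1]) and relabel them,
    so the trained model associates the square with `target_class`
    while behaving normally on clean inputs."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i, -4:, -4:] = 1.0   # 4x4 trigger patch in one corner
        labels[i] = target_class    # attacker-chosen label
    return images, labels
```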

There have already been proven cases of manipulative attacks on key technologies such as image processing, optical character recognition (OCR), natural language processing (NLP), speech and video (deepfakes), and malware detection.

Examples of other threats from AAI 

  1. Using machine learning algorithms to insert deepfakes and face swaps (transferring one person’s facial expression or face to a different person by computer manipulation) into videos. Deepfake services are already offered online for just a few dollars. As Europol² recently reported, deepfakes are gaining popularity among criminals, and the threat they pose as a driver of disinformation campaigns is set to increase. Many of these services now operate in real time, shifting the threat from static images and recorded video to live video conferences.

  2. Manipulating image recognition and classification in autonomous vehicles, e.g. causing road signs or obstacles to be misinterpreted.

  3. Manipulating text recognition in automated document or payment processing services.

  4. Circumventing fraud detection and verification mechanisms, and industrialising fraud using AI-automated APTs.

  5. Maliciously interfering in AI models to promote or discredit certain groups.

Action areas for government, business and civil society: Current gaps

At present, a large number of strategic issues associated with critical dependencies have yet to be resolved: 

- To what extent should AI applications in critical infrastructure or in infrastructure with broad relevance to Swiss society be checked for legally compliant implementation or examined for manipulation (especially as regards ethics, data protection, transparency and AAI, quality of training data sets, etc.)? 
- What proactive and reactive strategies does Switzerland have for improving its resilience to AAI?
- What trends are emerging as regards the use of AAI in cyber crime and how should Switzerland anticipate and respond to a development of this kind in cybersecurity? 

Recommendations: How government, business and civil society can close the gaps 

AAI targets something that has never needed protection before – AI models themselves. The SATW Cybersecurity Advisory Board recommends implementation of the following measures: 

  1. The risks associated with AI models and AI-controlled automation and decision-making must be part of critical infrastructure risk assessment. 

  2. To create and nurture public trust in and acceptance of these technologies, all decisions should give appropriate consideration to ethical issues linked to AI use. 

  3. However, regulation must be restrained, technology-neutral and trust-inspiring in order to avoid impeding innovation. 

  4. Educational organisations at all levels must ensure that they have the courses and talent pipeline needed to establish and develop the necessary expertise (AI, AAI, use of AI in cybersecurity) in Switzerland. SERI must create and manage appropriate incentives.  

  5. The robustness of AI models (their resistance to manipulation) must be factored in when choosing such models; see the sketch after this list.

  6. An understanding must be developed of where AAI-driven threats could lead to new system-relevant threats and where new protective mechanisms have to be developed and implemented in response. 
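To make recommendation 5 concrete, below is a hedged sketch of adversarial training, one widely used hardening technique; it reuses the hypothetical fgsm_attack function sketched earlier, and model, optimizer and the data batches are again placeholders.

```python
# Sketch of adversarial training, one common robustness measure; reuses
# the hypothetical fgsm_attack from the earlier FGSM sketch.
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on a mix of clean and FGSM-perturbed inputs,
    so the model learns to resist small adversarial perturbations."""
    adv_images = fgsm_attack(model, images, labels, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Robustness gained this way typically trades off some accuracy on clean data, which is one reason it belongs in the model selection decision rather than being bolted on afterwards.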

Harnessing the benefits of AI, for example using data sets to predict results, is resulting in the technology permeating ever more areas of business and civil society. A large number of applications in areas such as medicine, transport or agriculture are based on AI, and it is not uncommon for critical infrastructures and safety-critical applications to be affected. For this reason, and in view of the tremendous pace at which use of ever more powerful algorithms and solutions is spreading, the issue of AI and the opportunities it presents – as well as the resulting threat posed by AAI – are hugely relevant to Swiss society and Switzerland’s national economy.  

References

Explaining and Harnessing Adversarial Examples: https://arxiv.org/pdf/1412.6572.pdf
AI Is The New Attack Surface: https://www.accenture.com/_acnmedia/Accenture/Redesign-Assets/DotCom/Documents/Global/1/Accenture-Trustworthy-AI-POV-Updated.pdf
What is adversarial artificial intelligence and why does it matter?: https://www.weforum.org/agenda/2018/11/what-is-adversarial-artificial-intelligence-is-and-why-does-it-matter/
Deepfakes web β: https://deepfakesweb.com

Authors and topic responsibility:

Umberto Annino, Microsoft | Stefan Frei, ETH Zurich | Martin Leuthold, Switch

Review Board:

Endre Bangerter, BFH | Alain Beuchat, Banque Lombard Odier & Cie SA | Matthias Bossardt, KPMG | Dani Caduff, AWS | Adolf Doerig, Doerig & Partner | Roger Halbheer, Microsoft | Katja Dörlemann, Switch | Pascal Lamia, BACS | Hannes Lubich, Board of Directors and Consultant | Luka Malisa, SIX Digital Exchange | Adrian Perrig, ETH Zurich | Raphael Reischuk, Zühlke Engineering AG | Ruedi Rytz, BACS | Riccardo Sibilia, DDPS | Bernhard Tellenbach, armasuisse | Daniel Walther, Swatch Group Services | Andreas Wespi, IBM Research
