How should “artificial intelligence” be regulated? - A balancing act between welfare-creating innovation and socially acceptable risk.

This blog series has so far dealt with the challenges of artificial intelligence in specific fields of application. This sixth instalment focuses on regulation, based on the EU AI Act that recently came into force.

In the previous contributions to this blog series, relevant aspects of artificial intelligence (AI) were presented in relation to specific factual situations. In the latest blog 5 (podcast), the authors reached the conclusion that state regulation of AI essentially boils down to a socio-political trade-off between the benefits (e.g. gain in knowledge; increased productivity) and the risks (e.g. high energy consumption for the required computing power; use of statistically biased training data) of AI applications.

Accordingly, the present blog 6 is dedicated to the question of AI regulation. The aim of this short contribution is not to provide an abstract presentation of legal provisions that have already been enacted or are currently being drafted, but rather to briefly examine to what extent the cases discussed in blogs 1 to 4 are covered by the Artificial Intelligence Act (AI Act), which entered into force in the EU in August 2024.

The focus on the AI Act is warranted because this Regulation is also relevant to the Swiss export industry insofar as it aims to sell products incorporating AI systems on the EU internal market. At the same time, the AI Act will have an impact on the regulation of the Swiss market, since the national legislator will be neither able nor willing to ignore the EU legal framework.

The Robodog and the AI regulation

The (harmful) behaviour of the Robodog in blog 1 is a consequence of software developed with artificial neural networks and trained with relevant data. Thus, the Robodog includes an AI system within the meaning of the new AI Act and therefore falls within its scope of application.

However, as an ‘artificial animal’ so to speak, the Robodog primarily constitutes a machine that – within the EU internal market – is subject to the EU Machinery Directive and its successor, the EU Machinery Regulation. This means that the Robodog as a marketable product must fulfil the safety requirements not only of the AI Act but also of the Machinery Regulation if it is to be sold on the EU internal market. This dual regulation of the same product carries the risk of a twofold administrative burden for the placing on the market of products with AI systems. Although the EU legislator has recognized this problem, it is addressed only in general terms in the AI Act. We therefore need to wait for further implementing provisions and guidelines from the EU Commission before the real impact of the AI Act on the Robodog as a marketable product can be reliably assessed. As of now, it is unknown when these additional rules and guidelines will be issued.

The automated car and the AI regulation

Automated motor vehicles also rely on software developed with neural networks. Such software does not improve autonomously in a permanent real-time learning process, i.e. while the vehicle is driving; rather, it is updated periodically on the vehicle by the manufacturer, similarly to the standard practice for smartphones. As a safety-relevant component of the vehicle, such software constitutes an AI system that, within the EU single market, falls under the EU AI Act.

The placing on the market of automated motor vehicles in the EU (namely with type approval) is subject to a complex network of specific international provisions (namely UNECE regulations) and extensive EU regulations, which automated vehicles must comply with in addition to the AI Act. The specific EU regulations for motor vehicles are aligned with the AI Act insofar as the EU Commission must take into account the AI-specific requirements for high-risk AI systems when adopting the relevant implementing and delegated acts. Consequently, one must wait for those rules to be enacted by the EU Commission before the actual impact of the AI Act on the placing on the market of automated motor vehicles in the EU internal market can be reliably assessed.

The AI dog picture and the AI regulation

Blog 3 concerned a digital image of a painting dog that had been generated using an application available on the internet. In particular, the question of the copyright in the image under intellectual property law was discussed. According to the AI Act, such applications (unless freely accessible under an open-source licence) qualify as general-purpose AI models (without systemic risk). Therefore, anyone wishing to place such AI models on the EU internal market must comply with the requirements of this Regulation, such as the provision of technical documentation as well as information on the integration of the AI model into other AI systems, on compliance with copyright and on the training data used. In addition to the requirements for placing such AI models on the market, there are certain requirements as to the output generated by their use, i.e. in the present case the digital image of the painting dog. Under the AI Act, this image must include a reference to its artificial generation.

AI in medical technology and the AI regulation

In blog 4, we discussed the fact that medical technology is already heavily regulated today. The existing regulation aims to ensure a socially acceptable risk-benefit ratio for medical devices. This raises the question of whether additional AI-specific regulation is still required for medical technology and what specific impact the AI Act in force in the EU is likely to have on the corresponding industry in Switzerland. If a medical device using AI is classified as a high-risk AI system, it must necessarily comply with the requirements of the EU regulation for medical devices, namely those relating to registration, conformity assessment (including by third parties), technical documentation and risk management. Against this background, it is desirable that experience from the MedTech sector be incorporated into any AI regulation in Switzerland. The most relevant stakeholders should be involved in this process at an early stage, so that sector-specific expertise can be brought in and unnecessarily complicated and detailed regulation, as is the case with EU law, can be avoided.

Conclusions

With the adoption of the AI Act, the EU has taken on a pioneering role. The definition of AI systems is inspired by that of the OECD, but is more general and thus covers a larger number of ICT applications. There is presumably a broad consensus in Switzerland that the AI practices prohibited under the AI Act should not be authorized in this country either. No one in Europe wants a situation like that in China, for example, with social scoring or similarly intrusive AI applications. However, various exclusions, particularly those for the military, administration and public authorities, pave the way for intrusive state practices under the AI Act. It is all the more important that in those areas the benefits and risks of AI applications be weighed up carefully.

In many areas (e.g. automated vehicles, medical devices), strict regulatory requirements have long been in place and impose challenging benchmarks on distributors. One should therefore avoid a situation in which the implementation of the AI Act brings about unnecessary additional hurdles that foster bureaucracy without effectively increasing safety for society. In Switzerland at any rate, AI legislation should be approached pragmatically and cautiously. Selective bans on particularly harmful AI applications and risk-based sectoral requirements that fill any gaps in existing regulation seem to us to be the proper way forward.
