Since the shocking arrival of generative artificial intelligence[1] in the media world, triggered by ChatGPT from OpenAI, everyone has been talking about artificial intelligence (AI). Latent fears are spreading among the population, education is having to reinvent itself - as it did with the introduction of the pocket calculator - and in the healthcare sector there is a fear that robots will soon be treating us unsupervised. This is despite the fact that research on and application of AI date back to the middle of the last century, so the field is not actually new. European legislators are currently trying to counteract these emotions in a reassuring way and have enacted the world's first law on AI, the EU AI Act.[2]
In our podcast[3], we came to the conclusion that state regulation of AI should be based on a socially balanced weighing of benefits and risks. In the last blog post[4], we looked at the potential impact of the EU AI Act on various applications. In many of the examples, it is unclear how the regulation is to be implemented and what impact such regulatory interventions will have on the various areas.
In this article, we examine whether regulatory requirements such as the EU AI Act are proportionate, or even necessary, in their current form for the medical devices sector. The healthcare sector is particularly important to society and is therefore a very sensitive and already heavily regulated field. Two statements can be made in advance. 1) Because the EU AI Act defines AI systems very broadly, it can be difficult to determine which systems are effectively affected[5]. A technical expert, however, can usually decide quickly whether a specific case involves a relevant AI system at all and, if so, what its potential risk is. 2) There are still many unanswered questions about how the EU AI Act should be implemented for high-risk AI systems, a category that includes MedTech[6]. This is unsettling for distributors - especially for manufacturers of medical devices who have been using AI successfully and without problems for years. And uncertainty is known to be "poison" for innovation.
The Medical Device Regulation (EU MDR)[7] is a good example of the consequences that EU harmonisation legislation can have for Switzerland.
There is currently no specific AI law in Switzerland. Such a law will probably be based on the definition of the OECD[8] and of the Council of Europe Framework Convention[9], in whose drafting Switzerland played an active role. This definition has already been adopted in slightly modified form in the EU AI Act[10]. The proposed Californian "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" also uses a slightly modified version of it[11]. A potential harmonisation is therefore emerging here. The EU AI Act has been in force in the EU since August 2024. For medical devices, the basic regulatory path is already clear, with or without the AI Act. A medical AI system comprises software that contains the AI elements. Such software (stand-alone or combined with a device) is regulated by the aforementioned EU MDR: it is assigned to a risk class[12], and in addition a software safety class is determined[13]. The benefit-risk ratio is a fundamental safety and performance requirement of the EU MDR[14]: product safety is achieved by controlling the risks. AI is an obvious risk in a medical technology system[15]. The linchpin is therefore risk management[16], which is also a requirement of the EU AI Act[17]. Topics such as risk management, cybersecurity, AI and quality are covered in MedTech by corresponding management systems, which can also be integrated with one another[18]. These are based on internationally recognised standards, most of which have already been harmonised by the EU.[19]
The question therefore arises as to why the EU AI Act is needed for the MedTech sector at all. All the requirements of the EU AI Act - risk management system, data and data governance, technical documentation (TD), record-keeping obligations, registration, transparency, etc. - are already demonstrably covered by the EU MDR. Implementation could mean that a medical device manufacturer has to have its management systems certified by two different notified bodies (NBs) and its product TD assessed twice, separately, by an NB, because very few certification bodies are likely to be accredited for both areas (i.e. medical technology and AI). The effort involved is considerable. Even if a European, MedTech-accredited NB manages to become accredited for the EU AI Act as well, the corresponding "scope" will immediately entail restrictions. The same applies to the obligation to register in a separate European database for AI systems: EUDAMED already exists for MedTech.
It is therefore to be feared that the EU AI Act will lead to a great deal of duplication for MedTech companies that use AI. At most, one or two large NBs will be able to offer both at the same time. This moves innovative MedTech far away from a conformity assessment procedure and very close to an authorisation procedure, which would represent a relevant paradigm shift for market access. In addition, the EU regulations exacerbate the problems at the certification level, as access to an accredited body (NB) becomes a bottleneck. In Switzerland, there are no longer any accredited bodies for MedTech; we have become dependent on the EU. If only a few large accredited NBs remain on the European market, the critically relevant expertise will flow out of Switzerland to other countries. The EU MDR has already demonstrated this.
Large companies can absorb such regulatory hurdles to a certain extent (and pass the resulting costs on to the product). For small and medium-sized companies, however, they are a real challenge. And for start-ups and spin-offs from our colleges and universities, they can be prohibitive. It is fair to ask why we invest so much money in invention if we then hinder innovation, i.e. the successful commercial realisation of an invention for the benefit of society. A similar threat looms with the EU AI Act, as described above. It would be desirable for Switzerland to learn from history (in this case, from the experience with the EU MDR).
The USA is taking a different approach[20]. The AI law proposed in California is primarily intended to regulate the highest risks. Medical devices are only rarely affected by the proposed legislation because they are unlikely to fulfil the requirements of a "covered model"[21] - in contrast to large generative AI models. The FDA, the US regulatory authority for medical devices, also continuously publishes supplementary guidance documents, thereby providing clarity and certainty on the topic of "AI/ML-enabled medical devices". It is therefore not surprising that many AI-based medical devices are successfully placed on the US market without any problems[22]. Europe, on the other hand, is years behind. Swiss MedTech manufacturers are therefore entering the US market first with affected products and are demanding acceptance of FDA-approved products in Switzerland[23]. Certain products are no longer available in Switzerland at all because of the high regulatory requirements: our market is too small, so the expense is not justified.
MedTech has had good experience with management systems for years. International standards also exist for AI, in particular for management systems[24]. This approach is already being implemented successfully for cybersecurity in Europe[25] and in Switzerland[26]. So why not choose the path that comes from the field and is internationally harmonised, and make targeted adjustments later - instead of trying to cover everything politically from the outset, as is happening with AI? The EU's approach throws the baby out with the bathwater and leads to a relevant loss of expertise and experience for Switzerland - as the experience with the EU MDR shows. In a subject as important as AI, this would be fatal for Switzerland as a centre of education.
Switzerland should therefore not impose any additional rules for AI in the MedTech sector and should learn from the negative experience with the EU MDR. A ban on certain AI systems does also seem sensible in Switzerland[27]. However, it is still unclear how detailed a specific AI law should be and which requirements are effectively important for the protection of society. For medical technology, at any rate, it can be said with a clear conscience that all challenges are already covered and that implementing the EU AI Act in this area will probably only produce more paper. The consultants required for this, whose numbers have mushroomed since the EU MDR, will be delighted. In such situations there are always winners (and consequently losers). Who will it be this time?
[1] Generative AI differs from discriminative AI in that it does not recognise patterns or assign inputs to learned patterns (e.g. diagnostic AI in MedTech that detects a disease), but instead creates content such as text, images, music, audio and video with the help of statistical methods.
[2] Regulation (EU) 2024/1689 (eur-lex.europa.eu/eli/reg/2024/1689/oj)
[3] A sober look at the myth of AI: the impact of artificial intelligence on humans (www.satw.ch/de/news/ein-nuechterner-blick-auf-den-mythos-ki-auswirkungen-von-kuenstlicher-intelligenz-auf-den-menschen)
[4] How should "artificial intelligence" be regulated? - A balancing act between innovation and socially acceptable risk (www.satw.ch/de/news/wie-soll-kuenstliche-intelligenz-reguliert-werden-eine-gratwanderung-zwischen-wohlstandsstiftender-innovation-und-gesellschaftsvertraeglichem-risiko)
[5] Regulation (EU) 2024/1689, Article 96(f)
[6] Regulation (EU) 2024/1689, Article 6, point 1
[7] Regulation (EU) 2017/745 (eur-lex.europa.eu/legal-content/DE/TXT/)
[8] What is an AI system? (oecd.ai/en/wonk/definition)
[9] Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law, Article 2 (rm.coe.int/1680afae3c)
[10] Regulation (EU) 2024/1689, Article 3, point 1 (eur-lex.europa.eu/eli/reg/2024/1689/oj)
[11] CA SB 1047, 22602, (b) (leginfo.legislature.ca.gov/faces/billNavClient.xhtml)
[12] Regulation (EU) 2017/745, Annex VIII, Rule 11 (eur-lex.europa.eu/eli/reg/2017/745/oj)
[13] EN 62304 (IEC 62304), clause 4.3
[14] Regulation (EU) 2017/745, Annex I, point 1 (eur-lex.europa.eu/eli/reg/2017/745/oj)
[15] Regulation (EU) 2024/1689, Article 6, point (1), (a) and Annex I, Section A (eur-lex.europa.eu/eli/reg/2024/1689/oj)
[16] Regulation (EU) 2017/745, Annex I, points 1-5 (eur-lex.europa.eu/eli/reg/2017/745/oj) and EN ISO 14971
[17] Regulation (EU) 2024/1689, Article 9
[18] www.sqs.ch/de/blog/ims-was-ist-ein-integriertes-managementsystem
[19] EN ISO 13485, EN ISO/IEC 27001, ISO/IEC 42001 and EN ISO 14971
[20] CA SB 1047 (leginfo.legislature.ca.gov/faces/billNavClient.xhtml)
[21] CA SB 1047, 22602, (c) (1) (leginfo.legislature.ca.gov/faces/billNavClient.xhtml)
[22] https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
[23] www.swiss-medtech.ch/sites/default/files/2023-05/Swiss%20Medtech_Position_Motion_20.3211_DE.pdf
[24] ISO/IEC 42001
[25] Regulation (EU) 2022/2555 (eur-lex.europa.eu/legal-content/DE/TXT/)
[26] www2.post.ch/-/media/post/ueber-uns/dokumente/factsheet-zertifizierungen.pdf, www.swisscom.ch/de/about/governance/risikomanagement/iso-iec-managementsystem.html, www.bsi.bund.de/DE/Themen/Unternehmen-und-Organisationen/Standards-und-Zertifizierung/Zertifizierung-und-Anerkennung/Zertifizierung-von-Managementsystemen/ISO-27001-Basis-IT-Grundschutz/iso-27001-basis-it-grundschutz_node.html, www.ar.admin.ch/de/informationssicherheit, https://swissgrc.com/staerkung-der-digitalen-resilienz-das-informationssicherheitsgesetz-des-bundes-isg/ etc.
[27] How should "artificial intelligence" be regulated? - A balancing act between innovation and socially acceptable risk (https://www.satw.ch/de/news/wie-soll-kuenstliche-intelligenz-reguliert-werden-eine-gratwanderung-zwischen-wohlstandsstiftender-innovation-und-gesellschaftsvertraeglichem-risiko)