Today's medicine is based, among other things, on major advances in medical technology: healthcare professionals can look inside the human body with X-rays, measure the electrical activity of the heart, or fit implants that support the body in many ways.
Currently, artificial intelligence (AI) promises further ground-breaking advances in many areas of medicine. Integrated into medical devices, it is intended to support doctors, for example, in the early detection of a potentially dangerous breast tumour. Questions that were long considered settled are once again being discussed intensively and on a broader scale. Who is responsible if an AI fails to detect a malignant tumour (a false negative diagnosis) or incorrectly classifies a harmless abnormality as dangerous (a false positive diagnosis)[1]?
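To make these two error types concrete, here is a minimal sketch in Python with invented screening numbers, showing how they are typically quantified as sensitivity (the share of malignant tumours that are detected) and specificity (the share of harmless findings that are correctly left alone):

```python
# Minimal sketch with invented numbers: quantifying the two error types
# of a hypothetical tumour-detection AI on 1000 screening images.
true_positives  = 45   # malignant tumours correctly flagged
false_negatives = 5    # malignant tumours missed (false negative diagnosis)
true_negatives  = 900  # harmless findings correctly left alone
false_positives = 50   # harmless findings wrongly flagged (false positive diagnosis)

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # 90%
print(f"Specificity: {specificity:.0%}")  # 95%
```

The trade-off between the two is exactly where the responsibility question becomes delicate: lowering one error rate usually raises the other.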
Since 2016, the number of approved AI-based medical devices has risen sharply, primarily in the field of medical imaging in the USA. Devices from the cardiovascular field follow in second place, at some distance[2]. In contrast to regulation in the USA, the new European legislation places much higher requirements on medical device manufacturers, which is often perceived as a hurdle or even an obstacle. Indeed, only a very small number of medical devices that use AI have been brought to market in the European Union (EU) under the corresponding new regulation, the EU MDR. The recently adopted EU AI Act represents a potential additional regulatory hurdle for AI-based medical devices. The question therefore arises as to how much regulation is needed to ensure the safety of medical devices effectively (i.e. in favour of the patient and not just on paper[3]) without jeopardising their benefits or stifling innovation.
At the same time, futuristic images from the movie industry give us the uneasy feeling that robots will soon be operating on us, or that AI systems will make diagnoses that doctors can no longer understand. Media reports on the subject are creating a kind of “AI myth” in society. Understandably so, because only a few people really understand what AI does in medical technology. Because it relies on neural networks[4], AI is perceived as a black box that cannot be looked into. Its results are based on vast amounts of data (buzzword ‘big data’) and complex computing processes. As a result, it is almost impossible for most people to understand what is actually happening, technically and mathematically.
But is this complexity of AI-based medical devices really a problem? The question has hardly ever arisen with conventional medical devices. Which doctor or patient, for example, can explain how a magnetic resonance imaging (MRI) scanner works? An MRI scanner is so complex that hardly anyone can explain it in all its aspects. Nevertheless, the device is used routinely. Healthcare professionals trust the manufacturers (and therefore the engineers who developed and evaluated these devices) just as patients trust doctors to make a diagnosis. And that is a good thing, because experience shows that, when it comes to complex challenges or systems, professional specialisation and a benefit-centred, risk-based approach[5] have proven their worth.
To illustrate the potential of AI in medicine, let us look at medical diagnostics and imagine that we are developing an innovative clinical thermometer. If our product fulfils a medical purpose, we must declare it as a medical device (see box). Correctly assessing whether this is the case is the first challenge. After all, what is a fever and how is it diagnosed? As the name suggests, a thermometer measures the temperature of the body at defined locations. Since the course over time (a so-called time series) plays a key role, the temperature is usually measured not just once but several times in succession. Depending on the temperature curve, medicine distinguishes between several types of fever – for example continuous, remittent, intermittent, recurrent or undulant fever.
Fever is a sign of an ongoing infection – the increased temperature is a natural defence reaction of the body. In heart and lung patients, even moderate fever can be dangerous because of the increased heart and respiratory rate. Extremely high fever (typically over 41 °C) can cause organs to malfunction or even fail, and in children fever can provoke febrile convulsions. So not all fevers are the same. If our innovative thermometer could process more information than conventional models – for example by analysing body temperature over time – it would be possible to make more accurate diagnoses and prescribe more targeted treatment. In many cases, preventive medication could probably be dispensed with.
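To give an idea of what "analysing body temperature over time" could mean in practice, here is a deliberately simplified, rule-based sketch in Python. The thresholds and pattern labels are purely illustrative and are not clinical guidance:

```python
# Simplified sketch: classifying a fever pattern from a temperature time series.
# Thresholds are illustrative only, not clinical rules.

def classify_fever(temps_celsius: list[float]) -> str:
    """Very rough pattern check on a series of temperature readings."""
    if max(temps_celsius) < 38.0:
        return "no fever"
    swing = max(temps_celsius) - min(temps_celsius)
    if min(temps_celsius) < 37.0 and swing > 1.5:
        return "intermittent (temperature repeatedly returns to normal)"
    if swing > 1.5:
        return "remittent (large swings, but never back to normal)"
    return "continuous (persistently elevated, small swings)"

# Example: readings taken every few hours over one day
print(classify_fever([36.8, 38.9, 37.2, 39.1, 36.9]))  # -> intermittent ...
```

An AI-based product would learn such distinctions from data rather than from hand-written thresholds, but the underlying task is the same: recognising a pattern in the temperature curve.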
We therefore equip our innovative clinical thermometer with AI, which allows it to analyse the course of the body temperature and tell us what to do in a specific case. The associated smartphone app can record the medication taken, symptoms and other information. By analysing the anonymised data, the AI continues to learn and can, over time, derive rules as to which therapy is best for whom in a particular situation[6].
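What such an anonymised record might look like is sketched below; the field names and categories are invented for illustration and are not taken from any real product:

```python
# Hypothetical sketch of one anonymised record from such an app.
# All field names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AnonymisedRecord:
    temperatures_celsius: list[float]                     # temperature curve over time
    medication: list[str] = field(default_factory=list)   # e.g. ["paracetamol"]
    symptoms: list[str] = field(default_factory=list)     # e.g. ["cough", "chills"]
    age_group: str = "adult"                               # coarse category instead of birth date

record = AnonymisedRecord([37.8, 38.6, 39.0], ["paracetamol"], ["chills"])
```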
Deriving rules from collected data in this way is much like what doctors already do: they write reports on the course of a disease, carry out studies with a sufficiently large number of subjects, publish the results in scientific medical journals and derive rules for the healthcare system from them. However, if the available data does not adequately reflect reality, the AI can draw false conclusions, much as a study can suffer from statistical bias or an inadequate design. Yet the rules can be continuously improved through ongoing progress and peer review. The healthcare system learns and benefits, and so, ultimately, do patients – just as has been the case so far.
For some time now, healthcare has been shifting from individual expert opinion to evidence-based medicine: subjective assessments by specialists are increasingly being replaced by decisions whose effectiveness is supported by empirical evidence. This evidence is typically generated by means of scientific studies, and the resulting knowledge is then implemented in everyday clinical practice. This ensures the most up-to-date and innovative healthcare possible. In the pharmaceutical industry, for example, successful studies are required before a drug may be launched on the market. Patients and doctors trust the study results and accept a certain black-box character.
AI-based medical device development is organised in a comparable way: first, data is collected, from which rules are derived – with the help of machine learning (ML) – and these rules are then applied to new individual cases. They can in turn be checked using suitable test methods. In both cases, potential risks should be identified and mitigated at an early stage. The Medical Devices Ordinance (see box) already requires risks to be assessed preventively and continuously, which also applies to the use of AI in medical devices. The benefits must outweigh the remaining residual risk. This risk-based approach has long been established in the MedTech industry. It should also be evident that zero risk exists only in theory. The same applies to taking approved medicines: any given substance has a main pharmacological effect as well as undesirable side effects[7].
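The workflow described above (collect data, derive rules with ML, check the rules with suitable test methods) can be sketched in a few lines of Python. The data here is synthetic and scikit-learn is assumed purely for illustration:

```python
# Sketch of the workflow: collect data, let ML derive the rules,
# then test the rules on cases the model has never seen.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

rng = np.random.default_rng(seed=42)

# 1) "Collect" data: synthetic temperature curves (5 readings per patient)
#    and an invented label (1 = needs treatment, 0 = does not).
X = rng.normal(loc=37.5, scale=1.0, size=(500, 5))
y = (X.max(axis=1) > 38.5).astype(int)

# 2) Hold back part of the data for testing before any rule is derived.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# 3) Derive rules with ML (here: a small decision tree).
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# 4) Check the rules on unseen cases; recall (~sensitivity) is the share
#    of "needs treatment" cases the model actually finds.
print("Sensitivity on unseen cases:", recall_score(y_test, model.predict(X_test)))
```

The essential point is that the rules are evaluated on cases that played no part in deriving them, the software analogue of validating a study's findings on new patients.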
Unlike humans, AI and ML can automatically, using advanced mathematics, uncover complex relationships that would otherwise remain hidden. They are much faster, more efficient and, above all, reproducible – like a calculator that helps us with complex computations. Once trained and "frozen"[8], an AI always produces the same output for identical inputs, regardless of the time of day or mood, which is not the case with humans. Nor does AI tire. AI therefore supports and complements evidence-based medicine in a way that we will not want to do without in the future. This does not mean, however, that we should rush headlong and emotionally into AI-based MedTech; rather, we should use the currently available methods wisely to create a positive benefit-risk ratio that serves the patient.
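The "frozen" property can be illustrated with a toy model in Python; the weights are arbitrary numbers chosen for the example:

```python
# Minimal sketch of the "frozen" property: once training is finished and the
# parameters are fixed, identical inputs always give identical outputs.
frozen_weights = [0.8, -0.3, 1.2]          # parameters fixed after training

def frozen_model(inputs: list[float]) -> float:
    # A pure function of its inputs: no randomness, no hidden state, no "mood".
    return sum(w * x for w, x in zip(frozen_weights, inputs))

reading = [38.4, 37.9, 39.1]
assert frozen_model(reading) == frozen_model(reading)  # same input, same output
```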
An AI that makes diagnosis A today and diagnosis B tomorrow in the same situation, for no discernible reason, cannot be our goal. Technology should serve people and not the other way round. This has always applied to medical devices in the traditional sense and should continue to apply in the future, including to AI-based medical devices. It is therefore up to us humans to decide whether AI in medical technology becomes a curse or a blessing – we can influence this, and it is in our hands.
[1] For the sake of simplicity, in this blog post we will only focus on the field of diagnostic medical technology.
[2] www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
[3] See the Implant Files (https://www.bmj.com/content/363/bmj.k4997), which caused far less of a stir than the breast implant (PIP) scandal (https://de.wikipedia.org/wiki/Poly_Implant_Proth%C3%A8se).
Author's note:
It is now relatively easy to show that even stricter regulation such as the EU MDR cannot prevent such extreme cases. However, a discussion of this would go far beyond the scope of this blog.
[4] This refers to deep neural networks (DNNs).
[5] This refers to the acceptable risk-benefit ratio (see Risk management in medical technology - www.iso.org/standard/72704.html).
[6] That is, in the sense of personalised medicine.
[7] In medical technology, the main effect of a medication corresponds to the intended purpose of the device, and the side effects correspond to its residual risk.
[8] «Frozen artificial intelligence»: an AI model whose parameters are no longer changed after training.
Switzerland defines medical devices in the Medical Devices Ordinance (MedDO), which is based on the Therapeutic Products Act (TPA), which in turn is founded on Article 118 (Protection of Health) of the Swiss Federal Constitution. This definition follows the European regulation (Art. 2 of the EU MDR) and largely overlaps with the definition used by the US Food and Drug Administration (FDA).
According to this definition, a medical device fulfils a so-called medical purpose, for example the diagnosis of an illness or the treatment of injuries. The definition also covers products for contraception, products specifically designed for cleaning, disinfecting or sterilising devices that fulfil a medical purpose, their accessories, and certain products without a medical purpose that are listed separately.
The MedDO therefore covers a wide range of quite different medical devices and their accessories: simple products such as an adhesive plaster, a hospital bed or a wheelchair, but also extraordinarily complex ones such as an MRI scanner or a defibrillator.
The blog posts in this series offer an interdisciplinary view of current AI developments from a technical and humanities perspective. They are the result of a recurring exchange and collaboration with Thomas Probst, Emeritus Professor of Law and Technology (UNIFR), and SATW member Roger Abächerli, Lecturer in Medical Technology (HSLU). With these monthly contributions, we endeavour to provide a factually neutral analysis of the key issues that arise in connection with the use of AI systems in various application areas. Our aim is to explain individual aspects of the AI topic in an understandable and technically sound manner without going into too much technical detail.