The following accident was recently reported on social media: a jogger was injured by a Robodog during his Sunday run along a riverside footpath. The Robodog had been developed by a bioengineer and trained to move around outdoors without an (electronic) leash. On the day of the accident, the bioengineer was out on a training walk with his Robodog. When the jogger, approaching from behind, tried to overtake the Robodog on the left, the Robodog suddenly swerved to the left to avoid a large stone lying on the footpath, cutting across the jogger's path. The jogger reflexively tried to dodge, but his right foot caught on the Robodog; he lost his balance, fell down the river embankment and struck his head hard against a stone block. He suffered a head injury that significantly impaired his cognitive abilities and ultimately led to partial disability.
The accident was seized upon by a local politician as evidence that responsibility for the use of robots must be regulated by law: AI, so the argument went, poses a real danger to society, nobody really understands what goes on inside neural networks, and such "black boxes" are not socially acceptable.
Is this view correct? A comparison of the (artificial) Robodog with an ordinary (biological) dog will help.
If, in our (fictitious) case, the jogger had been brought down and injured by a Labrador, legal responsibility would obviously lie with the animal's owner. The owner bears the risk, and the responsibility, if the dog harms other people through instinctive misbehaviour. Anyone who keeps a dog is therefore obliged to raise and train it in such a way that it does not endanger or harm people. Depending on the specific circumstances, the dog must be kept on a lead outdoors and, if necessary, wear a muzzle. When assessing the owner's liability, it is irrelevant what happened in the dog's brain, i.e. in its neural network. The harmful behaviour of a dog is attributed to its owner, regardless of the biological and chemical processes inside the dog's skull. For the owner's liability, it therefore does not matter that the dog's brain is an opaque, incomprehensible black box.
With this in mind, it is surprising that in many discussions artificial neural networks are characterised as black boxes and presented as inherently dangerous to society because they lack transparency and comprehensibility. A more sober view would instead conclude: if attributing responsibility for damage caused by inexplicable processes in the biological neural network of a dog has not been a problem for centuries, why should non-transparent processes in the artificial neural network of a Robodog pose a new problem for liability? In both cases, the owner should be liable for the harmful behaviour of their dog or Robodog.
From this perspective, the problem is not the lack of transparency of certain processes in (biological or artificial) neural networks, since attribution and responsibility must rest on the same principles in both cases for the sake of consistency. The problem lies rather in the tendency to try to grasp complex phenomena (such as AI) from the narrow perspective of a single scientific discipline, an approach that is neither expedient nor promising. In other words: Labrador and Robodog should go for a walk together from time to time, with or without a lead, to encourage their owners to engage in an interdisciplinary exchange of ideas.
Anyone who keeps an animal is liable for any damage caused by the animal unless he can prove that he exercised all due care and attention in keeping and supervising the animal or that the damage would have occurred even if this care had been exercised.
The blog posts in this series offer an interdisciplinary view of current AI developments from a technical and humanities perspective. They are the result of an ongoing exchange and collaboration with Thomas Probst, Emeritus Professor of Law and Technology (UNIFR), and SATW member Roger Abächerli, Lecturer in Medical Technology (HSLU). With these monthly contributions, we aim to provide a factually neutral analysis of the key issues that arise in connection with the use of AI systems in various application areas. Our goal is to explain individual aspects of the AI topic in an understandable and technically sound manner without going into too much technical detail.