A realistic look at the AI myth: Automated cars injure pedestrians - what now?

Translated with DeepL

Automated vehicles have been a much-discussed and promising topic in specialist circles for several years. Accordingly, numerous national and international expert committees are working intensively not only to enable the safe use of such vehicles on the road, but also to guarantee that safety as far as possible.

This work receives hardly any public recognition, probably because the media consider it of too little interest to the general public[1]. Instead, speculative considerations tend to take centre stage in media coverage of automated driving.

In some cases, a rather futuristic - or even eschatological - attitude is adopted, recognising in the automated vehicle the harbinger of a progressive replacement of humans by machines. This is because supposedly "self-driving" vehicles using "artificial" intelligence[2] utilise a highly developed combination of hardware and software to control vehicles and make drivers superfluous. This view is primarily fuelled by two factors:

  • On the one hand, the marketing departments of car manufacturers like to convey the impression in their public statements that the breakthrough of so-called "self-driving" vehicles is imminent on a broad front.
    However, they regularly fail to make it clear that these vehicles still require a driver in the car itself or online monitoring by an external supervisor, and can and may drive themselves only temporarily under more or less restrictive traffic conditions (e.g. motorway driving at speeds of up to 60 km/h). An actual "self-driving" vehicle that fully controls itself in all traffic situations without a driver or supervisor is still a long way off; it cannot currently be the subject of a rational, knowledge-based discussion, but at best of unsubstantiated speculation.
  • On the other hand, technologically and scientifically orientated circles often use a vocabulary in public that gives the impression that an "intelligent whole" or even an "intelligent being" is created from a combination of hardware (machines, robots) and software (AI), which is supposed to have human or even superhuman characteristics and abilities.
    Such systems are then, more or less explicitly, linguistically ascribed not only the ability to judge and decide, as well as feelings and sensations (e.g. hunger), but also rationality, intellect and consciousness[3]. This neglects the fact that these terms carry a predominantly humanistic character and connotation in society, which makes them ill-suited to inorganic matter and more likely to mislead the public than to provide clarity.

Overly optimistic marketing promises about the technical capabilities of automated vehicles and dubious conceptual analogies between man and machine thus contribute to automated driving often being presented in the media in a distorted and unrealistic way.

As a result, there is hardly any information about what so-called "self-driving" vehicles can actually do technically and what they are legally allowed to do; instead, the media prefer to speculate about possible accident scenarios and their consequences. These speculations typically centre on the question of whether a "self-driving" car should run over a first-grader or a retired senior citizen at a pedestrian crossing if it can no longer stop in time. The focus is usually on an ethical perspective.

What should we make of such media reports and to what extent do they contribute to a better social understanding of the advantages and disadvantages of automated driving? The answer to this question may be inferred from the following observations and comments:

 

  1. The ethical dilemma situation of whether a "self-driving" car should run over the child or the elderly person at the pedestrian crossing does not create any concrete gain in knowledge that would improve society's understanding of automated driving. Instead, the dilemma question leads to uncertainty and fosters diffuse fears.
    A) The dilemma situation rests on the incorrect assumption that, at the moment in which the driver or the "self-driving" vehicle supposedly has to decide who is to be run over, the future behaviour of the two endangered pedestrians is known. This is precisely not the case. How the two pedestrians will behave is not known in advance, as both have the option of reflexively stopping or swerving at the last moment to avoid a collision with the car. Anyone who therefore believes that, in the face of unknown future behaviour of people, decision criteria for weighing legal interests must be defined as a precaution with a view to potential accidents accepts the risk of "pre-programming" accidents. In other words, they cause accidents that would not otherwise have occurred, because in reality one of the pedestrians would have made a reflexive jump to the side and the car could have passed on the corresponding half of the road without an accident. This may constitute criminal culpability on the part of the car manufacturer.
    B) The dilemma situation is speculative or hypothetical in nature and has no practical significance for real traffic. In the relevant literature and case law, there is no known case in which a driver has ever consciously decided to run over a child or an elderly person and been held accountable for doing so. It is therefore not apparent why this should now become a concrete problem with automated vehicles.
    C) Ethics itself is unable to provide an answer to the dilemma question raised, as it is not allowed to evaluate human lives and rank or prioritise them. This would be unethical. However, concrete transport problems cannot be solved with unanswerable philosophical questions.
  2. The authorisation of motor vehicles for road traffic and liability in the event of road traffic accidents are subject to a sophisticated network of international, European and national legal regulations that have been established over many years of practice.
    Accordingly, technical standards and legal norms are decisive for the safe use of motor vehicles. Ethical judgements in the sense of "good" or "bad" are not safety-relevant factors. What should be required of drivers is legally compliant behaviour, not philosophically or ethically "correct" behaviour. The latter is not helpful because ethical criteria are not sufficiently justiciable to oblige drivers to pay compensation or to impose fines or prison sentences on them. In other words, drivers do not have to be "do-gooders", but law-abiding drivers.
  3. Contrary to a widespread opinion, the origin of which cannot be determined, accidents involving automated vehicles do not lead to any fundamentally new liability problems.
    In practice, accident victims regularly have no specific interest in a claim for damages against the driver, as they can assert their claim directly against the vehicle owner - to whom the behaviour of the driver is directly attributed - or against the driver's liability insurer. As a result, it makes no practical difference to the injured party whether the driver was driving the vehicle or the vehicle was driving itself in the accident.
    In both cases, it will ultimately be up to the liability insurer to decide whether it wishes to take recourse against the car manufacturer for the compensation payments it has made (to the accident victim) if a malfunction of the (automated) vehicle (partly) caused the accident. If the liability insurer successfully takes recourse against the car manufacturer, the latter will in turn be able to take recourse against any supplier of a defective car component.
  4. Finally, the criminal liability for accidents involving automated vehicles that temporarily drive themselves will depend crucially on whether the driver in the car or the external supervisor had a reasonable opportunity, under the specific circumstances, to prevent the accident. Driverless vehicles that can drive completely autonomously on the roads without any human supervision will not exist in the foreseeable future. If such vehicles are one day actually licensed for our roads, there will be fatal accidents for which no driver or supervisor can be penalised.

    However, this is not new. Our society has long accepted, nolens volens, that a suicidal pilot, for example, cannot be held responsible for the deaths of the passengers in a plane crash.
    Accidental deaths without criminal sanctions are unavoidable in very rare situations. Every technology harbours certain risks, and this also applies to automated driving. Anyone who places idealistically high demands and expectations on automated vehicles therefore runs the risk of missing the "train to the future" of automated driving.


    [1]     However, the assumption that the public is unlikely to be interested in the safety of motor vehicles seems doubtful, as road accidents are the subject of media coverage practically every day.

    [2]     The question of how meaningful this term actually is will not be explored further at this point.

    [3]     Depending on the circumstances, these human qualities are understood to be merely "emulated" or "simulated" in so-called "intelligent" systems.

Reference to the legislation on liability for motor vehicles, Art. 58 Swiss Road Traffic Law (SRT)

1 If a person is killed or injured or property damage is caused by the operation of a motor vehicle, the owner shall be liable for the damage.

2 If a road accident is caused by a motor vehicle that is not in operation, the owner is liable if the injured party proves that the owner or persons for whom he is responsible were at fault or that the motor vehicle was defective.

3 At the judge's discretion, the owner is also liable for damage resulting from assistance rendered after accidents involving his motor vehicle, provided that he is liable for the accident or that the assistance was rendered to himself or to the occupants of his vehicle.

4 The owner shall be liable for the fault of the driver and of any assisting persons in the same way as for his own fault.

The blog posts in this series offer an interdisciplinary view of current AI developments from a technical and humanities perspective. They are the result of a recurring exchange and collaboration with Thomas Probst, Emeritus Professor of Law and Technology (UNIFR), and SATW member Roger Abächerli, Lecturer in Medical Technology (HSLU). With these monthly contributions, we endeavour to provide a factually neutral analysis of the key issues that arise in connection with the use of AI systems in various application areas. Our aim is to explain individual aspects of the AI topic in an understandable and technically sound manner without going into too much technical detail.