This blog post looks at the safety and ethical issues raised by self-driving cars, and at the question of who is liable when an accident happens, from several perspectives.
Minority Report (2002) is Steven Spielberg’s film adaptation of Philip K. Dick’s 1956 short story of the same name. Set in Washington, D.C. in 2054, it depicts a society in which a “pre-crime” system arrests would-be murderers before they act. The film portrays a cold, bleak future, and Spielberg filled it with the high-tech devices he imagined would exist by then. Interestingly, many of those cutting-edge technologies are slowly becoming reality.
Among the film’s many memorable scenes, the most striking is the one in which the car drives itself while Tom Cruise’s character, busy shaking off his pursuers, cannot take the wheel.
When the movie was released, autonomous driving was seen as a technology of the distant future. But since Google officially announced its self-driving car project in 2010, automakers and IT companies alike have been researching and investing heavily in autonomous vehicles, and commercialization is gradually becoming a reality. As autonomous driving has become a major trend, manufacturers are paying close attention to the field.
However, autonomous vehicles are not welcomed by everyone, and some consumers question whether they can be trusted. In May 2016, for example, a driver using Tesla’s autonomous driving function died when his car collided with a trailer crossing its path; the system failed to distinguish the white trailer from the bright sky. The US government later concluded that Tesla’s Autopilot was not defective and that the driver was at fault for failing to react before the collision. This allowed Tesla to avoid legal liability for the first fatality involving an autonomous driving system.
Accidents involving Tesla’s autonomous driving system have nevertheless continued to occur. According to statistics from the National Highway Traffic Safety Administration (NHTSA), 736 crashes involving driver-assistance systems were reported in the United States over the four years from 2019, and about 91% of them involved Tesla’s Autopilot and Full Self-Driving systems. As a result, consumers have doubts about the safety of autonomous driving, and surveys by the US market research firm J.D. Power show growing distrust of the technology.
The Tesla accident did not only affect consumer sentiment. Although countries have been busy building regulatory frameworks since then, controversy continues over who is liable for accidents involving autonomous vehicles, how insurance claims should be handled, and what legal rules should apply. Suppose, for example, that you are riding in a fully autonomous vehicle that performs every driving action on its own once the destination is set. If an accident occurs, should the occupant be held responsible? And if not, who should be: the car owner, the manufacturer, or the state that regulates the system? The insurance industry argues that manufacturers should bear responsibility because they are the ones in a position to control the risk of accidents; the automobile industry counters that making manufacturers bear 100% of the responsibility for traffic accidents goes too far. Germany, for instance, does not allow Tesla’s Autopilot function to be enabled on the grounds that it is an incomplete, test-stage feature. Meanwhile, South Korea, Japan, and Europe are drawing up standards defining the conditions under which an autonomous vehicle may overtake or change lanes without the driver operating the steering wheel, standards that each country will then adopt.
The problem with autonomous vehicles is that, beyond legal liability, ethical judgment is also involved. A well-known example is the thought experiment presented on the TED-Ed YouTube channel. Suppose an autonomous vehicle must avoid an object falling from the truck ahead of it. It has three choices: go straight and hit the object, swerve right and hit a motorcycle, or swerve left and hit an SUV. A human driver would decide by reflex, but an autonomous vehicle will act according to a judgment the programmer set in advance. On what basis does the programmer make that judgment, and could the resulting harm then be considered a premeditated killing? To push the assumption to an extreme, imagine a vehicle configured to follow the ethical preferences of its passengers. Would that be a better choice than programming the car to minimize total damage?
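To make the dilemma concrete, here is a minimal, purely hypothetical Python sketch of what such a pre-set judgment might look like: the vehicle scores its three options against a cost function the programmer chose in advance. The option names, harm values, and weights are all invented for illustration and do not come from any real autonomous-driving system.

```python
# Hypothetical sketch: a pre-programmed choice among the three options
# from the thought experiment. All names and numbers are illustrative.

# Estimated harm the programmer assigns to each outcome (invented values).
options = {
    "go_straight_hit_object": {"harm_to_occupants": 8, "harm_to_others": 0},
    "swerve_right_hit_motorcycle": {"harm_to_occupants": 2, "harm_to_others": 9},
    "swerve_left_hit_suv": {"harm_to_occupants": 3, "harm_to_others": 4},
}

def choose(options, occupant_weight=1.0, other_weight=1.0):
    """Pick the option with the lowest weighted total harm.

    The weights encode the programmer's value judgment: a higher
    other_weight spares bystanders at the occupants' expense, and vice versa.
    """
    def cost(outcome):
        return (occupant_weight * outcome["harm_to_occupants"]
                + other_weight * outcome["harm_to_others"])
    return min(options, key=lambda name: cost(options[name]))

# The "decision" is fixed long before any accident happens:
print(choose(options))                          # minimize total harm
print(choose(options, occupant_weight=5.0))     # strongly protect the occupants
```

The arithmetic itself is beside the point; what matters is that the weights are chosen ahead of time, so whatever the car does in the moment was, in effect, decided by someone at a desk.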
To probe ethical judgments like these, MIT runs a public opinion survey, in the form of a game, called Moral Machine.
Moral Machine is a platform for collecting social perceptions of ethical decisions made by artificial intelligence, such as those made by self-driving cars. It presents a situation in which an unmanned vehicle must choose between sacrificing a passenger or a pedestrian, and asks respondents to pick the outcome they would accept. In each scenario, attributes such as the number of occupants and pedestrians, their social status, physical condition, and age are randomly assigned, and the respondent judges the situation as an outside observer. If the results of this survey were used to program an autonomous vehicle, would it be right to make it save as many lives as possible in an unavoidable accident, or more desirable to have it prioritize the lives of its occupants?
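As a rough illustration of the kind of scenario Moral Machine presents, and of the question just posed, here is a small hypothetical Python sketch: it randomly assembles a dilemma, much as the platform randomly assigns attributes, and compares two possible policies, saving as many lives as possible versus always protecting the occupants. Nothing here reflects the actual Moral Machine methodology; every attribute, count, and rule is invented.

```python
import random

# Hypothetical sketch of a Moral Machine-style dilemma: attributes are
# randomly assigned, and two candidate programming policies are compared.

def random_group(role):
    """Build a randomly composed group of people with illustrative attributes."""
    count = random.randint(1, 4)
    return {
        "role": role,  # "occupants" or "pedestrians"
        "count": count,
        "ages": [random.choice(["child", "adult", "elderly"]) for _ in range(count)],
    }

def minimize_casualties(occupants, pedestrians):
    """Policy A: save as many lives as possible by sacrificing the smaller group
    (in this sketch, ties go against the occupants)."""
    return "pedestrians" if pedestrians["count"] < occupants["count"] else "occupants"

def protect_occupants(occupants, pedestrians):
    """Policy B: the occupants are never the ones sacrificed."""
    return "pedestrians"

occupants = random_group("occupants")
pedestrians = random_group("pedestrians")

print("occupants:", occupants)
print("pedestrians:", pedestrians)
print("policy A (fewest deaths) sacrifices:", minimize_casualties(occupants, pedestrians))
print("policy B (protect occupants) sacrifices:", protect_occupants(occupants, pedestrians))
```

The sketch is trivial, but it makes the point concrete: whichever of these functions a manufacturer ships, a value judgment has been fixed in code long before any accident occurs.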
And when such programmed value judgments lead to an actual accident, can anyone claim to be free of responsibility for it?
I believe the legal issues surrounding autonomous vehicles can be resolved, at least in part, through agreements between individuals, or between individuals and society. Ethical issues are different. Rapid technological development keeps raising them: science and technology enrich our lives, yet they regularly create situations that collide with human ethics. Once self-driving cars are commercialized, the risk of life-threatening accidents caused by drowsy, drunk, reckless, or road-rage driving will fall, and smoother traffic flow will shorten travel times and free up more leisure time. Even so, autonomous vehicles cannot escape ethical questions. These are questions that will confront not only autonomous vehicles but also artificial intelligence, robots, and ultimately humanity as a whole. Whether human lives can be weighed against one another, and whether animal and human lives carry different weight, are questions that must be addressed before autonomous vehicles reach the road.