Should we expect a ban on human drivers?
According to this article in the Guardian, autonomous cars might be set to take over our roads. The article quotes Elon Musk, chief executive of Tesla, who recently estimated that the switch from human-operated to autonomous cars might take about 20 years. Musk went on to predict that the switch could lead to a ban on human drivers. “You can’t have a person driving a two-ton death machine”, he said, arguing that research already shows that autonomous cars will be capable of outperforming human drivers.
Interestingly, Elon Musk is one of several high-profile figures who have made headlines recently by warning about the dangers of AI, referring to it as a potential “existential threat” to humanity. When it comes to autonomous cars and their possible implications, however, Musk adopts a much less dramatic view, adding that his company, Tesla, intends to be at the forefront of developing the technology further.
Autonomous driving, Musk argues, is merely a “narrow form” of AI. Moreover, Musk does not appear to embrace the idea that autonomous cars will be moral agents. Instead, he compares them to elevators. But is this really an appropriate analogy for an AI operating a “two-ton death machine”? To put the ethical question in a more dramatic light, one might go on to ask how an AI-optimistic take on autonomous cars could influence our approach to autonomous killer drones. Here, similar questions arise, but the (ethical) stakes appear to be even higher, with some now taking a clear stance against the further development of this kind of technology.