Driver error is cited as the critical factor in an estimated 94 percent of traffic collisions. One of the ideas behind autonomous vehicles (AVs) was that, once the technology was perfected, that number would drop dramatically. Unfortunately, there have been 30 accidents involving AVs in the state of California since 2014.
It’s clear from this statistic alone that AVs are still a long way from delivering on the promise of zero traffic collisions, and the injuries and fatalities that accompany them will continue to plague us until that promise is fulfilled. For now, the only recourse for anyone injured in such an accident is to hire a lawyer. This article outlines the broader legal liabilities surrounding AVs.
What Is a Human to Do?
Considering the issues mentioned above, it becomes clear that as more AVs and the artificial intelligence (AI) supporting them operate in the world, the need to address questions of accountability and responsibility grows. If the facts discussed above are a harbinger of things to come, there is plenty of cause for concern. The bottom-line question is this: who is ultimately responsible when things go wrong? The manufacturer? The human overseer? The human serving as a backup driver?
Humans have a long history of interacting with machines, and they consistently overestimate what those machines are capable of. Despite this, the role of the human backup driver is clear: to ensure the safe operation of the vehicle. And, although autonomous, the vehicle is certainly not perfect 100 percent of the time.
The Role of Humans in the Loop
AVs are certainly not the only technology involved in this phenomenon. There are documented cases of everything from media platforms to online delivery systems relying on human oversight to “smooth over the rough edges” in their performance. That human involvement is what keeps such systems operating “intelligently,” ensuring their accountability and safety.
Human oversight is one of the most important aspects of the future of AI systems for AVs. It isn’t a complete solution, but it is a start. Designing methods to certify human-AI collaboration will be a major challenge in the final stage of AV innovation.
The Law Gets Involved
In 2018, a self-driving test vehicle operated by Uber struck and killed a pedestrian in Tempe, Arizona. The debate over who was at fault, the AV or the human who was supposed to be monitoring it, pushed the legal implications of AVs to the forefront.
Notably, this accident occurred only weeks after Arizona’s governor issued an updated executive order authorizing AVs on the state’s streets. Prosecutors ultimately declined to charge the company, leaving the backup driver, who was supposed to be supervising the driving but was not watching the road, to answer for the accident.
So Goes the Law
The law has always followed humans in their pursuit of progress, which leaves a lag between the situations new technology creates and the law’s response to them. This creates a gray area in which liability for AV accidents remains unsettled.