As more and more organizations push to develop self-driving car technology, many wonder what will come of it. Can we expect more reported accidents as more of these cars hit the road?
Adding to the confusion, drivers and non-drivers alike often don't understand where we are with this technology. Cars are not fully autonomous yet.
That said, the self-driving technologies already on the road are less accident prone than human drivers. In fact, that is a huge understatement.
In nearly every case, the accidents are the fault of humans: either the drivers behind the wheel of the other car or the owners carelessly supervising their own semi-autonomous cars.
Although there have been more incidents than the ones that follow, here are three examples from the last two years. You may draw your own conclusions.
Google
In 2016, a self-driving car from Google caused a crash for the first time. The short version of the story is that a bad assumption led to a minor fender-bender.
It wasn't the first time a Google autonomous vehicle (AV) had traded paint, but it was the first time the Google car was at fault.
What happened: a Lexus-branded Google AV pulled in front of a transit bus, assuming the bus would let it into traffic. Silly robot.
The bus was going about 15 mph when it hit the Google AV, which was moving at 2 mph. The collision caused minor body damage but no injuries.
The scenario was a little more complicated, as there were sandbags in the road from some roadwork, but the crash was still the fault of the AV.
Tesla
A recently released report from the National Highway Traffic Safety Administration (NHTSA) detailed the first deadly crash involving an AV, in this case a Tesla operating on its Autopilot system.
The man killed in the crash was Joshua Brown, a 40-year-old Ohio resident who was traveling through Florida at the time.
The problem with this case is that investigators were unable to determine fault in the scenario. It seems Brown had enough time to react and take control of the car.
Tesla maintains that it has never told drivers they can take their hands off the wheel; the technology isn't there yet. Owners of these vehicles do it all the time anyway.
Tesla further claims that software upgrades rolled out since that accident would have prevented it.
In fairness, Brown received seven visual and six audible warnings before the accident. He simply ignored them.
Uber
Since the beginning, Uber has minced no words about its intentions: it has always planned to make its service autonomous once the technology became available.
Last year, the company started limited testing of AVs in Arizona, using the Volvo XC90 SUV as its test vehicle.
Each trip in those cars includes two engineers in the front seats for times when a human needs to take over. Still, a high-speed accident took place in Tempe this past March.
The Uber car was in self-driving mode at the time, but it had the right of way. The error was made by the human driving the other car.
The accident caused no injuries, but it paused Uber's testing for a while.
The question on the table these days is: who is liable in these crashes?
The answer varies with each situation. Expect more and more of these cases to come to light in the future. We are officially in uncharted territory.