Google's self-driving cars still in need of the human touch

Steven Loeb · January 13, 2016

Drivers had to take over more than 300 times, including 13 times when a driver prevented an accident

The idea of self-driving cars is fascinating. Yet, after more than three years on the road, it's still nice to know that humans are needed.

While Google, which is going all in on the self-driving car, would like you to think that its vehicles are pretty much infallible, and that any problems that arise are, naturally, the fault of other drivers, we all know that isn't the case. Driver or no driver, these cars can still get into an accident.

A report put out by Google itself earlier this week shows just how true that is.

During tests of self-driving cars that took place between November 2014 and September 2015, there were a total of 341 "disengagements," or instances where something went wrong and test drivers were forced to take over, the report shows.

There were three months in which at least 40 disengagements were reported, peaking with 53 of them in January 2015.

Of course, not all disengagements are created equal. The vast majority, 272 of them, were due to a failure of the technology, in which the car asked the driver to take over.

"In events where the software has detected a technology 'failure' -- i.e. an issue with the autonomous technology that may affect the safe operation of the vehicle -- the SDC will immediately hand over control to the driver; we categorize these as 'immediate manual control' disengagements. In these cases, the test driver is given a distinct audio and visual signal, indicating that immediate takeover is required," Google wrote.

Those kinds of incidents can range from "anomalies in sensor readings," which wouldn't be a big deal, to problems with steering or braking, which obviously would be.

The other 69 are incidents in which "safe operation of the vehicle requires control by the driver," where the driver decided it was necessary to take control "for a variety of reasons relating to the comfort of the ride, the safety of the vehicle, or the erratic or unpredictable behavior of other road users."

There were a total of 13 times when Google determined that a driver prevented the car from hitting something. In the vast majority of those, 10 out of 13, the driverless car would have been the one causing the accident; in only three cases would it have been the fault of the other driver.

OK, so 10 times in a year the car would have caused an accident. That doesn't seem like much, but it would be plenty to anyone hit by one of those cars.

Google is positive about the figures, saying that they indicate things are getting better.

"What we find encouraging is that 8 of these incidents took place in ~53,000 miles in ~3 months of 2014, but only 5 of them took place in ~370,000 miles in 11 months of 2015. This trend looks good, and we expect the rate of these incidents to keep declining,"" Chris Urmson, Software Lead for Google Self-Driving Cars, wrote in a blog post

"Although we’re not quite ready to declare that we’re safer than average human drivers on public roads, we’re happy to be making steady progress toward the day we can start inviting members of the public to use our cars."

What about those other 272 times when the car failed, though? Couldn't some of them have potentially caused accidents as well if there hadn't been a driver in the car to take over, especially if the car couldn't brake?

These types of potential incidents were behind the regulations proposed by the California Department of Motor Vehicles in December, which would require a licensed driver to be behind the wheel at all times, in case of incidents just like the ones outlined in Google's report.

Google, of course, pushed back on that limitation, saying in a statement, "We’re gravely disappointed that California is already writing a ceiling on the potential for fully self-driving cars to help all of us who live here."

Driverless car safety

The potential safety and regulation of driverless cars is a contentious issue, but one that those in the space have been trying to confront.

Volvo, for example, said that the company "will accept full liability whenever one of its cars is in autonomous mode."

Håkan Samuelsson, president and chief executive of Volvo Cars, also urged the United States to establish federal guidelines for the technology, rather than regulating it on a state-by-state basis.

Google, too, has tried to take the lead by filing a patent for how self-driving cars will be able to communicate with pedestrians: essentially, using sensors to figure out what to do, then using signaling devices to declare the car's intentions.

For example, the vehicle might use a physical signaling device, an electronic sign or lights, or a speaker to tell pedestrians what it is going to do.

There's also the issue of vehicles potentially being hacked, something that made headlines in 2015. 

