Unfortunately, accidents are a fact of life, which is why we provide only the best accident reconstruction and investigation software. We're only human, after all; we will make mistakes, and sometimes those mistakes need to be investigated. But what about robot drivers? We assume robots are impervious to such mistakes — designed without flaw, or so science fiction has told us. That's also what we've been told by Google, especially when it comes to their self-driving cars. C'mon, it's Google — do they make mistakes? Yes, they do, and a recent accident involving one of their self-driving cars may change how we view auto accidents. Of course, the head of Google's self-driving car program, Chris Urmson, blamed the other car for the accident. What exactly happened?
A La Prensa article states, "According to Urmson, a Lexus RX450h, equipped with Google sensors, while moving in Google's home city, Mountain View, California, stopped at a traffic intersection with green light, so as not to block it, as traffic on the far side was not moving. However, the vehicle behind it did not stop and rear ended the Google car at a speed of 17 mph."
There were injuries to all involved. While there have certainly been worse accidents in the history of roads, this one in particular raises many questions. How can we punish or regulate self-driving cars? Is the company producing them to blame, or the people actually in the car? It turns out this isn't the first accident involving a Google self-driving car: there have been 14 accidents in the roughly six years the company has been conducting test runs.
What do you think? Are these accidents something to be greatly concerned about?