Are self-driving cars safer than human drivers?
I don’t think so. In fact, there are two major limitations of self-driving cars that make them extremely unsafe compared to human drivers.
AI models cannot generalise beyond their training data.
This means they cannot interpret and infer the variations that exist in the real world. A model must be fed data for every scenario it will need to drive in; otherwise it is doomed to fail.
Self-driving cars must be fed data on every variation that can possibly occur on the road. That’s just not possible!
Machine Learning Models don’t learn. They memorise data!
This is the biggest issue with self-driving cars. They are no doubt a splendid innovation in the automotive industry; however, AI is simply not there yet to fulfil the promises being made for them.
To act intelligently in a given situation, a self-driving car must have seen data covering that situation and all its variations. This is just not possible.
Building self-driving cars that drive just as well as humans do is a big challenge and a major limitation of current AI technology.
Here’s one great example: Tesla’s AI could not understand what to make of traffic lights being carried in a vehicle in front of it.
A Tesla driving behind a truck carrying traffic lights gets confused and thinks it’s on an infinite roadway of traffic lights. Another example of how machine learning is just pattern recognition and not intelligence in any meaningful sense of the word. https://t.co/bM8PwsOTgO— Dare Obasanjo (@Carnage4Life) June 4, 2021
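The failure mode in that tweet can be sketched in a few lines. This is a hypothetical toy, not any real driving stack: a 1-nearest-neighbour “model” that literally memorises its training examples. A new combination of familiar features (a traffic light that is itself moving) gets mapped onto the closest memorised example, even though the memorised action is the wrong one.

```python
def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class MemorisingModel:
    """'Learning' here is literal storage of (features, action) pairs."""

    def __init__(self):
        self.examples = []

    def train(self, features, action):
        self.examples.append((features, action))

    def predict(self, features):
        # Answer with the action of the nearest memorised example.
        return min(self.examples, key=lambda e: distance(e[0], features))[1]

# Hypothetical features: (sees_traffic_light, light_is_moving)
model = MemorisingModel()
model.train((1, 0), "stop")    # fixed traffic light ahead -> stop
model.train((0, 0), "drive")   # open road, no light -> drive

print(model.predict((1, 0)))   # seen scenario -> "stop" (correct)

# Unseen scenario: a traffic light moving on the back of a truck.
# The nearest memorised example is the fixed light, so the model
# "stops" for a light that is merely cargo.
print(model.predict((1, 1)))   # -> "stop" (wrong)
```

The model never reasons about what a traffic light *is*; it only matches against what it has stored, which is the pattern-recognition-without-intelligence point the tweet makes.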
Not being able to generalise beyond the data itself.
Self-driving cars are given a lot of data on roads so that they can infer similar things that are likely to occur in the real world. However, they are simply not able to do that.
For every single scenario, you have to give the model data for that particular scenario, and it is simply not possible to produce data for every one, because there are too many unique scenarios. Some drivers encounter situations on the road that most people never come across. Others may never have driven in bad conditions in a particular part of the country, or through roadworks on a particular stretch of road, in a particular region, in bad conditions.
The list of scenarios for which self-driving cars may have no data goes on.
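A back-of-the-envelope sketch makes the coverage problem concrete. The condition axes and their values below are illustrative assumptions, not a real taxonomy: even a handful of independent road conditions multiply into hundreds of distinct scenario combinations, and the real world has far more axes, most of them continuous.

```python
from itertools import product

# Hypothetical condition axes a driving dataset would need to cover.
conditions = {
    "weather": ["clear", "rain", "snow", "fog"],
    "light": ["day", "dusk", "night"],
    "road": ["motorway", "city", "country", "roadworks"],
    "terrain": ["flat", "hilly", "mountain"],
    "traffic": ["empty", "light", "heavy"],
}

# Every combination of values is a distinct scenario to collect data for.
combinations = list(product(*conditions.values()))
print(len(combinations))  # 4 * 3 * 4 * 3 * 3 = 432
```

Add one more axis with just five values and the count jumps past 2,000; data collection scales multiplicatively while driving experience is gathered one scenario at a time.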
Once I was driving on country roads in Spain. It was very difficult for me because I had never seen that terrain, so I drove very slowly. The locals, however, were extremely fast on the same difficult roads, because they were used to them. I simply had not come across that scenario before.
If an artificial intelligence had to handle this scenario, we would first have to produce data for it and make the AI learn that scenario.
Suppose it starts raining on very different terrain, or there’s a roadblock. What do you do? These are the sorts of problems that artificial intelligence simply cannot generalise to and solve, even when the problem is a general one and specific data has been provided for it.
In my understanding, this is a major issue and the reason why the deployment of self-driving cars on public roads is likely to be delayed.
Get in touch with us if you need help understanding whether you need to use artificial intelligence and, if so, to what extent.