Finding the ‘blind spots’ in autonomous vehicle artificial intelligence


Autonomous vehicles are becoming increasingly sophisticated, but concerns about the safety of such systems still abound. Creating an autonomous system that drives safely in laboratory conditions is one thing; being confident in that system’s ability to navigate the real world is quite another.

Researchers from Massachusetts Institute of Technology (MIT) have been working on just this problem, looking at the differences between how autonomous systems learn in training and the issues that arise in the real world. They have created a model of situations in which what an autonomous system has learned does not match actual events that occur on the road.


An example the researchers give is understanding the difference between a large white car and an ambulance. If an autonomous car has not been trained on, or does not have the sensors to differentiate between, these two types of vehicles, then the car may not know that it should slow down and pull over when an ambulance approaches. The researchers describe these kinds of scenarios as “blind spots” in training.

A model by MIT and Microsoft researchers identifies instances where autonomous cars have “learned” from training examples that don’t match what’s actually happening on the road, which can be used to identify which learned actions could cause real-world errors. MIT News

To identify these blind spots, the researchers used human input to oversee an artificial intelligence (A.I.) as it went through simulation training, giving feedback on any mistakes the system made. The human feedback can then be compared with the A.I.’s training data to identify situations where the A.I. needs more or better information to make safe and correct choices.
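The core of the approach described above can be sketched in a few lines: collect states where a human corrected the system, and flag the ones where disagreement is frequent. This is a minimal illustrative sketch, not the researchers’ actual implementation; all names here (`detect_blind_spots`, the `records` format) are hypothetical.

```python
from collections import defaultdict

def detect_blind_spots(records, threshold=0.5):
    """records: list of (state, policy_action, human_action) tuples.

    Returns states where human feedback contradicted the trained
    policy in more than `threshold` of observations -- candidate
    'blind spots' that need more or better training data."""
    disagreements = defaultdict(list)
    for state, policy_action, human_action in records:
        disagreements[state].append(policy_action != human_action)
    return {
        state: sum(flags) / len(flags)
        for state, flags in disagreements.items()
        if sum(flags) / len(flags) > threshold
    }

# Example mirroring the ambulance scenario: the policy treats an
# approaching ambulance like any large white car, but the human
# consistently pulls over.
records = [
    ("ambulance_behind", "keep_lane", "pull_over"),
    ("ambulance_behind", "keep_lane", "pull_over"),
    ("clear_road", "keep_lane", "keep_lane"),
]
print(detect_blind_spots(records))  # {'ambulance_behind': 1.0}
```

States that pass the threshold are exactly those where the system’s learned behavior conflicts with what a human deems safe, which is what the comparison step is meant to surface.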

“The model helps autonomous systems better know what they don’t know,” author of the paper Ramya Ramakrishnan, a graduate student in the Computer Science and Artificial Intelligence Laboratory, said in a statement. “Many times, when these systems are deployed, their trained simulations don’t match the real-world setting [and] they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

This can also work in real time, with a person in the driver’s seat of an autonomous vehicle. As long as the A.I. is maneuvering the car correctly, the person does nothing; if they spot a mistake, they can take the wheel, signaling to the system that it missed something. This teaches the A.I. in which situations there are conflicts between how it expects to behave and what a human driver deems safe and responsible driving.
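In this real-time variant, the human’s takeover itself is the label: doing nothing implicitly marks a state as acceptable, while grabbing the wheel marks it as a conflict. A hypothetical sketch of that labeling step, with invented names throughout:

```python
def label_drive(frames):
    """frames: list of (state, human_took_over) pairs from one drive.

    Returns (safe_states, conflict_states): states the human let the
    A.I. handle versus states where the human intervened, which can
    feed back into training as blind-spot examples."""
    safe, conflicts = [], []
    for state, took_over in frames:
        (conflicts if took_over else safe).append(state)
    return safe, conflicts

frames = [
    ("highway_cruise", False),
    ("siren_approaching", True),   # human takes the wheel
    ("clear_road", False),
]
safe, conflicts = label_drive(frames)
print(conflicts)  # ['siren_approaching']
```

The appeal of this design is that it costs the human supervisor nothing in the common case: only the interventions generate new training signal.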

Currently, the system has been tested only in virtual video game environments, so the next step is to take it on the road and test it in real vehicles.

Georgina Torbet
Georgina has been the space writer at Digital Trends for six years, covering human space exploration, planetary…