MIT researchers are working to create neural networks that are no longer black boxes


Whether you like it — as companies like Google certainly do — or don’t entirely trust it — logical artificial intelligence proponent Selmer Bringsjord being one outspoken critic — there is no denying that brain-inspired deep learning neural networks have proven capable of making significant advances in a number of AI-related fields over the past decade.

But that is not to say deep learning is perfect, by any stretch of the imagination.


“Deep learning has led to some big advances in computer vision, natural language processing, and other areas,” Tommi Jaakkola, a Massachusetts Institute of Technology professor of electrical engineering and computer science, told Digital Trends. “It’s tremendously flexible in terms of learning input/output mappings, but the flexibility and power comes at a cost. That cost is that it’s very difficult to work out why it is making a certain prediction in a particular context.”

This black-box lack of transparency would be one thing if deep learning systems were still confined to lab experiments, but they are not. Today, AI systems are increasingly rolling out into the real world — and that means they need to be open to scrutiny by humans.

“This becomes a real issue in any situation where there are consequences to making a prediction, or actions that are taken on the basis of that prediction,” Jaakkola said.

Fortunately, that is where a new project from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) comes into play. What researchers there have come up with is preliminary work showing that it is possible to train neural networks in such a way that they do not just offer predictions and classifications, but also rationalize their decision.

For the study, the researchers examined a neural net trained on textual data. The network was divided into two modules: the first extracted segments of text and scored them on their length and coherence, while the second performed the job of prediction or classification.

One data set the researchers tested their system on was a collection of reviews from a beer-rating website. The data included both a text review and a corresponding star rating out of five. With these inputs and outputs, the researchers were able to fine-tune a system which “thought” along the same lines as human reviewers — thereby making its decisions more understandable.
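The two-module idea can be illustrated with a deliberately simple toy sketch. Everything below — the segment scoring, the cue-word lists, the rating rule — is invented for illustration; the actual MIT system uses neural networks trained end to end, not hand-written rules. The point is only the pipeline shape: the predictor sees nothing but the segments the extractor chose, so those segments double as the rationale for the prediction.

```python
def extract_segments(review, max_segments=2):
    """Module 1 (toy): split a review into sentences and score each by a
    crude 'coherence' proxy (share of known beer-descriptor words) plus a
    mild preference for longer segments. Returns the top-scoring segments."""
    DESCRIPTORS = {"great", "hazy", "golden", "thin", "rich", "flat"}
    segments = [s.strip() for s in review.split(".") if s.strip()]

    def score(seg):
        words = seg.lower().split()
        coherence = sum(w in DESCRIPTORS for w in words) / len(words)
        return coherence + min(len(words), 10) / 100

    return sorted(segments, key=score, reverse=True)[:max_segments]

def predict_rating(segments):
    """Module 2 (toy): classify using ONLY the extracted rationale --
    count positive vs. negative cue words and map to a 1-5 star score."""
    POSITIVE = {"great", "rich", "golden"}
    NEGATIVE = {"thin", "flat"}
    words = " ".join(segments).lower().split()
    balance = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return max(1, min(5, 3 + balance))

review = "Poured it at a party. Great golden color and rich taste. My dog barked."
rationale = extract_segments(review)   # the human-readable justification
stars = predict_rating(rationale)      # prediction made from the rationale alone
```

Because the prediction is computed from the extracted segments rather than the full review, a human can inspect exactly which text drove the star score — the interpretability property the CSAIL work is after.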

Ultimately, the system’s agreement with human annotations was 96 percent and 95 percent, respectively, when predicting ratings of beer appearance and aroma, and 80 percent when predicting palate.

The research is still in its early stages, but it is an intriguing advance in developing AI systems which make sense to human creators and can justify decisions accordingly.

“The question of justifying predictions will be a prevalent issue across complex AI systems,” Jaakkola said. “They need to be able to communicate with people. Whether the solution is this particular architecture or not remains to be seen. Right now, we’re in the process of revising this work and making it more sophisticated. But it absolutely opens up an area of research that is very important.”

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…