
ChatGPT creator seeking to eliminate chatbot ‘hallucinations’


Despite all of the excitement around ChatGPT and similar AI-powered chatbots, the text-based tools still have some serious issues that need to be resolved.

Among them is their tendency to make things up and present them as fact when they don’t know the answer to a query, a phenomenon that’s come to be known as “hallucinating.” As you can imagine, presenting falsehoods as fact to someone using one of the new wave of powerful chatbots could have serious consequences.


Such trouble was highlighted in a recent incident in which an experienced New York City lawyer cited cases — suggested by ChatGPT — that turned out not to exist. The lawyer may face sanctions as a result.


Another incident received widespread attention in April when ChatGPT apparently rewrote history by saying that an Australian mayor had been jailed for bribery while working for a bank when in fact he’d been a whistleblower in the case.

To make its chatbot technology more reliable, OpenAI’s engineers have revealed that they’re currently focused on improving the software to reduce, and hopefully eliminate, these problematic occurrences.

In a research paper released on Wednesday and picked up by CNBC, OpenAI said that chatbots “exhibit a tendency to invent facts in moments of uncertainty,” adding: “These hallucinations are particularly problematic in domains that require multi-step reasoning since a single logical error is enough to derail a much larger solution.”

To tackle the chatbot’s missteps, OpenAI’s engineers are working on ways to reward its AI models for each correct step taken on the way to an answer, instead of rewarding them only for the final conclusion. The approach could lead to better outcomes because it incorporates more of a human-like chain-of-thought procedure, according to the engineers.
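The difference between the two reward schemes can be illustrated with a toy sketch. This is not OpenAI’s actual implementation — the function names and the step-scoring scheme here are invented for illustration — but it shows why rewarding each verified step penalizes the kind of single logical error that, as the paper notes, can derail a multi-step solution:

```python
# Illustrative sketch only (not OpenAI's code): contrasting
# outcome-based reward (score only the final answer) with
# step-based ("process") reward (score every intermediate step).

def outcome_reward(step_correct_flags, final_correct):
    """Reward depends solely on whether the final answer is right;
    intermediate errors are invisible to the model."""
    return 1.0 if final_correct else 0.0

def process_reward(step_correct_flags):
    """Reward is the fraction of intermediate steps that check out,
    so a single faulty step lowers the score."""
    return sum(1.0 for ok in step_correct_flags if ok) / len(step_correct_flags)

# A four-step solution containing one logical error but a
# (coincidentally) correct final answer:
flags = [True, False, True, True]
print(outcome_reward(flags, final_correct=True))  # 1.0 -- the error goes unpenalized
print(process_reward(flags))                      # 0.75 -- the error costs reward
```

Under outcome supervision the flawed reasoning chain scores perfectly; under step-wise supervision it doesn’t, which is the incentive shift the engineers are betting on.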

But some experts expressed doubt about the work, telling CNBC it’s of little use until it’s incorporated into ChatGPT, which in the meantime will carry on hallucinating. OpenAI hasn’t said if and when it might incorporate its work into its generative AI tools.

While it’s good to know that OpenAI is working on resolving the issue, it could be a while before we see any improvements. In the meantime, as OpenAI itself says, ChatGPT may occasionally generate incorrect information, so be sure to verify its responses before relying on them for anything important.

Trevor Mogg
Contributing Editor