Your favorite AI chatbot might not be telling the truth

AI chatbots.
Pixabay

AI search tools are becoming more popular, with one in four Americans reporting that they use AI instead of traditional search engines. However, these AI chatbots do not always provide accurate information.

A recent study by the Tow Center for Digital Journalism, reported by Columbia Journalism Review, indicates that chatbots struggle to retrieve and cite news content accurately. Even more concerning is their tendency to invent information when they do not have the correct answer.


The chatbots tested in the study included many of the best-known tools: ChatGPT, Perplexity, Perplexity Pro, DeepSeek, Microsoft's Copilot, Grok-2, Grok-3, and Google Gemini.

In the tests, the chatbots were given direct excerpts from online articles and asked to identify each article's headline, original publisher, publication date, and URL. Each chatbot received 200 queries (10 articles from each of 20 publishers), for 1,600 queries in total across the eight chatbots.

Traditional search engines, given the same excerpts, reliably surfaced the correct information. The AI chatbots did not perform as well.

The findings indicated that chatbots often fail to decline questions they cannot answer accurately, frequently providing incorrect or speculative responses instead. Premium chatbots tended to deliver confidently incorrect answers more often than their free counterparts. Additionally, many chatbots appeared to disregard the Robots Exclusion Protocol (REP), the convention websites use to tell web robots, such as search engine crawlers, which pages they may access.
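For context, REP preferences are published in a site's robots.txt file. A minimal sketch of how this works, using Python's standard-library parser and hypothetical rules a publisher might serve to block an AI crawler (the rules and URLs here are illustrative, not from the study):

```python
import urllib.robotparser

# Hypothetical robots.txt: block OpenAI's GPTBot crawler, allow everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler checks these rules before fetching a page.
print(rp.can_fetch("GPTBot", "https://example.com/article"))     # False
print(rp.can_fetch("Googlebot", "https://example.com/article"))  # True
```

The protocol is purely advisory: nothing technically stops a crawler from fetching a disallowed page, which is why the study could observe chatbots apparently ignoring these preferences.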

The study also found that generative search tools were prone to fabricating links and citing syndicated or copied versions of articles. Moreover, content licensing agreements with news sources did not guarantee accurate citations in chatbot responses.

What can you do?

What stands out most about the results is not just that AI chatbots often provide incorrect information but that they do so with alarming confidence. Rather than admitting they don't know the answer, they answer anyway, at most softening the response with phrases like "it appears," "it's possible," or "might."

For instance, ChatGPT incorrectly identified 134 articles yet only signaled uncertainty 15 times out of 200 responses and never refrained from providing an answer.
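A quick back-of-the-envelope check of those figures puts the gap in perspective:

```python
# Figures reported for ChatGPT in the Tow Center tests.
total_queries = 200
incorrect = 134   # articles identified incorrectly
hedged = 15       # responses that signaled any uncertainty

error_rate = incorrect / total_queries
hedge_rate = hedged / total_queries

print(f"Incorrect answers: {error_rate:.0%}")        # 67%
print(f"Uncertainty signaled: {hedge_rate:.1%}")     # 7.5%
```

In other words, roughly two-thirds of ChatGPT's answers were wrong, yet it flagged uncertainty in well under a tenth of its responses.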

Based on the survey results, it’s probably wise not to rely exclusively on AI chatbots for answers. Instead, a combination of traditional search methods and AI tools is recommended. At the very least, using multiple AI chatbots to find an answer may be beneficial. Otherwise, you risk obtaining incorrect information.

Looking ahead, I wouldn’t be surprised to see a consolidation of AI chatbots as the better ones stand out from the poor-quality ones. Eventually, their results will be as accurate as those from traditional search engines. When that will happen is anyone’s guess.

Bryan M. Wolfe
Former Mobile and A/V Freelancer
Bryan M. Wolfe has over a decade of experience as a technology writer. He covers mobile technology.