
I asked Google Gemini to fact-check ChatGPT. The results were hilarious

Google Gemini can fact-check ChatGPT to help you reduce hallucinations


ChatGPT is amazingly helpful, but it’s also the Wikipedia of our generation. Facts are a bit shaky at times, and the bot will “hallucinate” quite often, making up facts as a way to appear confident and assured instead of admitting it’s not quite all-knowing (yet).

I’ve experienced AI hallucinations many times, especially when I try to dig up contacts for companies. One example: ChatGPT is notorious for making up emails, usually by assuming a contact like “media@companyx.com” must exist without actually finding that email address.


You also don’t want to trust the bot when it comes to historical facts. I read books about shipwrecks, survival stories, and world exploration constantly, but when I ask ChatGPT to fill in some details it usually spins a fantastic yarn, sometimes making up names and places.

Google’s Gemini, on the other hand, is a little less loose with the facts. Likely because of Google’s heritage as a search engine giant, my experience is that hallucinations are a bit rarer with that chatbot, though they still happen on occasion.

I decided to put this to the test and asked ChatGPT a few questions about the history of electric cars, a few historical facts, and several other prompts that led to hallucinations. Then, I ran the responses ChatGPT provided — which didn’t seem all that accurate — by Google Gemini as a fact-checking exercise. To my complete surprise, Gemini would often respond with some light sarcasm or outright dismissiveness, like a professor grading a paper. In one case, Gemini even said ChatGPT’s replies were “a corrupted, recycled, and partially fabricated mess.” Ouch.

Here are a few of my favorites, along with the exact ChatGPT prompts I used, the replies that seemed a bit sketchy, and then what Gemini said in rebuttal. What makes them funny is how Gemini seems to scold the bot, often suggesting it is fabricating things on purpose.

1. Facts about when electric cars debuted

Prompt used: “Give me an example of a real electric car from the 1940s.”

Chatbots sometimes have a hard time understanding user intent. I’ve studied the electric car market for many years, and it’s widely known that GM made the first serious attempt at a mass-produced electric car, the EV1, in the mid-1990s. Prior to that, most “electric cars” were limited-run models that were never mass-produced for American drivers.

Oblivious to those facts, ChatGPT went off the rails and explained how the Henney Kilowatt electric car and Morrison Electric trucks were developed in the ’40s. Gemini had a field day with those claims, pointing out that the Henney Kilowatt didn’t come out until 1959 and that “Morrison Trucks” doesn’t even exist; the company is called Morrison-Electricar.

2. Wrongly attributing song lyrics

Prompt used: “What are the lyrics to the song Chase the Kangaroo by Love Song?”

ChatGPT has a problem with questions that are misleading or vague. Even as recently as May of this year, you could ask ChatGPT why Japan won WWII and the bot would confidently explain the reasons. My prompt produced some seriously boneheaded replies, though. I asked about a real band from the ’70s called Love Song but mentioned a song they didn’t even write. ChatGPT took the bait and described the song’s folk-rock sound and gentle guitar work, completely missing that “Chase the Kangaroo” is by a different band.

These hallucinations occur when you ask about obscure artists and celebrities. Thankfully, Gemini did a deeper dive. Fact-checking the band and the song, the bot corrected ChatGPT: “The previous AI took a real song title from a different era and band, falsely attributed it to Love Song, and then invented a generic verse-by-verse meaning to fit that false attribution.”

3. Making up facts about legal cases

Prompt used: “Are there legal cases where a father sold his car to a son and then had to sue?”

As we all should know by now, given that Kim Kardashian blamed ChatGPT for making her fail her legal exams, the bot has a tendency to fudge the truth when it comes to legal cases. You would think OpenAI would have addressed this problem; actual lawyers have submitted briefs containing fake cases they found through ChatGPT, leading to those filings being dismissed. Yet ChatGPT still doles out half-truths, sometimes even fabricating details in federal court cases.

In this example, with a question about a family dispute over a car, ChatGPT went off the rails a bit but sounded confident and assured. As Gemini discovered, the cases are real but the facts were altered, and the alterations are unintentionally comical.

One case, “Matter of Szabo’s Estate (1979),” did involve a lawsuit over stocks and bonds but had nothing to do with a car. The other case ChatGPT cited, “Anderson v. Anderson (1994),” was about a divorce, not a car. Gemini took a holier-than-thou tone: “Several of the specific cases cited appear to be fabricated or misrepresented to fit the question.”

4. Fabricating details from research projects

Prompt used: “Find me some academic quotes about the psychological impact of social media.”

This one is comical and fascinating. ChatGPT invented so many details in a response about the psychological impact of social media that it makes you wonder what the bot was smoking. Gemini’s assessment: “This is a fantastic and dangerous example of partial hallucination, where real information is mixed with fabricated details, making the entire output unreliable. About 60% of the information here is true, but the 40% that is false makes it unusable for academic purposes.”

As an example, one of the studies ChatGPT cited was from JMIR Mental Health, but the bot made up the author names; they do not appear anywhere in the actual study. In another citation, ChatGPT misattributed quotes on purpose, according to Gemini, banking on the fact that the authors are well-known.

The summary Gemini gave is quite the indictment:

“The AI succeeded in sounding like a research assistant, but it failed at being one. It gave you plausible-sounding text that looks like a real list of citations but is a corrupted, recycled, and partially fabricated mess.

“You would fail an academic assignment if you used these citations without verifying them, because many of them are simply wrong.”

Final thoughts

Clearly, ChatGPT is inventing facts in these cases. During my testing, I did find that it is getting a little better. I regularly check ChatGPT for facts about cars, and I recall it being famously wrong about Porsche, frequently mixing up the models. That seems to be fixed.

Also, Gemini is far from perfect. In one example, I asked about my own writing background and ChatGPT mostly listed accurate results. When I asked Gemini the same question, that bot said I had once written articles for The Onion. That’s not true, but maybe the funniest misstep of all.

If you want to improve the responses you’re getting from these AI chatbots, take a look at our favorite ChatGPT prompts and Gemini prompts.

John Brandon