ChatGPT now interprets photos better than an art critic and an investigator combined

(Image: ChatGPT visual intelligence with the o3 model. Credit: OpenAI)

ChatGPT’s recent image generation capabilities have challenged our previous understanding of AI-generated media. The recently announced GPT-4o model demonstrates noteworthy abilities in interpreting images with high accuracy and recreating them with viral effects, such as the Studio Ghibli-inspired style. It even masters text in AI-generated images, which has previously been difficult for AI. And now, OpenAI is launching two new models capable of dissecting images for cues, gathering information that might escape a human glance.

OpenAI announced two new models earlier this week that take ChatGPT’s thinking abilities up a notch. Its new o3 model, which OpenAI calls its “most powerful reasoning model,” improves on existing interpretation and perception abilities, getting better at “coding, math, science, visual perception, and more,” the organization claims. Meanwhile, o4-mini is a smaller and faster model for “cost-efficient reasoning” in the same areas. The news follows OpenAI’s recent launch of the GPT-4.1 class of models, which brings faster processing and deeper context.

ChatGPT is now “thinking with images”

With improvements to their reasoning abilities, both models can now incorporate images into their reasoning process, making them capable of “thinking with images,” OpenAI proclaims. With this change, both models can integrate images into their chain of thought. Going beyond basic image analysis, the o3 and o4-mini models can investigate images more closely and even manipulate them through actions such as cropping, zooming, flipping, or enhancing details to extract visual cues that could improve ChatGPT’s ability to provide solutions.

Introducing OpenAI o3 and o4-mini—our smartest and most capable models to date.

For the first time, our reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, and image generation. pic.twitter.com/rDaqV0x0wE

— OpenAI (@OpenAI) April 16, 2025

According to the announcement, the models blend visual and textual reasoning, which can be integrated with other ChatGPT features such as web search, data analysis, and code generation, and is expected to become the basis for more advanced AI agents with multimodal analysis.


Among other practical applications, you can include pictures of a multitude of items, from flowcharts and scribbled handwritten notes to images of real-world objects, and expect ChatGPT to understand them deeply enough to provide better output, even without a descriptive text prompt. With this, OpenAI is inching closer to Google’s Gemini, which offers the impressive ability to interpret the real world through live video.
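For developers curious what image-plus-text prompting looks like in practice, here is a minimal sketch of packaging a local image and a question into a single chat message, using the base64 data-URL convention from the OpenAI Python SDK’s chat-completions image-input format. The helper name `build_image_message` and the use of the `o3` model name for this endpoint are illustrative assumptions, not details from the announcement.

```python
# Hypothetical sketch: bundle an image and a text prompt into one
# chat message following the OpenAI chat-completions image-input
# shape. Passing the result to a reasoning model such as "o3" is
# an assumption for illustration.
import base64


def build_image_message(image_bytes: bytes, prompt: str) -> list[dict]:
    """Package raw image bytes and a text prompt into a single user message."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image_url",
                    # Inline the image as a base64 data URL.
                    "image_url": {"url": f"data:image/png;base64,{b64}"},
                },
            ],
        }
    ]


# Usage: the messages list would then be sent with something like
# client.chat.completions.create(model="o3", messages=messages)
messages = build_image_message(b"\x89PNG...", "What does this flowchart describe?")
```

This only constructs the request payload; an actual call requires an API key, network access, and a plan tier with access to the new models.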

Despite the bold claims, OpenAI is limiting access to paid members, presumably to prevent its GPUs from “melting” again as it struggles to keep up with the compute demand for new reasoning features. As of now, the o3, o4-mini, and o4-mini-high models will be exclusively available to ChatGPT Plus, Pro, and Team members, while Enterprise and Education tier users get them in a week’s time. Meanwhile, Free users will get limited access to o4-mini when they select the “Think” button in the prompt bar.

Tushar Mehta
Tushar is a freelance writer at Digital Trends and has been contributing to the Mobile Section for the past three years…