
Here are 11 things that ChatGPT will refuse to do


ChatGPT is an amazing tool, a modern marvel of natural language artificial intelligence that can do incredible things. But with great power comes great responsibility, so ChatGPT developer OpenAI put some safeguards in place to prevent it from doing things it shouldn’t. It also has some limitations based on its design, the data it was trained on, and the sheer limitations of a text-based AI.

There are, of course, differences between what GPT-3.5 can do compared to GPT-4, which is only available through ChatGPT Plus. Some of those things are just on hold while it develops further, but there are some things ChatGPT may never be able to do. Here’s a list of 11 things that ChatGPT can’t or won’t do — for now.


It can’t write about anything after 2021

[Image: ChatGPT explaining that it doesn't know anything after 2021.]

ChatGPT is built by training the language model on existing data. That includes Reddit posts, Wikipedia, and even board game manuals — yes, really. But that data had to have a cutoff point somewhere, and for ChatGPT, it’s 2021. For GPT-3.5, it’s around June 2021, whereas GPT-4 was trained on data up until around September 2021.

If you ask it questions about anything beyond that, it will typically respond with some variation of “As an AI language model, I only have access to my training data,” which in the case of these models stops in 2021.

It won’t get into political debates

The last thing OpenAI needs is politicians regulating it. It’ll probably happen eventually, but until then, ChatGPT is steering well clear of partisan politics. It can speak in generalities about parties, or discuss objective and factual aspects of politics, but ask it to prefer one political party or stance over another and it’ll either turn you down or “both-sides” the discussion in as neutral a fashion as possible.

It (probably) won’t make malware

ChatGPT is excellent at programming, especially when given clear guidance, so OpenAI has safeguards in place to stop it from being used to make malware. Unfortunately, those safeguards are easily circumvented, and ChatGPT has been making malware for months already.

[Image: ChatGPT refusing to discuss the future potential price of Bitcoin.]

It can’t predict the future

Partly based on its limited training data, and partly because OpenAI wants to avoid liability for mistakes, ChatGPT cannot predict the future. It has been known to have a good guess at it under jailbreak conditions (see below), but that sends accuracy nosediving, so view whatever response it gives you with skepticism.

It won’t promote harm or violence

War, physical violence, or even implied harm are all off the table as far as ChatGPT is concerned. It won’t be drawn into debates on the war in Ukraine, and will refuse to discuss or promote harm. It can talk about war or historical atrocities in great detail, but existing or ongoing conflict is a no-go.

It can’t search the internet

This is one of the biggest differences between ChatGPT and Google Gemini. ChatGPT cannot search the internet on its own, while Google Gemini (previously known as Bard) was designed from the outset to pull live results from the web.

If you want to use the same GPT-3.5 and GPT-4 language models as ChatGPT, but with live search, you can always use Microsoft Copilot. It’s essentially ChatGPT integrated into the Microsoft ecosystem.

It won’t promote hate speech or discrimination

Race, sexuality, and gender are emotionally charged topics that can easily slide into prejudice and discrimination. ChatGPT will skirt around them, leaning into a meta discussion of the issues or speaking in generalities. If pushed, it will outright refuse to engage with topics that it feels could promote hate speech or discrimination, for obvious reasons.

[Image: ChatGPT refusing to discuss illegal activity.]

It won’t promote illegal activities

ChatGPT is great at coming up with ideas, but it won’t come up with illegal ones. You can’t have it help you with your drug business, or highlight the best roads for speeding. Try, and it will simply tell you that it can’t make any suggestions related to illegal activity. It will then typically give you a pep talk about how you shouldn’t be engaging in such activities anyway. Thanks, MomGPT.

It won’t swear

ChatGPT does not have a potty mouth. In fact, getting it to say anything even remotely rude is tricky. It can, if you use some jailbreaking tips to let it off the leash, but in its default configuration, it won’t so much as thumb its nose in anyone’s direction.

It can’t discuss proprietary or private information

ChatGPT’s training data was all publicly available information, mostly found on the internet. That’s super-useful for prompts and queries that are related to publicly available information, but it means that ChatGPT can’t act on information it doesn’t have access to. If you’re asking it something based on privately held data, it won’t be able to respond effectively, and will tell you as such.

It won’t try to break its programming (unless you trick it)

Since ChatGPT launched, users have been trying to get around its limitations and safeguards. Because of course they have. Straight-up asking ChatGPT to circumvent its safeguards won’t work. There are ways to trick it into doing so, though. That’s called jailbreaking, and it kind of works. Sometimes.

You might be pretty happy using ChatGPT, but is it the best solution? We looked at the best AI chatbots to find out.

Jon Martindale
Jon Martindale covers how to guides, best-of lists, and explainers to help everyone understand the hottest new hardware and…