
I brought ChatGPT to the board game world. Is it ready for game night?


We all know that ChatGPT is great at speeding up mundane tasks. What could be drier than explaining the rules of a complicated game at board game night?

There’s no substitute for just knowing the game, but being able to reach for AI instead of the rulebook could make things a whole lot easier. Nothing derails a great game of Twilight Imperium like breaking out the Living Rules and endlessly scrolling. So, if I were to bring ChatGPT to board game night, I could definitely see it coming in handy. But before I subjected my friends to a robot reading them the rules, I decided to test it out with some basic questions to see if it was up to snuff.


A defined ruleset sounds ideal

I have no idea why ChatGPT knows the rules of so many games, but it does. Or at least it thinks it does. While it might be tricky to find an online manual for some of my collection, ChatGPT seems to have it all — some of those billions of data points it was trained on reportedly included the errata for something as obscure as the third Battlestar Galactica expansion.

That’s great, though, because it should know all the rules I don’t, right? Within the limited, constrained, and very particular environment of a board game, ChatGPT, with its shallow understanding of most things but extensive knowledge of certain topics, should be in its element.

Unfortunately, as with everything else ChatGPT confidently posits to know, it’s often not quite right, and sometimes it’s outright wrong.

ChatGPT tries to answer a question on board gaming.

The initial answer is partly correct: you don’t add the dice to the hunt pool. But hunt tiles aren’t added every turn, either. Maybe it’d be better if I had ChatGPT tell me where in the rulebook I can find this information?

Asking ChatGPT more board game questions.

Hmm. Apparently, putting it on the spot makes it “correct” itself and get the rule even more wrong than before. It thinks there might be multiple “Gandalfs” in the Fellowship, and that there are special “Will of the West” dice, rather than that being one of the possible results on the game’s action dice.

It then goes on to double down on that error by citing a page in the rulebook that has nothing to do with “The Hunt.” There is a section called “The Hunt for the Ring,” but it doesn’t appear until page 40.

War of the Ring rulebook.

But maybe this isn’t ChatGPT’s best game. Let’s give it one more chance to help with a game that’s somehow even bigger and more complicated than War of the Ring: Twilight Imperium.

ChatGPT answering board game queries.

Here, ChatGPT does an admirable job, in that it does get the answer right, but for the wrong reasons. You can’t take a home system because you can’t invade it, not because you can’t move ships there.

If that seems pedantic, I get it. I don’t like telling my friend they can’t do something because they’ve misunderstood the very specifically worded text on the card they’ve played. These details matter in games, and if I’m going to get ChatGPT to do it for me, I need to be able to fully trust it.

It’s back to reading the rulebooks over and over

This was just a snippet of my time quizzing ChatGPT on how to play my favorite games. It knew how to launch ships in Battlestar Galactica, even if it wasn’t clear about which part of your turn you do it in. It had a good idea of how to get cave tokens in Quest for El Dorado, but was very wrong about the cost you have to pay for them.

It did know Kingdom Death: Monster quite well, though, accurately reporting the stats of some of the monsters, and even making suggestions on how to modify those stats to my advantage.

It was a fun exercise seeing what ChatGPT knows about games, and it feels like one area where in the future, it could be invaluable. It wouldn’t even need to know all games. I can imagine a scenario where game publishers could have their own AI to help teach you their games, and I wouldn’t be surprised if it could act as a stand-in player one day too.
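A publisher’s assistant wouldn’t have to rely on a model’s fuzzy memory at all: it could look up the relevant passage in the actual rulebook and answer from that. Here’s a minimal sketch of that retrieval idea in Python, using simple word overlap to pick the best-matching section. The section titles and rules text below are invented placeholders, not quotes from any real rulebook.

```python
# A minimal sketch of grounding rules answers in the rulebook itself,
# rather than in a model's memory. The rulebook content is hypothetical.

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of words."""
    return set(text.lower().split())

def find_relevant_section(question: str, sections: dict[str, str]) -> tuple[str, str]:
    """Return the (title, text) pair sharing the most words with the question."""
    q_words = tokenize(question)
    return max(
        sections.items(),
        key=lambda item: len(q_words & tokenize(item[0] + " " + item[1])),
    )

# Hypothetical rulebook, split into titled sections.
rulebook = {
    "Setup": "Each player places their home system tile and gathers starting units.",
    "Movement": "Ships move along adjacent systems up to their move value.",
    "Invasion": "Ground forces may invade planets after space combat ends.",
}

title, text = find_relevant_section("Can I move my ships two systems away?", rulebook)
print(title)  # -> Movement
```

A real assistant would pass the retrieved passage to the model along with the question, so the answer (and the page citation) comes from the printed rules instead of training data — which is exactly where ChatGPT fell down above.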

And who knows, maybe GPT-4 in ChatGPT Plus would have already solved this problem. Perhaps there will be a time when ChatGPT can handle board games.

For now though, since I can’t trust it, it’s back to reading rulebooks on the toilet so that when one of the players has a question, I can answer it. Because ChatGPT can’t. Yet.

Jon Martindale
Jon Martindale covers how to guides, best-of lists, and explainers to help everyone understand the hottest new hardware and…