
Sony’s table tennis robot made me think about what happens when AI gets a body

Ace starts as a flashy sports demo and quickly turns into a preview of AI moving from screens into factories, hospitals, farms, and homes

Image: Sony

I wanted to dismiss Sony’s table tennis robot as another expensive lab flex. A machine that can rally against elite players is impressive, sure, but it also sounds like the kind of demo built to make executives clap in a room where everyone already agreed to be impressed.

But table tennis is a nastier test than it looks. The ball is small, fast, spinning, and rude enough to change direction the moment it hits the table. Sony’s system faces something less forgiving than calculation. It has to see, predict, and act before the point is gone.


Sony tested Ace against five elite players and two professionals under official competition rules, and the robot came away with several wins.

The more useful detail is what it had to handle during those matches: fast, high-spin shots that change direction after the bounce and punish even small delays. In plain English, Ace wasn’t just hitting the ball back. It was reading motion, making a prediction, and moving before the rally escaped it.

AI is leaving the board

The usual “AI beats human” headline undersells what Ace is actually testing. We’ve already seen that story in cleaner arenas. IBM’s Deep Blue beat Garry Kasparov in 1997, and the symbolism still hangs over every old contest between human skill and machine calculation.

But chess, for all its strategic depth, is polite to computers. The board doesn’t wobble. The pieces don’t spin. A knight never comes screaming back at 60 miles per hour because someone clipped it at a nasty angle.

Sony’s robot points to a different shift. When AI has to move, intelligence becomes a timing problem. The system has to read the world quickly enough to act inside it. That’s more useful, and much harder to keep neatly boxed in.
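To see why timing dominates, consider a toy sketch (this has nothing to do with Sony's actual software, and every number and name below is invented for illustration): the ball's flight time sets a hard budget, and the whole see-predict-act pipeline has to fit inside it.

```python
# Toy illustration: why embodied AI is a timing problem.
# All values are invented; this is not Sony's system.

def predict_crossing(ball_x, ball_vx, paddle_plane_x):
    """Predict how long until the ball reaches the paddle's plane,
    assuming (simplistically) constant velocity in one dimension."""
    if ball_vx <= 0:
        return None  # ball moving away; nothing to intercept
    return (paddle_plane_x - ball_x) / ball_vx  # seconds until crossing

def can_respond(time_to_crossing, sense_latency, actuate_time):
    """The entire see-predict-act pipeline must fit inside the flight time."""
    return (time_to_crossing is not None
            and time_to_crossing > sense_latency + actuate_time)

# A ball 2.0 m away, closing at 20 m/s, leaves a 100 ms total budget.
t = predict_crossing(ball_x=0.0, ball_vx=20.0, paddle_plane_x=2.0)
print(t)                                            # 0.1 seconds
print(can_respond(t, sense_latency=0.02, actuate_time=0.05))  # True: 70 ms used
print(can_respond(t, sense_latency=0.02, actuate_time=0.09))  # False: budget blown
```

The point of the sketch is the inequality, not the physics: shave the flight time, or add spin that invalidates the prediction after the bounce, and a system that "knows" the right answer still loses the point because it cannot act in time.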

The body changes the problem

This is where the table tennis demo starts doing more work. A robot that can track spin, predict motion, and adjust its response in real time isn’t automatically a factory worker, warehouse picker, nurse assistant, farmhand, or disaster-response machine. That leap would be too neat, which usually means it’s wrong.

The broader robotics market is already well past the cute-demo stage. The International Federation of Robotics says 542,000 industrial robots were installed in 2024, more than double the figure from a decade earlier. It expects installations to reach 575,000 in 2025 and pass 700,000 by 2028. That doesn’t make Ace a factory product, but it does make it part of a bigger automation story that’s already showing up on production floors.

On controlled industrial floors, robots need to handle variation instead of repeating one perfect motion forever. In logistics, they face crushed boxes, bad angles, missing labels, and people walking through the wrong lane at the worst possible time. Outdoors, mud, weather, uneven ground, and produce shaped by nature aren’t known for respecting software requirements.

The labor side is where the story gets less cute. McKinsey estimates that today’s technology could theoretically automate activities accounting for about 57% of current US work hours. That isn’t a clean jobs-lost number, and McKinsey is careful about that point.

The pressure is subtler and probably messier: tasks get split apart, roles get redesigned, and some workers discover that “efficiency” has a habit of arriving with a spreadsheet and a forced smile.

Some settings raise the penalty for being wrong. A chatbot that gets something wrong can waste an afternoon. A robot that misreads a patient’s balance, a wheelchair, or a hospital hallway can do real damage. The more embodied AI becomes, the less forgiving its mistakes get.

The bill comes with the body

The infrastructure doesn’t disappear when AI gets legs, wheels, or a robot arm. It still depends on chips, data centers, cooling systems, electricity, water, and a grid that wasn’t built around every company suddenly discovering it needs more compute.

The International Energy Agency expects global data center electricity consumption to double to around 945 TWh by 2030, representing just under 3% of global electricity consumption. That share may sound small until a local grid, a water system, or a community near a new data center has to absorb the concentration.

It’s not all grim, though. Smarter robots could reduce factory waste, help inspect dangerous sites, improve precision agriculture, and take on work that breaks human bodies for a living. The upside is real, but so is the cost.

Deep Blue made AI feel powerful inside a board game. Ace makes it feel like the board is gone, and the pieces are now factories, hospitals, farms, grids, and workers trying to guess what happens next.

Asimov imagined robots bound by rules. The version we’re actually building may be bound first by economics.

Paulo Vargas
Paulo Vargas is an English major turned reporter turned technical writer, with a career that has always circled back to…