
Google is redefining the cursor for computers, and its AI-charged future looks ridiculous

Google’s Magic Pointer could be the next evolution of AI on laptops


The humble mouse pointer has barely changed in decades. It moves, clicks, selects, drags, and occasionally turns into a spinning wheel of frustration. Google now wants to turn that tiny arrow into one of the most powerful AI tools on your laptop, which sounds ridiculous until you think about how often you use it.

The company has announced Magic Pointer for Googlebook, its new category of Gemini-powered laptops. The feature gives the cursor AI abilities, allowing it to understand what you are pointing at and help you act on it without needing a long prompt or a separate chatbot window.

Can the cursor become the new AI button?

In a new DeepMind post, the company explained how it is rethinking the pointer for the AI era. The idea is to make Gemini understand the exact part of a webpage, image, table, document, or video frame the user is referring to. That turns the cursor from a basic navigation tool into a kind of AI remote control for the entire screen.

This is where the whole thing starts to sound wonderfully absurd. A pointer could turn a table into a chart, compare products you select on a webpage, summarize a PDF into bullets for an email, or identify a building in a photo and pull up directions. The cursor, once used mainly to click tiny buttons, is suddenly being asked to understand context, intent, and action.

Why does this matter for Googlebooks?

Google has taken inspiration from the way people already communicate offline. You usually do not describe every object in a room before asking someone to move it. You point and say, “move this” or “fix that.” Magic Pointer brings that same idea to the screen. The cursor tells Gemini what you are referring to, while short commands such as “add this,” “merge those,” or “what does this mean?” tell it what action to take.

This new feature will be deeply integrated into Googlebook laptops, as Magic Pointer is being announced as part of that platform. That means Googlebook users should be able to use it more freely across the laptop experience, instead of being limited to a single app or browser window.


For everyone else, this AI pointer will be limited to Gemini in Chrome for now. Google says users can point to specific parts of a webpage and ask questions, such as comparing multiple selected products, summarizing technical specs from a product listing, or instantly converting prices into a different currency.

If Magic Pointer works well, everyday AI tasks may no longer need a prompt box at all.

Sudhanshu Kumar Mangalam
I’ve got about 4 years of experience, mostly covering gaming, PC hardware, and smartphones. In my free time, I like…
6 things Gemini Intelligence is about to do across your Android devices

Google is bringing Gemini Intelligence to Android, putting the best of Gemini on its most capable devices. The company wants Gemini to handle your work throughout the day, all while you stay in control and keep your data private. These features begin rolling out on Samsung Galaxy and Google Pixel devices this summer, and will reach other Android devices, including watches, cars, glasses, and laptops, later this year.

Your assistant is about to get a lot more hands-on, without you having to ask twice

Google’s next Chrome update is a big deal for Android users

Gemini is clearly becoming the centerpiece of Google’s AI strategy, and that focus is now extending deep into Chrome on Android. Starting in June, Chrome is getting a fresh wave of AI-powered features built around Gemini, and the goal is pretty simple: turn your browser into something that actually helps you think, plan, and act, instead of just showing you pages.

Chrome is about to get a little too helpful in the best way

Rice grain-sized sensor could give robots a delicate touch and keep them from breaking stuff

Robots are incredibly precise, but being gentle is not always their strong suit. A machine that can build a car with near-perfect accuracy can still apply too much pressure when working in places where even the smallest mistake matters, like inside a human eye or during delicate surgery. That is why researchers at Shanghai Jiao Tong University are developing a new type of force sensor that could help robots “feel” what they are touching more accurately.

The sensor is tiny, about the size of a grain of rice at just 1.7 millimeters wide, making it small enough to fit inside advanced surgical tools. What makes it especially interesting is that it does not rely on traditional electronics. Instead, it uses light to measure force from every direction, including pressure, sliding movements, and twisting.

Here is how it works. At the tip of an optical fiber sits a soft material that slightly changes shape when it comes into contact with something. That tiny deformation alters how light travels through the sensor. The altered light pattern is sent through optical fibers to a camera, which captures it like an image. A machine learning model then analyzes those light patterns and translates them into precise force readings. In simple terms, the system learns to "read" touch through light alone, without needing a bunch of wires or multiple separate sensors packed into such a tiny space.
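That pipeline, a light pattern going in and a force estimate coming out, can be sketched in miniature. The snippet below is purely a hypothetical illustration on synthetic data: the researchers' actual system maps camera images to forces with a trained machine learning model, while this sketch stands in a simple linear least-squares fit for that learned mapping, and all names and dimensions here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the physics: each applied force (Fx, Fy, Fz)
# perturbs a flattened 8x8 "light pattern" roughly linearly, plus noise.
n_pixels = 64
true_map = rng.normal(size=(3, n_pixels))         # force -> light pattern
forces = rng.uniform(-1.0, 1.0, size=(200, 3))    # calibration forces
patterns = forces @ true_map + 0.01 * rng.normal(size=(200, n_pixels))

# "Training": recover the inverse mapping (pattern -> force) from the
# calibration data by least squares, standing in for the ML model.
inverse_map, *_ = np.linalg.lstsq(patterns, forces, rcond=None)

# "Inference": estimate the force behind a new, unseen light pattern.
test_force = np.array([0.3, -0.5, 0.1])
test_pattern = test_force @ true_map
estimate = test_pattern @ inverse_map

print(np.round(estimate, 2))
```

The design point the sketch captures is the indirection: nothing electronic sits at the tip; the sensing end only deforms and reshapes light, and all the interpretation happens downstream in software.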
