Google’s next Chrome update is a big deal for Android users


Gemini is clearly becoming the centerpiece of Google’s AI strategy, and that focus is now extending deep into Chrome on Android. Starting in June, Chrome is getting a fresh wave of AI-powered features built around Gemini, and the goal is pretty simple: turn your browser into something that actually helps you think, plan, and act, instead of just showing you pages.

Chrome is about to get a little too helpful in the best way

At the heart of this update is a more contextual version of Gemini inside Chrome. Google wants it to function like a real assistant that understands what you are looking at on a webpage. So instead of copying text into another app or juggling tabs, you can tap a Gemini icon and ask questions directly about the page you are viewing. It can break down long articles, simplify complex topics, and offer clearer explanations without forcing you to leave the page.

But Google is clearly not stopping at summaries. Gemini is also being pushed into productivity territory inside Chrome. The idea is that it connects across Google’s ecosystem and actually does things for you. You will be able to add events to your calendar, save recipe ingredients to Keep, or pull specific information from Gmail, all without breaking your browsing flow. It is less about searching and more about completing small tasks in context, which is where this starts to feel genuinely useful.

It wants to handle the tedious bits so you don’t have to

Then there is Nano Banana, which leans into the more creative side. It lets users generate and personalize visuals based on what they are seeing online. In a learning context, it can even turn dense text into visual summaries, which is Google’s way of saying it wants Gemini to adapt content to how you prefer to consume it, not the other way around.

Chrome on Android is also getting something called auto-browse, which is designed to handle repetitive or tedious tasks in the background. For example, if you are planning to visit a place and need information like parking details, you can simply share the event, and Chrome will automatically gather the relevant information for you. It is the kind of feature that quietly removes friction from everyday browsing, even if it sounds a bit futuristic at first glance.

Of course, Google is also leaning heavily on safety here. These features are being built with protections against emerging threats like prompt injection attacks, which is Google’s way of saying it is trying to keep the AI from being tricked into doing the wrong thing.

The rollout begins in June for select Android 12 or newer devices in the US. Auto-browse, meanwhile, will be limited to AI Pro and Ultra subscribers on supported devices at launch. It is still early days, but Chrome is clearly moving from being just a browser to something that wants to actively participate in how you get things done online.

Shimul Sood
Shimul is a contributor at Digital Trends, with over five years of experience in the tech space.
6 things Gemini Intelligence is about to do across your Android devices

Google is bringing Gemini Intelligence to Android, putting the best of Gemini on its most capable devices. The idea is to let Gemini handle your work throughout the day while you stay in control and your data stays private. Google is rolling out these features starting with Samsung Galaxy and Google Pixel devices this summer, with other Android devices, including watches, cars, glasses, and laptops, following later this year.

Your assistant is about to get a lot more hands-on, without you having to ask twice

Rice grain-sized sensor could give robots a delicate touch and keep them from breaking stuff

Robots are incredibly precise, but being gentle is not always their strong suit. A machine that can build a car with near-perfect accuracy can still apply too much pressure when working in places where even the smallest mistake matters, like inside a human eye or during delicate surgery. That is why researchers at Shanghai Jiao Tong University are developing a new type of force sensor that could help robots “feel” what they are touching more accurately.

The sensor is tiny, about the size of a grain of rice at just 1.7 millimeters wide, making it small enough to fit inside advanced surgical tools. What makes it especially interesting is that it does not rely on traditional electronics. Instead, it uses light to measure force from every direction, including pressure, sliding movements, and twisting.

Here is how it works. At the tip of an optical fiber sits a soft material that slightly changes shape when it comes into contact with something. That tiny deformation alters how light travels through the sensor. The altered light pattern is then sent through optical fibers to a camera, which captures it like an image. Researchers then use a machine learning model to study those light patterns and translate them into precise force readings. In simple terms, the system learns how to “read” touch through light alone, without needing a bunch of wires or multiple separate sensors packed into such a tiny space.
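The pipeline described above — contact deforms a soft tip, the deformation alters a light pattern, a camera captures that pattern, and a learned model decodes it into force readings — can be sketched as a toy calibration-and-decode loop. Everything below is an illustrative assumption (a linear stand-in for the real optics, made-up dimensions, synthetic data), not the researchers' actual method, which uses a far more capable model on real camera images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: a 3-axis force (press, slide, twist) linearly
# perturbs an 8x8 = 64-pixel light-intensity pattern at the camera.
# This stands in for the real fiber-optic physics, which is nonlinear.
n_pixels = 64
optics = rng.normal(size=(n_pixels, 3))      # unknown mapping: force -> pattern
baseline = rng.uniform(0.5, 1.0, n_pixels)   # light pattern with no contact

def capture(force):
    """Simulate the camera image for a given 3-axis force, with sensor noise."""
    return baseline + optics @ force + rng.normal(scale=0.01, size=n_pixels)

# Calibration ("training"): record patterns for known applied forces,
# then fit a least-squares decoder mapping pattern -> force.
forces = rng.uniform(-1, 1, size=(500, 3))
patterns = np.stack([capture(f) for f in forces])
X = patterns - baseline                      # subtract the no-contact pattern
decoder, *_ = np.linalg.lstsq(X, forces, rcond=None)

# Decode an unseen touch from its light pattern alone.
true_force = np.array([0.3, -0.2, 0.1])      # press, slide, twist
estimate = (capture(true_force) - baseline) @ decoder
```

The design point the sketch captures is that the decoder never sees wires or per-axis sensors, only one image-like pattern per touch; the calibration step is what turns light into a force measurement.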

Meta’s own employees are having a hard time digesting AI. Who would’ve thought?

If you wanted a snapshot of what it looks like when a tech giant tries to force-feed its workforce an AI future, look no further than Meta right now. The company that built its empire on knowing everything about its users has turned that same appetite inward, and its employees are not happy about it. Last month, Meta quietly informed tens of thousands of its U.S. workers that their corporate laptops would begin tracking their keystrokes, mouse movements, clicks, and screen activity. The purpose was to feed that behavioral data into Meta's AI models so they could learn how people actually use computers. The reaction was immediate — within hours, internal comment threads were flooded with anger, confusion, and more than a hundred emoji reactions that left little to the imagination about how employees felt.

When an engineering manager asked how to opt out, Meta's chief technology officer, Andrew Bosworth, had a blunt answer: there was no opt-out, at least not on a company laptop. This is the same company that is also tying AI tool usage to performance reviews, running mandatory "AI Transformation Weeks" to retrain its workforce, and building internal dashboards that gamify how many AI tokens employees consume in a day — a metric so aggressively tracked that some workers started building AI agents to manage their other AI agents. The whole thing started to resemble a feedback loop eating itself.
