As AR heads to Google search, Lens learns to translate, add tips, and more

Riley Young/Digital Trends

Computer vision puts the camera to use when you're at a loss for words, and Google Lens will soon do more than reverse-search for similar items or surface details about what's in a photo. During I/O on Tuesday, May 7, Google demonstrated new search capabilities powered by the camera, along with expanded Lens skills for calculating tips, translating text, and more.

During the keynote, Aparna Chennapragada, Google's vice president for camera and augmented reality products, demonstrated how Google's search results can use AR to bring 3D models into the room with you, without leaving the search results. A new "View in 3D" button pops up in the search results whenever 3D content is available.


Besides allowing users to examine the 3D object from every angle, the update will also bring that 3D item into AR, mixing the model with the feed from your camera so you can see the object in front of you. Chennapragada says the tool will be helpful for tasks such as research as well as shopping.

The camera feature for search is expected to arrive later in May. Partners like NASA, New Balance, Samsung, Target, Visible Body, Volvo, Wayfair, and others will be among the first to have their 3D content pop up in the search results.

As search becomes more camera-heavy, Google Lens is moving beyond simply searching with a camera. At a restaurant, Lens can soon scan the menu, highlight the most popular dishes, bring up photos, and even surface reviews from other diners using Google Maps. The camera first has to differentiate between the different menu options before matching the text with relevant results online. At the end of the meal, Lens will calculate the tip or split the bill with friends when you point the camera at the receipt.
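The arithmetic behind that tip-and-split feature is simple once the receipt total has been read off the image. A minimal sketch in Python, assuming the bill total and party size are already known (the function names are illustrative, not Lens's actual API):

```python
def calculate_tip(bill_total: float, tip_percent: float = 18.0) -> float:
    """Return the tip amount for a given bill total and tip percentage."""
    return round(bill_total * tip_percent / 100, 2)


def split_bill(bill_total: float, tip_percent: float, people: int) -> float:
    """Split the bill plus tip evenly among the diners."""
    total = bill_total + calculate_tip(bill_total, tip_percent)
    return round(total / people, 2)


# Example: an $84.00 dinner with a 20% tip, split four ways
tip = calculate_tip(84.00, 20)        # 16.80
per_person = split_bill(84.00, 20, 4) # 25.20
```

The hard part in practice is not this math but the optical character recognition that extracts the total from a crumpled receipt, which is the computer-vision work Lens handles before the calculation runs.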

Google Lens is also gaining the ability to translate text and read it aloud. While earlier versions could use Smart Text to highlight text to copy or translate, Lens can soon speak the text out loud or overlay the translated text on the original image in more than 100 languages. Alternatively, Lens can use text-to-speech in the original language, a feature that could be helpful for those with vision or reading difficulties.

The text-to-speech feature is launching first inside Google Go, a lightweight app designed for new smartphone users. Chennapragada says the team managed to fit those languages into just over 100KB of space, allowing the app to run on budget phones.

“Seeing is often understanding,” Chennapragada said. “With computer vision and AR, the camera is turning into a powerful visual tool to understand the world around you.”

Lens will also gain a handful of new features as part of partnerships. Readers of Bon Appetit, for example, can scan a recipe page to see a video of the dish being made. In June, Lens will uncover hidden details about paintings at San Francisco's de Young Museum.

The updates join a growing list of features for Google Lens like the ability to look up the artist behind a piece of artwork, shop for similar styles, or find the name of that flower you spotted. Google Lens, which has now been used more than a billion times, is available inside Google Assistant, Photos, and directly in the native camera app on a number of Android devices.

Hillary K. Grigonis