At TED2025, Google unveiled sleek smart glasses equipped with a heads-up display (HUD), though it characterized them as “conceptual hardware”.
Shahram Izadi, who leads Android XR at Google, demonstrated the HUD glasses alongside Samsung’s forthcoming XR headset. His 15-minute presentation is now publicly available to watch.
The smart glasses come equipped with a camera, microphones, and speakers, similar to the Ray-Ban Meta glasses, but add a “small high-resolution in-lens display that is full color.” The display appears to be monocular; light could be seen bending in the right lens at certain angles during the demonstration, and the field of view looks relatively limited.
Izadi’s demonstration centered on Gemini, Google’s multimodal conversational AI, running Project Astra, which lets it remember what it sees by “continuously encoding video frames, merging video and speech inputs into a timeline, and caching this data for effective recall”.
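Google hasn’t shared implementation details beyond that quoted description, but as a rough illustration of the idea, here is a minimal sketch of a rolling audio-visual timeline cache. It assumes a simple fixed time window and naive keyword lookup; the names (ContextTimeline, TimelineEvent, recall) and the caption-style frame payloads are hypothetical stand-ins for whatever encoding, retrieval, and caching Astra actually performs.

```python
from collections import deque
from dataclasses import dataclass
from time import time

@dataclass
class TimelineEvent:
    timestamp: float
    kind: str     # "frame" or "speech"
    payload: str  # e.g. a caption for an encoded frame, or a transcript snippet

class ContextTimeline:
    """Rolling cache that merges encoded video frames and speech into one timeline."""

    def __init__(self, window_seconds: float = 600.0):
        self.window_seconds = window_seconds
        self.events: deque[TimelineEvent] = deque()

    def add(self, kind: str, payload: str) -> None:
        now = time()
        self.events.append(TimelineEvent(now, kind, payload))
        # Drop events older than the rolling window so the cache stays bounded.
        while self.events and now - self.events[0].timestamp > self.window_seconds:
            self.events.popleft()

    def recall(self, query: str) -> list[TimelineEvent]:
        # Naive keyword recall; a real system would use embeddings and a retriever.
        terms = query.lower().split()
        return [e for e in self.events if any(t in e.payload.lower() for t in terms)]

# Example: frames and speech land on one timeline, then get recalled later.
timeline = ContextTimeline(window_seconds=600)
timeline.add("frame", "shelf with a white book and a hotel keycard next to a music record")
timeline.add("speech", "what's the title of the white book on the shelf behind me?")
for event in timeline.recall("white book"):
    print(event.kind, event.payload)
```

The point of the sketch is the shape of the pipeline: both modalities land on a single, time-ordered, size-bounded structure, which is what would make a question like “where is my hotel keycard” answerable minutes after the object left view.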
Here’s an overview of the features showcased by Izadi and his colleague Nishtha Bhatia during the demo:
- Basic Multimodal: Looking out at the audience, Bhatia asked Gemini to write a haiku based on what it could see, and it replied: “Faces all aglow. Eager minds await the words. Sparks of thought ignite.”
- Dynamic Contextual Memory: After glancing away from a shelf holding various items, Bhatia asked Gemini for the title of “the white book that was on the shelf behind me,” to which it responded correctly. She then posed a more challenging question, inquiring where her “hotel keycard” was without referencing the shelf, and Gemini accurately identified that it was located near the music record.
- Advanced Multimodal: While holding an open book, Bhatia questioned Gemini about the meaning of a diagram, and it provided the correct answer.
- Language Translation: Bhatia looked at a Spanish-language sign and, without telling Gemini what language it was in, asked for an English translation. It succeeded, and to prove the demo was live, Izadi asked the audience to pick another language; they chose Farsi, and Gemini translated the sign into Farsi correctly.
- Multi-language Capability: Bhatia communicated with Gemini in Hindi without needing to switch any language “mode” or “setting,” receiving an immediate response in Hindi.
- Action Initiation (Music): To show how Gemini can trigger actions on a phone, Bhatia looked at a physical album she was holding and asked Gemini to play a track from it. The song began streaming from the phone to the glasses over Bluetooth.
- Navigation Assistance: Bhatia asked Gemini to “navigate me to a park nearby with views of the ocean.” Looking straight ahead, she saw a 2D turn-by-turn guide; glancing down revealed a 3D (though fixed in place) minimap of the route.
This isn’t Google’s initial introduction of smart glasses featuring a heads-up display, nor is it the first time the capabilities of Project Astra have been highlighted. Nearly a year ago at Google I/O 2024, the company presented a brief pre-recorded demonstration of the technology.
However, the glasses shown at TED2025 were notably slimmer than last year’s model, suggesting the company has continued to miniaturize the hardware with the intention of eventually shipping a product.
Nevertheless, Izadi still describes the glasses as “conceptual hardware,” and no specific product or timeline has been announced.
In October, Sylvia Varnham O’Regan of The Information reported that Samsung is developing a competitor to the Ray-Ban Meta glasses using Google Gemini AI, although it’s uncertain whether this device will incorporate a HUD.
If it does include a HUD, it won’t be alone in the market. Beyond the numerous startups that showed prototypes at CES, Mark Zuckerberg’s Meta is reportedly set to launch smart glasses with a HUD later this year.
Much like Google’s model displayed at TED2025, Meta’s version is said to feature a small display in the right eye, with a strong emphasis on multimodal AI (specifically Meta AI driven by Llama).
In contrast to Google’s primarily voice-controlled glasses, Meta’s HUD glasses are expected to also be controllable through finger gestures, detected by an included sEMG neural wristband.
Additionally, Apple is said to be working on its own smart glasses, aiming for a product launch in 2027.
All three tech giants appear eager to build on the early success of the Ray-Ban Meta glasses, which have surpassed 2 million units sold and whose production is reportedly being significantly ramped up.
Anticipate a highly competitive landscape for smart glasses in the years to come as these major players vie for dominance in the AI technology that interprets what you perceive and augments your view with visual content.