What you see is not what you get when it comes to digital eyewear, as Satyen K. Bordoloi profiles its evolution and future in the age of AI.


In 2018, while writing a film about a blind detective, I imagined her solving cases with the help of AI-powered smart glasses. The spectacles were equipped with AI, a camera, and a radar that relayed everything they saw and sensed – like distance – to her ears. In a crunch situation around the midpoint, she even drives a car. Written five years before the Generative AI revolution, it sounded like futuristic science fiction. Today, with numerous such eyewear options available on the market, it has become mundane everyday reality.

Google Glass brought the term “Glasshole” into our lexicon, a label for early adopters more interested in recording their surroundings than engaging with them (Image Courtesy: Wikipedia)

My inspiration for those glasses came from a classic tech tragedy: a giant so far ahead of the curve that it fell off the edge of the map. This was the fate of Google Glass, the cyborgian spectacles that, in 2013, promised to beam the internet right in front of our retinas. Instead, they beamed the term “Glasshole” into our lexicon, a label for early adopters more interested in recording their surroundings than engaging with them. Google Glass was a solution in desperate search of a problem, and it became a case study in how not to introduce a world-changing technology.

Google eventually shut down the consumer version of Glass in 2015. But looking around today, one wonders: was it hasty in doing so? Because the dream of digital eyewear didn’t die; it evolved, powered by an unstoppable force: Artificial Intelligence.

The Oculus Rift CV1 (Consumer Version 1), a virtual reality headset made by Oculus VR and released in 2016 (Image Courtesy: Wikipedia)

From Sci-Fi Fantasy to (Clunky) Reality

Long before Sergey Brin wore Glass at a charity event, cinema had already imagined it. The vision of both the Terminator and the Predator parsed the world with chilling, data-rich efficiency. In Mission: Impossible – Ghost Protocol, an agent had contact lenses that could take images, overlay data, and transmit visuals remotely. And let’s not forget Iron Man, whose helmet display showed everything from diagnostic data and facial recognition to weapons-targeting systems.

After simmering in academic and industrial circles for decades, the real commercial reality of this sci-fi fantasy began in the 2010s. Palmer Luckey’s Oculus Rift Kickstarter reignited the dream of consumer VR (Virtual Reality) glasses, launching in August 2012 and raising over $2.4 million, nearly ten times its $250,000 goal. This became a watershed moment for consumer VR when Facebook (now Meta) acquired Oculus VR in March 2014 for approximately $2 billion in cash and stock.

Suddenly, the race was on. But these devices – like Oculus’s DK1 and DK2 – were bulky, tethered to powerful PCs, and saddled with low-resolution screens and limited positional tracking, leaving many users motion-sick. They were great for gamers and a handful of niche applications, but for the average person trying to watch a movie, they were a hard sell.

The eyes of the girl that you see in this image are not her real eyes but a live projection on a lenticular screen (Image Courtesy: Apple)

From Dumb Glass to Smart Lens

Two waves define this AR (Augmented Reality)/VR story. The first was mainly about hardware: better screens, faster processors, more precise gyroscopes. The second, current revolution is about intelligence. AI is the magic ingredient that transforms a wearable screen into a contextual companion. Consider the difference between Google Glass then and now.

Where the old smart glasses saw a barcode, scanned it, and pulled up a product page, today’s can look at a complex machine, recognise every single part, and overlay the official repair manual directly onto the components, guiding your hands through the fix with animated arrows.

AI – along with radar, lidar, and other sensors – provides the spatial understanding and contextual awareness that early devices lacked. Where those devices merely displayed information, today’s understand the environment and deliver whatever the user might need.

Computer vision can now identify objects, translate foreign text on a menu in real time, and recognise faces. Machine learning algorithms learn your preferences over time and anticipate what information you might need. Large Language Models on these devices could soon act as a real-time, conversational guide to the world around you, much like the AI spectacles guiding my fictional blind detective in 2018.

This AI infusion is what makes the modern concept not just wearable computing, but spatial computing: a term Apple is banking its entire wearable glasses vision on.

Ray-Ban and Meta’s smart glasses are a first step into a future where smart glasses are an everyday reality (Image Courtesy: Wikipedia)

The New Players and a Market Explosion

Today, the market is no longer a niche. According to market research firms such as IDC and Mordor Intelligence, the global AR and VR market was valued at over $30 billion in 2023 and is projected to exceed $100 billion by 2028. And its adopters aren’t just gamers: enterprise training, healthcare simulations, remote collaboration, retail visualisation, collaborative design, and more are driving growth.

Meta has sold millions of its Quest headsets, dominating the affordable VR space. Microsoft’s HoloLens 2 established a strong foothold in the enterprise, enabling engineers and surgeons to work with complex 3D models (though Microsoft has since announced its exit from HoloLens hardware development). But the event that truly signalled the category’s second coming was the announcement of the Apple Vision Pro.

With this device, Apple did what it has done before: it didn’t just release a product; it reframed the entire conversation. This isn’t just a VR headset for gaming, Apple argued; nor is it AR glasses. Instead, the company called it a “spatial computer”. And indeed, with its breathtaking displays and intuitive eye-and-hand-tracking interface, the Vision Pro is what Google Glass never even dreamed of being. It is not a device for surreptitiously recording the world, but one for immersion, productivity, and entertainment, designed (somewhat) with social acceptance in mind via its front-facing “EyeSight” display. It’s expensive, it’s a first-generation product, and it has its quirks, but it has made the world take the category seriously again. Apple is betting that the future of computing isn’t in your pocket or on your desk – it’s all around you.

Another collaboration taking smart glasses to the masses – a stylish leap into wearable tech – is the one between Ray-Ban and Meta. Their latest Ray-Ban Meta smart glasses feature a discreet 12MP camera, open-ear speakers, and five microphones, all packed into classic Wayfarer and Headliner frames. Users can take photos, record immersive videos, make calls, and even livestream directly to Instagram or Facebook, all hands-free. The glasses respond to voice commands such as “Hey Meta” and can provide real-time assistance, translations, and contextual information about what you’re seeing. Sales numbers suggest the device has been a success.

Microsoft’s HoloLens 2 found a firm footing in enterprise, helping engineers and surgeons work with complex 3D models (Image Courtesy: Wikipedia)

The Future

Through the Looking Glass: So, where do we go from here? The trajectory is clear: smaller, lighter, smarter, more integrated, and – as Apple is learning – reasonably priced devices. The ultimate goal is a pair of normal-looking glasses that can seamlessly toggle between AR overlays and full VR immersion. We may be years away from that, but Meta and startups like Brilliant Labs are already pushing the boundaries with functional, if limited, spectacles.

Future glasses will be like Jarvis in Iron Man – always-on companions that won’t just be a screen but everything else: photographer, translator, personal assistant, health monitor. They’ll remind you of names, warn you of mistakes you’re making (say, in an article like this one, written while wearing dumb spectacles), and analyse the nutritional content of your meal the moment it is served.

Of course, this future is not without its perils. The privacy concerns that contributed to the downfall of Google Glass are resurfacing with a vengeance. A world with such glasses everywhere is a world of constant, passive surveillance. What if these devices, like some phones today, are always on, relaying their recordings and observations back to the company? Who owns that data? These and many other societal questions need to be answered before the tech truly becomes ubiquitous.

The dream born in sci-fi novels and films is now, finally, coalescing into reality. It’s no longer a question of if we’ll all be wearing computers on our faces, but when and how. And this time, with the power of AI and the lessons of the past, we might just get it right. The future is looking bright. And it’s looking right back at us.


Satyen is an award-winning scriptwriter and journalist based in Mumbai. He loves to let his pen roam the intersection of artificial intelligence, consciousness, and quantum mechanics. His written words have appeared in many Indian and foreign publications.
