ORA-X smart eyeglasses to challenge Google Glass in 2015 with $300 price tag

Google Glass is the most recognizable name in smart eyewear, but at $1,500 it’s not exactly the most viable option for the masses. The ORA-X, on the other hand, is a pair of smart eyeglasses that’s on track to be sold to consumers for a much more palatable $300 a pop next summer.

France-based eye-display technology company Optinvent is hardly a new player in the space; on the contrary, it has been working on augmented-reality eyeglasses for several years. With the launch of a Kickstarter campaign for its ORA-1 smart glasses for developers, the company appears ready to finally launch a mainstream product to challenge Google Glass.

Optinvent makes it clear on its Kickstarter page that the ORA-1 isn’t meant to be a consumer-ready pair of smart glasses. This pair is meant to be a platform for developers to build apps upon.

The ORA-1 features “Flip-Vu,” which lets the wearer switch between an augmented-reality mode, in which the virtual display sits in the center of the field of vision, and a glance mode, in which the display is angled downward and visible only at the bottom of the wearer’s view.

The smart eyeglasses can also run as a standalone Android device (currently on 4.2.2, with plans to upgrade to KitKat) natively running Android apps. “The ORA is like an Android tablet in the form of eyeglasses,” according to the ORA-1’s Kickstarter page.

While the developer-oriented ORA-1 looks quite unwieldy, Optinvent says the next generation will see a 60 percent “improvement in form factor” and a 40 percent improvement in power consumption. The ORA-1 is scheduled to ship in January 2015, and the consumer-ready ORA-X is scheduled to ship in June 2015.

Jason Hahn
Former Contributor
Jason Hahn is a part-time freelance writer based in New Jersey. He earned his master's degree in journalism at Northwestern…
Rice grain-sized sensor could give robots a delicate touch and keep them from breaking stuff

Robots are incredibly precise, but being gentle is not always their strong suit. A machine that can build a car with near-perfect accuracy can still apply too much pressure in places where even the smallest mistake matters, such as inside a human eye during delicate surgery. That is why researchers at Shanghai Jiao Tong University are developing a new type of force sensor that could help robots “feel” what they are touching more accurately.

The sensor is tiny, about the size of a grain of rice at just 1.7 millimeters wide, making it small enough to fit inside advanced surgical tools. What makes it especially interesting is that it does not rely on traditional electronics. Instead, it uses light to measure force from every direction, including pressure, sliding movements, and twisting.

Here is how it works. At the tip of an optical fiber sits a soft material that slightly changes shape when it comes into contact with something. That tiny deformation alters how light travels through the sensor. The altered light pattern is sent through optical fibers to a camera, which captures it like an image. A machine learning model then studies those light patterns and translates them into precise force readings. In simple terms, the system learns to “read” touch through light alone, without needing a bundle of wires or multiple separate sensors packed into such a tiny space.
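
To make that last step concrete, here is a minimal sketch of the image-to-force mapping, assuming a supervised setup in which camera frames of the fiber’s light pattern are regressed to a six-axis force/torque vector. The PyTorch framework, network size, 64x64 grayscale input, and six-axis output are all illustrative assumptions, not details from the Shanghai Jiao Tong work.

```python
# Sketch: map a camera image of the fiber's light pattern to a
# six-axis force/torque estimate. All sizes here are assumptions.
import torch
import torch.nn as nn

class LightPatternToForce(nn.Module):
    def __init__(self):
        super().__init__()
        # A small CNN is enough for a sketch: the light pattern is a
        # low-resolution structure, not a natural image.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Regress Fx, Fy, Fz (pressure, sliding) and Tx, Ty, Tz (twisting).
        self.head = nn.Linear(32 * 4 * 4, 6)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = LightPatternToForce()
frame = torch.rand(1, 1, 64, 64)  # one grayscale frame of the light pattern
force = model(frame)              # tensor of shape (1, 6)
print(force.shape)
```

In practice such a model would be trained on pairs of light-pattern images and ground-truth readings from a reference force sensor, which is what lets the system “learn” touch from light alone.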

Meta’s own employees are having a hard time digesting AI. Who would’ve thought?

If you wanted a snapshot of what it looks like when a tech giant tries to force-feed its workforce an AI future, look no further than Meta right now. The company that built its empire on knowing everything about its users has turned that same appetite inward, and its employees are not happy about it. Last month, Meta quietly informed tens of thousands of its U.S. workers that their corporate laptops would begin tracking their keystrokes, mouse movements, clicks, and screen activity. The purpose was to feed that behavioral data into Meta's AI models so they could learn how people actually use computers. The reaction was immediate — within hours, internal comment threads were flooded with anger, confusion, and more than a hundred emoji reactions that left little to the imagination about how employees felt.

When an engineering manager asked how to opt out, Meta's chief technology officer, Andrew Bosworth, had a blunt answer: there was no opt-out, at least not on a company laptop. This is the same company that is also tying AI tool usage to performance reviews, running mandatory "AI Transformation Weeks" to retrain its workforce, and building internal dashboards that gamify how many AI tokens employees consume in a day — a metric so aggressively tracked that some workers started building AI agents to manage their other AI agents. The whole thing started to resemble a feedback loop eating itself.

Sci-fi got the gadgets right, but the vibes wrong
Sci-fi got plenty of consumer tech right, but reality keeps delivering the useful, compromised version of the dream

I was recently waiting for an Uber when the GPS decided to lie for sport. The car was somewhere nearby, I was somewhere nearby, and somehow both of us were trapped in that modern ritual of wrong pins, slow turns, vague waving, and "I'm here" messages that help absolutely no one.

That was when I had a very reasonable thought: this is exactly where a hologram of a giant arrow pointing at me would be useful.
