Companies keep trying to make glassholes happen. Understandably. After the smartphone and the wrist, the face is the next logical battlefield for computational space, if decades of science fiction movies have taught us anything. But we’ve seen Google Glass, Snapchat Spectacles, Magic Leap and whatever that thing Samsung just semi-announced was.
Contact lenses have come up in that same conversation for some time as well, but technical limitations have set the bar much higher than for a heads-up display in a standard pair of spectacles. California-based Mojo Vision has been working on the breakthrough for a number of years now, and has a lofty sum to show for it: $108 million in funding, including a $58 million Series B closed back in March.
The technology is compelling, certainly. I met with the team in a hotel suite at CES last week and got a walkthrough of some of the things they’ve been working on. While executives say they’ve been dogfooding the technology for some time now, the demos were still pretty far removed from an eventual in-eye augmented reality contact lens.
Rather, the two separate demos essentially involved holding a lens or device close to my eye in order to get a feel for what an eventual product would look like. The reason is twofold. First, most of the work is still being done off-device at the moment, while Mojo works to perfect a system that can exist within the confines of a contact lens while needing to be charged only once in a 25-hour cycle. Second, there’s the practical matter of trying on a pair of contacts during a brief CES meeting.
I will say that I was impressed by the heads-up display capabilities. In the most basic demo, monochrome text resembling a digital clock is overlaid on images. Here, miles per hour are shown over videos of people running. The illusion has some depth to it, with the numbers appearing as though they’re a foot or so out.
In another demo, I donned an HTC Vive. Here I’m shown live video of the room around me (XR, if you will), with notifications. The system tracks eye movements, so you can focus on a tab to expand it for more information. It’s a far more graphical interface than the other example, with full calendars, weather forecasts and the like. You can easily envision how the addition of a broader color palette could give rise to some fairly complex AR imagery.
Mojo is using CES to announce its intentions to start life as a medical device. In fact, the FDA awarded the startup a Breakthrough Device Designation, meaning the technology will get special review priority from the government body. That’s coupled with a partnership with Bay Area-based Vista Center for the Blind and Visually Impaired.
That ought to give a good idea of Mojo’s go-to-market plans. Before selling itself as an AR-for-everyone device, the company is smartly going after visual impairments. It should occupy similar space as the many “hearable” companies that have applied for medical device status to offer hearing-enhancing Bluetooth earbuds. Working with the FDA should go a long way toward fast-tracking the technology into optometrists’ offices.
The idea is to have them prescribed in a similar fashion as contact lenses, while added features like night vision will both aid people with visual impairments and potentially make those with better vision essentially bionic. You’ll go to a doctor, get a prescription, and the contact lenses will be mailed to you, lasting about as long as a normal pair. They’ll be pricier, of course, and questions remain about how much insurance companies will shell out.
In their final state, the devices should last a full day, recharging in a cleaning case in a manner not dissimilar from AirPods (though those, sadly, don’t also clean the product). The lenses will have a small radio on-board to communicate with a device that hangs around the neck and relays information to and from a smartphone. I asked whether the plan was to eventually phase out the neck device, to which the company answered that, no, the plan was to phase out the smartphone. Fair play.
I also asked whether the company was working with a neurologist in addition to its existing medical staff. After 10 years of smartphone ubiquity, it seems we’re only starting to get clear data on how those devices impact things like sleep and mental well-being. I have to imagine that’s only going to be exacerbated by the feeling of having those notifications more or less beaming directly into your brain.
Did I mention that you can still see the display when your eyes are closed? Talk about a (pardon my French) mind fuck. There will surely be ways to silence or disable these things, but as someone who regularly falls asleep with his smartphone in hand, I admit that I’m pretty weak when it comes to the issue of digital dependence. This feels like injecting that stuff directly into my veins, and I’m here for it, until I’m not.
We still have time. Mojo’s still working on the final product. And then it will need medical approval. Hopefully that’s enough time to more concretely answer some of these burning questions, but given how things like screen time have played out, I have some doubts on that front.
Stay tuned on all of the above. We’ll be following this one closely.
by: Brian Heater via https://www.AiUpNow.com/