Google Acquires Eyefluence to Expand Focus on Multimodal Interaction

Last month, startup Eyefluence quietly announced on its blog that Google had acquired the company. Welcome to the dawn of a new era in human-computer interaction.

Google Gains Eye Tracking Technology

Eyefluence, a startup founded by LeapPad creator Jim Marggraff, has developed eye tracking technology that allows people to control computers with their eyes, primarily in virtual reality (VR) and augmented reality (AR). Control is achieved not by blinking or some other physical command, but simply by looking at something. Marggraff says this visual user interface (UI) takes less than 2 minutes to learn. In 2014, U.S. researchers found that the human brain can interpret images the eye sees in as little as 13 milliseconds. It is hard to fathom the speed at which we could manipulate a computer using only our eyes.

Evolution of Computer Interfaces

Thanks to Moore’s Law, the power and usefulness of computers have grown exponentially. However, the way humans interface with computers has been slow to evolve and has not kept pace with the computing power available today. Our interfaces have progressed from keystroke commands and graphical user interfaces (GUIs) to touchscreens. As we move slowly into a world where computing becomes three-dimensional through VR and AR, new interfaces are required. In these early days of VR, we are seeing hand controllers and talk of exoskeletons that let us use our hands to navigate, manipulate, and generally control the interaction. If VR and AR are to become mainstream, these types of mechanical tools will not suffice; they are too cumbersome and expensive. Most experts believe the evolved interface will be a multimodal stew of hand gestures, voice commands, and eye tracking.

This brings us back to Eyefluence and Google. Google’s investment in VR is growing rapidly with the introduction of Daydream and now Eyefluence. Look for Google to reap immediate (within the next 6 to 12 months) benefits from integrating Eyefluence’s capabilities into Daydream, specifically for foveated rendering. This technique takes advantage of the fact that the human eye can only see a limited area in full detail: by determining where the eye is focused, the system renders objects in the periphery at lower detail. The result is a reduced load on the GPU with no discernible degradation of the user experience. A lighter load on the GPU also means less battery drain, which has been a key market barrier for mobile VR.
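To make the idea concrete, here is a minimal sketch of how a foveated renderer might map gaze data to rendering detail. The zone radii and scale factors are illustrative assumptions, not values used by Eyefluence, Google, or Daydream.

```python
import math

def detail_scale(pixel, gaze, fovea_radius=200, mid_radius=500):
    """Return a rendering-resolution scale (1.0 = full detail) for a pixel,
    based on its distance in screen pixels from the tracked gaze point."""
    dist = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1])
    if dist <= fovea_radius:
        return 1.0   # foveal zone: render at full resolution
    if dist <= mid_radius:
        return 0.5   # transition zone: half resolution
    return 0.25      # periphery: quarter resolution, barely noticeable to the eye

if __name__ == "__main__":
    gaze = (960, 540)  # the eye tracker reports the user is looking here
    for point in [(970, 550), (1300, 540), (100, 100)]:
        print(point, "->", detail_scale(point, gaze))
```

The GPU savings come from the last two branches: most of the screen falls outside the foveal zone, so most pixels can be shaded at a fraction of full resolution.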

But that is not all. The next version of Android, known as Android N, will feature a special mode for VR apps.

As reported by TechCrunch in May:

This new mode gives VR apps exclusive access to the device’s processor cores when they are in the foreground. Combined with an improved sensor pipeline and the work Google did on bringing support for the Vulkan graphics API to Android, the company claims the VR mode can bring latency on the Nexus 6P down to about 20 milliseconds, which is pretty much the gold standard for mobile VR at the moment.

So with this new VR Mode, apps now get full access to all of the power of the phone’s CPU and GPU to render images as fast as possible. The team also changed how its graphics buffering works by using a single buffer and having the app chase the scan line on the screen.
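As a rough illustration of why the “chase the scan line” approach quoted above reduces latency, the toy calculation below compares it with conventional double buffering. The numbers (60 Hz refresh, 8 strips) are illustrative assumptions, not measurements of Android N or any real device.

```python
REFRESH_MS = 1000 / 60   # ~16.7 ms for one top-to-bottom scan of the display
NUM_STRIPS = 8           # frame divided into horizontal bands

# Double buffering: a newly drawn pixel can wait up to a full refresh for the
# buffer swap, then up to another refresh before the scan line displays it.
double_buffer_worst = 2 * REFRESH_MS

# Single-buffer scan-line chasing: each strip is rendered just before the scan
# line reaches it, so the wait is at most one strip's worth of scan-out time.
chasing_worst = REFRESH_MS / NUM_STRIPS

print(f"double buffering, worst case: {double_buffer_worst:.1f} ms")
print(f"scan-line chasing, worst case: {chasing_worst:.1f} ms")
```

The point is not the exact figures but the shape of the trade-off: writing directly into the front buffer just ahead of scan-out removes whole-frame queuing delays, which is how latencies near the 20 millisecond mark become achievable on mobile hardware.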

Google will certify phones as Android VR-ready, and unsurprisingly, the Nexus 6P, the company’s own current flagship phone, is the first to receive this label. Google noted that Samsung, HTC, LG, Huawei, ZTE, and others are also releasing VR-ready phones or having older phones certified.

Long-Term Investment in Virtual and Augmented Reality

In our Virtual Reality for Enterprise and Industrial Markets and Virtual Reality for Consumer Markets reports published in 3Q 2015, Tractica projected that Google would increase its leadership in VR market adoption. We will publish updates to those reports soon.

The Eyefluence acquisition is a validation of Google’s long-term investment in and vision for VR and AR. Google sees a significant amount of computing moving into VR and AR; future generations will face no learning curve or friction in computer interaction. Just as the touchscreen became a natural, easy-to-use interface for everyone from toddlers to seniors, eye tracking, gesture, and voice commands will eliminate any remaining barriers to human-computer interaction within (and even outside of) 3D computing.
