This earphone wearable recreates facial expressions by bouncing sound off your cheeks

EarIO works like a ship sending out pulses of sonar.

Sonar Earphone Device

Image: Ke Li / Cornell University

Researchers at Cornell University have developed a wearable earphone device – or ‘earable’ – that uses sonar to detect and recreate the wearer’s facial expressions.

Named EarIO, the system works by bouncing sound off the user’s cheeks and using the echoes to render the wearer’s entire moving face on an avatar.

A speaker on each side of the earphone sends acoustic signals to the sides of the wearer’s face, while a microphone picks up the echoes, which change as the wearer speaks, smiles or raises their eyebrows. A deep learning algorithm then processes the data and translates the shifting echoes into complete facial expressions.
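The article does not publish EarIO’s actual signal-processing code, but the core idea of sonar-style sensing can be sketched in a few lines: emit a known chirp, record what comes back, and cross-correlate the recording with the chirp so that peaks reveal reflections at different distances. The sample rate, chirp frequencies and delay below are illustrative assumptions, not values from the Cornell paper:

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz; a typical earphone audio rate (assumption)

def make_chirp(duration=0.01, f0=16_000, f1=20_000):
    """Generate a near-ultrasonic linear chirp to emit from the speaker."""
    t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    k = (f1 - f0) / duration  # sweep rate in Hz per second
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

def echo_profile(recorded, chirp):
    """Cross-correlate the microphone signal with the emitted chirp.
    Peaks in the result correspond to reflections arriving after
    different round-trip delays (e.g. off the cheek or jaw)."""
    return np.abs(np.correlate(recorded, chirp, mode="valid"))

# Simulate a recording: the chirp returns delayed and attenuated.
chirp = make_chirp()
delay = 24  # samples of round-trip delay (illustrative)
recorded = np.zeros(len(chirp) + 100)
recorded[delay:delay + len(chirp)] += 0.3 * chirp

profile = echo_profile(recorded, chirp)
print("strongest echo at sample", int(np.argmax(profile)))  # → 24
```

In a real system, frames of these echo profiles, shifting as the cheeks move, would be the input features that a trained deep learning model maps to facial-expression parameters.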

According to researchers, EarIO can transmit those facial movements to a smartphone in real time. The device is also compatible with commercially available headsets for hands-free, cordless video calls.

Devices that track facial movements with a camera are “large, heavy and energy-hungry, which is a big issue for wearables,” said Cheng Zhang, principal investigator of the Smart Computer Interfaces for Future Interactions Lab, who led the team behind the project. “Also importantly, they capture a lot of private information.”

Facial tracking using acoustic technology can offer better privacy, affordability, comfort and battery life, Zhang added. By collecting sound instead of data-heavy images, the earable can communicate with a smartphone via a Bluetooth connection, keeping the user’s information private. By contrast, with images, the device would need to connect to a Wi-Fi network and send data back and forth to the cloud, making it more susceptible to hacking.

That said, the device in its current stage does come with a couple of limitations. For one, EarIO needs about 32 minutes of facial data training before it can be put to work, though researchers say that they aim to make it a plug-and-play device eventually.

The team also hopes to improve EarIO’s ability to tune out nearby noises and other disruptions in future iterations.

“The acoustic sensing method that we use is very sensitive,” said Ruidong Zhang, one of the paper’s co-authors. “It’s good, because it’s able to track very subtle movements, but it’s also bad because when something changes in the environment, or when your head moves slightly, we also capture that.”

As for battery life, the device currently lasts about three hours on a single charge. Future research will focus on extending its run time.

© 2024 MusicTech is part of NME Networks.