How New AI Headphones Translate Multiple Voices in Real Time

Forget robotic voices: this new headphone system translates multiple speakers at once, preserving each voice's tone and direction. It's a major step toward natural communication across languages, even in noisy public places.

Study: Spatial Speech Translation: Translating Across Space With Binaural Hearables. Image Credit: witsarut sakorn / Shutterstock

Tuochao Chen, a University of Washington doctoral student, recently toured a museum in Mexico. Chen doesn't speak Spanish, so he ran a translation app on his phone and pointed the microphone at the tour guide. But even in a museum's relative quiet, the surrounding noise was too much, making the text useless.

Various technologies have emerged lately promising fluent translation, but none has solved Chen's problem of noisy public spaces. Meta's new glasses, for instance, work only with an isolated speaker, and they play an automated voice translation only after the speaker finishes.

"Spatial speech translation" is an intelligent hearable system that translates speakers in the wearer’s auditory space, preserving the direction and unique voice characteristics of each speaker in the binaural output. (A) Two speakers have a conversation, and the wearable translates both in real-time, while maintaining their spatial and acoustic features. (B) In a crowded environment, the hearable uses binaural cues for directional translation, translating only the speaker from a specifc direction (e.g., where the wearer is looking) and ignoring other speakers in the environment. (C) The noise-canceling headset captures binaural input, processes the signals, and plays back the translated binaural speech in real time.

"Spatial speech translation" is an intelligent hearable system that translates speakers in the wearer’s auditory space, preserving the direction and unique voice characteristics of each speaker in the binaural output. (A) Two speakers have a conversation, and the wearable translates both in real-time, while maintaining their spatial and acoustic features. (B) In a crowded environment, the hearable uses binaural cues for directional translation, translating only the speaker from a specifc direction (e.g., where the wearer is looking) and ignoring other speakers in the environment. (C) The noise-canceling headset captures binaural input, processes the signals, and plays back the translated binaural speech in real time.

Chen and a team of UW researchers have designed a headphone system that translates several speakers at once, while preserving the direction and qualities of people's voices. The team built the Spatial Speech Translation system with off-the-shelf noise-cancelling headphones fitted with microphones. The team's algorithms separate the different speakers in a space and follow them as they move, translate their speech, and play it back with a 2-4 second delay.

The team presented its research on April 30 at the ACM CHI Conference on Human Factors in Computing Systems in Yokohama, Japan. The code for the proof-of-concept device is available for others to build on. "Other translation tech is built on the assumption that only one person is speaking," said senior author Shyam Gollakota, a UW professor in the Paul G. Allen School of Computer Science & Engineering. "But in the real world, you can't have just one robotic voice talking for multiple people in a room. For the first time, we've preserved the sound of each person's voice and the direction it's coming from."


The system has three innovations. First, when turned on, it immediately detects the number of speakers in an indoor or outdoor space.

"Our algorithms work a little like radar," said lead author Chen, a UW doctoral student in the Allen School. "So they're scanning the space in 360 degrees and constantly determining and updating whether there's one person or six or seven."

The system then translates the speech, maintaining the expressive qualities and volume of each speaker's voice, while running entirely on devices with an Apple M2 chip, such as laptops and the Apple Vision Pro. (The team avoided cloud computing because of privacy concerns around voice cloning.) Finally, when speakers move their heads, the system continues to track the direction and qualities of their voices as they change.
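The stages described above can be sketched as a per-speaker pipeline: each separated stream is translated, re-synthesized in that speaker's own voice, and panned back to its original direction. The callables `translate`, `synthesize`, and `spatialize` are hypothetical stand-ins for the real models, not the team's API.

```python
from dataclasses import dataclass

@dataclass
class SpeakerStream:
    angle_deg: float  # estimated direction of arrival for this speaker
    audio: list       # separated mono speech for this speaker

def translate_spatially(streams, translate, synthesize, spatialize):
    """Sketch of the pipeline the article describes: speaker separation
    happens upstream; each separated stream is then translated, rendered
    in a voice matching the original speaker, and placed back at the
    speaker's direction in the binaural output."""
    out = []
    for s in streams:
        text = translate(s.audio)                    # speech-to-translated-text
        speech = synthesize(text, voice=s.audio)     # voice-preserving synthesis
        out.append(spatialize(speech, s.angle_deg))  # restore spatial direction
    return out

# Usage with dummy stand-in models:
streams = [SpeakerStream(30.0, ["hola"]), SpeakerStream(-60.0, ["bonjour"])]
result = translate_spatially(
    streams,
    translate=lambda a: {"hola": "hello", "bonjour": "hi"}[a[0]],
    synthesize=lambda text, voice: text.upper(),
    spatialize=lambda speech, angle: (speech, angle),
)
print(result)
```

Running each stage per speaker, rather than mixing everyone into one stream, is what lets the output keep one distinct voice and direction per person.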

The system functioned when tested in 10 indoor and outdoor settings. And in a 29-participant test, the users preferred the system over models that didn't track speakers through space.

In a separate user test, most participants preferred a delay of 3-4 seconds, since the system made more errors when translating with a delay of 1-2 seconds. The team is working to reduce that latency in future iterations. The system currently handles only commonplace speech, not specialized language such as technical jargon. For this paper, the team worked with Spanish, German, and French, but previous work on translation models has shown they can be trained to translate around 100 languages.
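The latency-accuracy tradeoff the participants weighed follows from how chunked streaming translation works: a word spoken at the start of a chunk must wait for the rest of the chunk to be captured, then for the chunk to be processed, before its translation can play. The compute figure below is an assumed number for illustration, not one reported in the study.

```python
def perceived_delay(chunk_s, compute_s):
    """Worst-case lag between a word being spoken and its translation
    playing back, for a translator that buffers audio in fixed chunks:
    buffering time for the chunk plus processing time for that chunk."""
    return chunk_s + compute_s

# With an assumed ~0.5 s of on-device compute per chunk:
print(perceived_delay(1.0, 0.5))  # short chunks: low lag, less context
print(perceived_delay(3.0, 0.5))  # long chunks: more context, ~3-4 s lag
```

Longer chunks give the translation model more surrounding context, which is consistent with the study's finding that 1-2 second delays produced more errors than 3-4 second ones.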

"This is a step toward breaking down the language barriers between cultures," Chen said. "So if I'm walking down the street in Mexico, even though I don't speak Spanish, I can translate all the people's voices and know who said what."

Qirui Wang, a research intern at HydroX AI who was a UW undergraduate in the Allen School while completing this research, and Runlin He, a UW doctoral student in the Allen School, are also co-authors on the paper. A Moore Inventor Fellow award and a UW CoMotion Innovation Gap Fund funded this research.

Journal reference:
  • Tuochao Chen, Qirui Wang, Runlin He, and Shyamnath Gollakota. 2025. Spatial Speech Translation: Translating Across Space With Binaural Hearables. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI '25). Association for Computing Machinery, New York, NY, USA, Article 352, 1–19. DOI: 10.1145/3706598.3713745, https://dl.acm.org/doi/10.1145/3706598.3713745
