Virtual reality work earns award for acoustics researcher

Ellen Peng, PhD

Waisman researcher Ellen Peng, PhD, is the recipient of an Editor’s Award from the American Speech-Language-Hearing Association (ASHA) for her work on the impact of room acoustics on listener comprehension. Her article “Listening Effort by Native and Nonnative Listeners Due to Noise, Reverberation, and Talker Foreign Accent During English Speech Perception,” which is based on work she did as a doctoral student of architectural engineering at the University of Nebraska-Lincoln, was published in the Journal of Speech, Language, and Hearing Research.

“An Editor’s Award is truly a high honor, with selection limited to the most impactful works that meet the highest quality standards in research design and presentation,” says ASHA chief staff officer for science and research Margaret Rogers, PhD, CCC-SLP.

Peng is now a research associate in the Waisman Center’s Binaural Hearing and Speech Lab, which is led by Waisman investigator and professor of communication sciences and disorders Ruth Litovsky, PhD. “I didn’t even know that the award existed until I was the recipient!” Peng says. “I think it is a great idea to commend the work we do as researchers.”

The experiment, which Peng conducted with her then-mentor Lily Wang, PhD, used acoustic virtual reality to simulate different indoor environments of various noise levels to understand how those levels affect listening effort by adults. “Humans communicate with one another mostly in indoor spaces that can be noisy and reverberant,” Peng says. “These virtual environments mimic classrooms ranging from fairly quiet and acoustically ‘dry’ to very noisy and reverberant.”

Peng and Wang invited 115 listeners to perform a speech comprehension task, in which participants listened to conversations or short monologues and answered questions related to the content while being immersed in simulated acoustic environments. Afterward, listeners rated how effortful the task was and how successful they felt in completing the task.

A non-native English speaker herself, Peng says she “was very interested in knowing what poor classroom acoustics will do to our ability to understand speech.”

The researchers found that, in most indoor spaces, non-native listeners rated speech understanding to be more difficult than native listeners did. In general, listeners reported greater effort and less success in completing an assigned task as noise level and reverberation severity increased. Peng says this is a strong indication that good acoustic design is critical for learning spaces. “Adverse room acoustics can make listening to speech more difficult and require more effort to focus, particularly for those who are non-native speakers,” she adds.

According to the ASHA website, a single article from each of four ASHA journals is picked annually by the journals’ editors “on the basis of experimental design, teaching-education value, scientific or clinical merit, contribution to the professions, theoretical impact, and/or other indices of merit.” The winning manuscripts were published the year before. Peng received notification in September that her April 2019 study was one of the 2020 picks.

Today, Peng continues her work on speech perception and processing in Litovsky’s lab at the Waisman Center. She recently received a grant from the Hearing Health Foundation to study the effects of reverberant environments on brain activity in children with hearing loss. In this new project, which stems directly from the work for which she won the ASHA award, Peng hypothesizes that reverberation increases listening effort more for children with hearing loss.

Peng hopes the impact of her research is two-fold: first, she’d like her work to encourage school administrators and classroom engineers to think about what best facilitates learning; and second, she’d like to encourage more researchers in hearing science to use VR technologies in their work.

“Acoustic virtual reality is very similar to the idea of what a VR headset can do for us, but limited to sounds only for now,” she says. “By using VR, we can bring the real world into sound booths and study behaviors in very controlled lab environments. That way, the outcomes will lead to a better understanding of how we listen when we engage in everyday activities.

“Perhaps one day, we can bring the lab to our patients with hearing loss and use VR as part of rehabilitative care to help them better navigate the real world. That’s what I’m currently continuing to do here at the Waisman Center.”