By Peter Jurich, Waisman Communications
If you’ve ever seen a graphical representation of a sound, you are probably familiar with what it looks like: hundreds of steep, tightly packed peaks and valleys, all of different heights, moving above and below a common line of symmetry that cuts horizontally through the middle. “When a sound travels through the air, it basically sets the molecules around us in motion, using sound pressure to create sort of a wave,” says Waisman researcher Michaela Warnecke, PhD. “If I used a microphone to record that sound and print it out on a piece of paper, you would see a graph that plots the amount of sound pressure variation across time.”
Warnecke is the first author on a new paper out of the Binaural Hearing & Speech Lab, run by Waisman investigator Ruth Litovsky, PhD. The paper examines which cues within a sound wave help listeners determine whether a sound is in motion. Specifically, the researchers focus on a cue that is accessible to listeners with normal hearing, or NH, but inaccessible to listeners with cochlear implants, or CIs. “The impacts of temporal fine structure and signal envelope on auditory motion perception” was published in the journal PLOS ONE in August 2020.
“Navigating our natural acoustic environment is a complex task that depends on the successful identification and – more importantly – localization of sound sources,” the paper says. “Naturally occurring sounds are often in motion, either because the sound source moves, or the listener does.”
In the paper, the authors find that when an acoustic cue called temporal fine structure (TFS) is removed from a sound wave, listeners with NH have trouble figuring out whether a sound is moving. This is significant because the processing algorithms in CIs, in their current form, discard much of the TFS as part of how they encode sound. “In fact, they all but remove TFS, so listeners with CIs cannot use information that might be embedded within the signal’s TFS,” Warnecke says. “We confirmed that access to low-frequency TFS is important to distinguish sound motion. We know that low-frequency TFS is important for many other things, too, like understanding speech when there is background noise.”
TFS is a collection of frequencies perceived by listeners over time. In the image below, which shows a waveform of the word “blanket,” the TFS comprises all the black vertical lines – the information removed by CI processing algorithms. The red lines tracing the wave on either side are called the signal envelope, or ENV, which illustrates the overall shape of the wave by highlighting its extremes. Unlike the TFS, this information is preserved through CI usage.
“The TFS in our study is what is often referred to as ‘the carrier’, which ‘carries’ that envelope,” Warnecke says. “In other words, it’s all the stuff between the red envelope lines. In very simple terms, you could think of the carrier, or TFS, as a collection of frequencies that are modulated in amplitude by the signal envelope. Together, they make up a complex sound that we can process in our auditory system.”
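The carrier-and-envelope relationship Warnecke describes can be sketched numerically. A standard way to separate the two (one common approach, not necessarily the exact method used in the paper) is the Hilbert transform: its analytic signal has a magnitude that traces the envelope and a phase that carries the fine structure. A minimal illustration with a toy amplitude-modulated tone:

```python
import numpy as np
from scipy.signal import hilbert

fs = 16000                       # sample rate in Hz
t = np.arange(0, 0.5, 1 / fs)    # half a second of time samples

# Toy "complex sound": a 500 Hz carrier modulated by a slow 4 Hz envelope
true_env = 1 + 0.8 * np.sin(2 * np.pi * 4 * t)
signal = true_env * np.sin(2 * np.pi * 500 * t)

# Hilbert decomposition: magnitude -> envelope (ENV), phase -> fine structure (TFS)
analytic = hilbert(signal)
envelope = np.abs(analytic)          # the red outline (ENV)
tfs = np.cos(np.angle(analytic))     # the unit-amplitude carrier (TFS)

# Multiplying envelope by carrier recovers the original waveform
reconstructed = envelope * tfs
```

Because the analytic signal's real part is the original waveform, `envelope * tfs` reconstructs `signal` exactly, which is the sense in which the envelope "modulates" the carrier.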
To conduct this study, Warnecke and her team created a series of stimuli with two different kinds of envelope that either contained low-frequency TFS or did not. Then, NH listeners sat in a sound-attenuated chamber and listened to these sounds, which were presented as both stationary and moving within a 120-degree arc around them. “We asked our listeners to use a pointer to indicate where they heard the sound and could thereby calculate how our sound manipulations affected listeners’ ability to tell the motion of a sound,” Warnecke says.
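Removing the original TFS while keeping the envelope can be sketched in the style of a noise vocoder: extract the envelope, then pair it with a noise carrier so that the fine structure of the original sound is gone. This is only an illustrative analogue of CI-style degradation, not the paper's actual stimulus-generation procedure:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs = 16000
t = np.arange(0, 0.5, 1 / fs)

# Toy original sound with its own TFS (a 500 Hz tone under a 4 Hz envelope)
original = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)

# Keep the envelope, but discard the original fine structure by
# replacing the carrier with normalized broadband noise
envelope = np.abs(hilbert(original))
noise = rng.standard_normal(len(t))
degraded = envelope * (noise / np.max(np.abs(noise)))
```

The `degraded` signal preserves the slow amplitude fluctuations a CI transmits while destroying the low-frequency TFS cue the study found listeners rely on.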
With this study, she and her team “wanted to show that degrading an acoustic signal in a way that would mimic challenges experienced by listeners with CIs can cause a drop in performance, compared to when the signal is not degraded. This shows the impact of certain sound processing algorithms embedded in CIs, which may make it more difficult for listeners with hearing impairments to navigate their surroundings.”
Motion perception is a relatively new subfield in auditory studies. “When I looked into the literature on auditory motion perception, I found that we don’t know much about what components of a sound help us distinguish between stationary and moving sounds,” Warnecke says. She adds that once more is known about how listeners with NH perceive moving sounds, researchers can investigate how that perception is altered when hearing is impaired.
She also notes that it’s important to shift the focus of these studies to individuals with hearing impairments because when individuals lose their hearing, they are often prescribed devices like CIs that return only part of their hearing. “The sound may not be as clear or it may be difficult to tell the identity of some sounds, especially in louder environments, such as crowded spaces,” she says. “By evaluating these differences, we can suggest ways in which the sound processing in devices could be adapted to aid listeners with hearing impairments to represent the acoustic environment more faithfully.”
Warnecke says this study was inspired by previous work in the lab in which CI listeners showed difficulty in determining whether a sound was stationary or in motion. “CI listeners were biased to identifying sounds as moving, even when they were stationary – a bias that was not consistently observed for listeners with normal hearing,” she says. “As CI listeners had trouble distinguishing stationary and moving sounds, but listeners with NH didn’t, the results of this former CI study suggested that components of the sound which are not available to CI patients, namely low-frequency TFS, may be a cue to help distinguish stationary from moving sounds.”
The Binaural Hearing & Speech Lab investigates the challenges that listeners with hearing impairments experience. The lab studies components of the sound that are not well transmitted, and how the perception of sound is different between listeners with normal and impaired hearing.
Warnecke hopes the study will inspire more interest in auditory motion perception. “The majority of sounds around us are in motion, either because the sound source moves – such as a car driving by or a twittering bird flying overhead – or because we move our heads,” she says. “By testing hypotheses in more natural settings, we can learn a lot about how ecologically relevant stimuli and situations impact our perception.”