How speaking is a lot like playing darts

By Mason Muerhoff, Waisman Center Communications

Winning a game of darts requires being accurate. A player who can pick a spot on the board, focus their mind, and execute the specific motor action needed to land the bullseye will win the game. And if they miss, well, practice makes perfect.

The same goes for speech, says Waisman investigator and assistant professor of communication sciences and disorders Carrie Niziolek. Producing accurate speech requires motor dexterity, she says, meaning that the sound that comes out of your mouth is the sound you intended to make.

But missing, just as in darts, is all too common.

Niziolek studies speech in people who have aphasia – a condition that limits a person’s ability to comprehend or create language due to brain damage. Niziolek came to the Waisman Center as an investigator in 2017 from Boston University.

A singer, pianist, and someone who has studied French, Niziolek says she has always been interested in sound and language.

“The excitement about learning how a language is acquired when I was old enough to understand what’s going on, that made me want to investigate it,” she says, reflecting on the summer she returned from France with the ability to watch French movies without subtitles for the first time.

Niziolek’s research focuses on the most basic unit of language, called the phoneme ‒ a single speech sound such as “buh” or “guh.” She looks at the interactions between those small scale utterances and the higher level cognition that creates them.

“When we are speaking, we can hear our own voices,” says Niziolek. “Even if you aren’t paying attention to it, hearing your voice coming back to your own ears affects the way you speak.”

This loop is called the auditory feedback system. Niziolek probes it by testing people wearing headphones: she asks them to produce a certain word or sound, but alters how their voice comes back to them through the headphones.

“Maybe we’ve changed the pitch to be higher, maybe we’ve changed the vowel to make it a different vowel,” Niziolek says. “And people are sensitive to that, and will actually change what they say.”
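Real-time voice alteration requires sophisticated signal processing, but the basic idea of a pitch shift can be sketched in a few lines. The sketch below is illustrative only – it uses naive resampling on a synthetic tone, which changes the signal's duration, whereas laboratory feedback systems use methods that preserve timing – and none of the names or values come from Niziolek's actual software.

```python
# Illustrative sketch (not the lab's software): shifting the pitch of a
# signal by naive resampling, demonstrated on a synthetic tone.
import math

RATE = 16000  # sample rate in Hz (assumed)

def sine_wave(freq_hz, seconds):
    """Generate a pure tone as a list of samples."""
    n = int(RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

def shift_pitch(samples, factor):
    """Raise (factor > 1) or lower (factor < 1) pitch by resampling.
    Note: this also shortens or lengthens the signal, unlike the
    duration-preserving methods used in real feedback experiments."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between neighboring samples
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += factor
    return out

def estimate_freq(samples):
    """Estimate frequency by counting positive-going zero crossings."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings * RATE / len(samples)

tone = sine_wave(220.0, 0.5)     # a 220 Hz "voice"
higher = shift_pitch(tone, 1.2)  # played back 20% higher
print(round(estimate_freq(tone)), round(estimate_freq(higher)))
```

In a real experiment the altered signal is fed back through the headphones with only milliseconds of delay, so the speaker hears the shifted version as their own voice.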

Undergraduate researcher Claire Manske working in Niziolek’s hearing lab

This system can become compromised in aphasia, making it harder to hear when you’ve made a mistake and to correct it.

“If someone doesn’t have the ability to hear the difference between something that is better or worse per se, can we show someone in a different modality? Can we show them visually?” Niziolek explains.

So how do you transform a sound into a sight? Enter Voystick.

Voystick is an experimental interface synced with the code for the classic arcade game “Pong.” Niziolek wanted to build a speech game that could show players how close they are to making a specific sound.

It works by analyzing the microphone signal for “formants” – resonant frequencies that reflect the player’s tongue position and correspond to particular vowel sounds. In real time, the computer translates the formant information into movement commands for the paddle on the screen. Blocking the shots that the computer lobs at you requires making specific vowel sounds up and down a scale.
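The heart of such an interface is a mapping from an acoustic measurement to a screen position. As a rough sketch – the formant range, screen size, and function name below are assumptions for illustration, not details of the Voystick software – a first-formant (F1) estimate could drive the paddle like this:

```python
# Hypothetical sketch of the core mapping: an estimated first-formant (F1)
# frequency becomes a vertical paddle position. The ranges below are
# illustrative, not values from Voystick.
F1_LOW, F1_HIGH = 300.0, 800.0  # rough F1 span from "ee"-like to "ah"-like vowels
SCREEN_HEIGHT = 480             # pixels; top of screen is y = 0

def f1_to_paddle_y(f1_hz):
    """Map an F1 estimate (Hz) to a paddle y-coordinate, clamped to the screen."""
    # normalize F1 into [0, 1]: low F1 (high tongue) -> top, high F1 -> bottom
    t = (f1_hz - F1_LOW) / (F1_HIGH - F1_LOW)
    t = max(0.0, min(1.0, t))  # clamp noisy or out-of-range estimates
    return int(t * SCREEN_HEIGHT)

# A high-tongue vowel like "ee" keeps the paddle near the top of the screen,
# while an open vowel like "ah" sends it toward the bottom.
print(f1_to_paddle_y(320), f1_to_paddle_y(750))
```

Clamping matters in practice: formant trackers produce noisy estimates, and the paddle should stay on screen even when a measurement jumps out of range.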

The idea, Niziolek explains, is that if you train someone to associate a particular motor action with a visual cue, you can help a person with a compromised auditory feedback system make their sounds more consistent, reducing variability.

The more accurate you are at producing a vowel sound, the easier it will be for you to spar back and forth with the computer, serving up fastballs and blocking clutch shots. Eventually, Niziolek says, that may translate into more accurate speech production.

Niziolek is developing a prototype of the game that would function with an X and a Y axis. That could integrate with a game like Pac-Man, where producing a certain sound would translate into a left, right, up, or down movement command, letting you dodge ghosts and eat cherries all while practicing your speech sounds.
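One way such a two-axis scheme could work – purely a hypothetical sketch, with thresholds and names invented for illustration rather than taken from Niziolek's prototype – is to treat the first formant (tongue height) as the vertical axis and the second formant (tongue frontness) as the horizontal one:

```python
# Hypothetical sketch of a two-axis version: F1 roughly tracks tongue height
# and F2 roughly tracks tongue frontness, so a pair (F1, F2) can select one
# of four movement directions. Thresholds are illustrative only.
F1_MID = 550.0   # Hz; roughly separates close ("ee"-like) from open ("ah"-like) vowels
F2_MID = 1500.0  # Hz; roughly separates back ("oo"-like) from front ("ee"-like) vowels

def formants_to_direction(f1_hz, f2_hz):
    """Pick a Pac-Man-style movement command from an (F1, F2) estimate:
    whichever axis deviates more (proportionally) from its midpoint wins."""
    dv = (f1_hz - F1_MID) / F1_MID   # vertical deviation: positive = open vowel
    dh = (f2_hz - F2_MID) / F2_MID   # horizontal deviation: positive = front vowel
    if abs(dv) >= abs(dh):
        return "down" if dv > 0 else "up"
    return "right" if dh > 0 else "left"

print(formants_to_direction(300, 1500))  # a close vowel steers upward
```

A scheme like this asks the player to hit four distinct vowel targets instead of sliding along one continuum, which is what would let a Pac-Man-style game exercise a wider range of speech sounds.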

Niziolek’s long term goal for the game is to develop it to a level where it can be used as an intervention for individuals with aphasia or other speech challenges.

“There are lots of possibilities [for the game],” Niziolek says. “But in its simplest form, this can actually create learning. It can change the way that somebody speaks over multiple iterations of practice.”