By Emily Leclerc, Waisman Science Writer
People say between 150 and 200 words a minute on average during a casual conversation. That breaks down to several syllables a second and adds up to hundreds, even thousands, of sentences. For many people, the ability to speak is as reflexive as breathing. But what is going on behind the scenes? How are those most basic sounds formed and put together into words? How does language grow in complexity into full-length sentences that convey an endless variety of meanings?
At the Waisman Center, speech and language research has been a priority and focus area since the center’s opening in 1973. Speech and language researchers are working to better understand speech and language development in typically developing individuals as well as in individuals with intellectual and developmental disabilities (IDDs), with the goal of facilitating better and more effective communication.
The beginning of speech and language research at Waisman
Waisman has been at the forefront of speech and language research for decades. Three researchers who were among the first to join the center after its opening in 1973 developed seminal technology that changed the way scientists studied and understood speech and language, particularly in children. Jon Miller, PhD, Robin Chapman, PhD, and Larry Shriberg, PhD, all emeritus professors of communication sciences and disorders (Miller is also a speech-language pathologist), created two software programs: SALT (Systematic Analysis of Language Transcripts) and PEPPER (Program to Examine Phonetic and Phonologic Evaluation Records). The programs could illustrate typical vocabulary development and discern what was different in the language development of children with IDDs. SALT and PEPPER also led to a better understanding of the causes of speech sound disorders and to databases of condition descriptions that allow clinicians to make easier and more accurate diagnoses. The SALT program is still used today to study speech and language.
How infant noises develop into words and language
Much of Waisman’s current speech and language research rests on the foundation laid by Miller, Chapman, and Shriberg. One question SALT and PEPPER did not tackle, though, is how infants learn language in the first place. On average, most infants speak their first words by age one, and by two to three years of age, toddlers are putting coherent sentences together. That learning process is still not well understood.
“Part of what interested me initially is that language is arguably the most complicated thing we have to learn as humans,” says Jenny Saffran, PhD, Waisman investigator and Distinguished Professor of Psychology. “And yet, it is learned by little tiny people. And in fact, it’s learned better by infants and little kids than it is by you and me. Why is it that babies and little kids who aren’t particularly good at other kinds of complex learning, are really good at this?”
Saffran, as a part of her Infant Learning Lab, is investigating how babies learn their native language. What exactly do words mean to babies and what in their environment helps them develop those meanings? How do babies learn to use predictions in their language and what computations are they using to do that? Her work in pursuit of answers to these questions can lead to a better understanding of typical and atypical development. “This allows us to develop new hypotheses about the different kinds of developmental trajectories that infants and young children who are not typically developing might be following,” Saffran says.
Saffran also co-leads the Little Listeners Lab with Susan Ellis Weismer, PhD, Waisman investigator and professor emerita of communication sciences and disorders. This lab is focused on understanding language learning in toddlers with autism and how it differs from that of typically developing children. In her own lab, Ellis Weismer is investigating whether young children with autism generate expectations or make predictions differently than neurotypical toddlers do. This information can help identify the cognitive functions that influence communication patterns. “We’re trying to get at these underlying cognitive processes and see where they’re having difficulties so it might give us intervention implications,” Ellis Weismer says. Saffran and Ellis Weismer’s work can give researchers and clinicians more effective methods for facilitating language skills in children with IDDs.
The impact of IDDs on language development
As kids continue to grow, their speech and language skills grow alongside them. Language becomes more complex as vocabulary and the understanding of grammar and syntax expand. Audra Sterling, PhD, Waisman investigator and associate professor of communication sciences and disorders, is investigating how that development is influenced by a variety of IDDs. A child language researcher who views her work through the lens of lifespan development, she focuses on language learning in kids with Down syndrome, fragile X syndrome, and autism.
Sterling uses her findings to help design and build better assessments for kids with IDDs. These assessments are used in many research studies as well as in clinical settings, and they often focus heavily on a child’s weaknesses. “Many of the assessments we have in the field highlight a lot of the weaknesses but there are very important strengths within each person and globally, within each syndrome,” Sterling says. “So, how can we make sure that we’re mindful of the strengths as well as the weaknesses? A lot of research is coming out that shows that strength-based intervention approaches are so much better for the individual.”
The hope is that her work will lead to the development of more effective interventions that parents can provide their kids to help improve their speech and language skills. “Being able to communicate with people in your environment is a basic human right,” Sterling says.
Studying language development specifically in cerebral palsy
Beyond Down syndrome, fragile X, and autism, Waisman is also home to an investigator who studies speech and language development specifically in cerebral palsy (CP). Katherine Hustad, PhD, professor of communication sciences and disorders and Waisman investigator, has been with Waisman since 2003 and has made several important leaps forward in improving the ability of children with cerebral palsy to communicate. “When I started this work 20 years ago, I was one of the only people in the country and really one of very few in the world who were working on communication in individuals with cerebral palsy,” Hustad says. “Now, fortunately, several other groups around the country and the world are also studying speech, language and communication in children with CP, and I’m lucky to be collaborating with several groups internationally.”
Hustad leads the Wisconsin Intelligibility, Speech and Communication Laboratory (WISC Lab) at the Waisman Center. In the WISC Lab, Hustad has been conducting an NIH-funded longitudinal study of speech and language development in children with CP for the past 18 years, following the same group of children, who visit the Waisman Center each year, from early childhood into young adulthood. This work has identified subgroups of children based on their speech and language characteristics and used these profiles to understand how children develop in their communication abilities.
A key goal has been to understand early predictors of later speech development so that this information can be used to develop treatments that change later outcomes. “It has been an honor and a privilege to follow the life course of this group of about 100 children (now young adults) with CP, to see how their speech and language grow and change with time,” says Hustad. “While we were not able to do much to help our initial cohorts of children with CP because our mission was to first understand how development unfolds, we are now working on our next wave of studies where we are launching speech treatment using augmentative and alternative communication tools for groups of new study participants with specific speech/language profiles.”
Hustad’s work has found that children with CP who are not producing words by two years old are more likely to have significant challenges communicating throughout life, and need augmentative and alternative communication systems and strategies very early in childhood to support language development and ensure social participation. Additionally, the more intelligible children are at younger ages, the better their intelligibility will be as they get older. Hustad’s work has shown that children with CP continue to make changes in their speech development well into adolescence, much longer than previously believed. “There are so many tools and strategies to support communication in those with CP. Difficulty talking does not have to mean difficulty communicating. There is an important difference,” Hustad says. “The Waisman Center has wonderful services through the Communication Aids and Systems Clinic to support children who have trouble producing understandable speech.”
Living in a bilingual house
But what about kids, with or without an IDD, who are growing up in a bilingual household? Are the interventions that work with monolingual speakers as effective with bilingual kids? Margarita Kaushanskaya, PhD, Waisman investigator and professor of communication sciences and disorders, has dedicated her lab to uncovering the answers to those questions.
At the moment, there is little research on the best way to treat speech and language issues in bilingual children with IDDs. And with more than 20% of the United States population being bilingual, that is a particularly hefty gap in care. “There is a smattering of studies, maybe three or four randomized controlled trials, looking at whether it is better for bilingual children to be treated in both languages versus just one. So honestly, we have no idea what to do with these kids,” Kaushanskaya says.
In her lab, Kaushanskaya is following two paths. The first is studying language development in young Spanish-speaking children with autism for whom Spanish is the primary language spoken at home. She is working to gain a better understanding of whether it is beneficial for parents of a child with a neurodevelopmental disability to mix languages. “Should you stick with one language? And if you do stick with one, for how long should you do it before switching over to the other language, assuming the parents want their child to be bilingual?” Kaushanskaya says. The second path seeks to understand the language development profile of the bilingual late talker. Because their developmental trajectory is not well understood, little is known about how to structure their environment to optimize development.
Kaushanskaya’s ultimate goal is to use the information gleaned from her research to build more effective interventions and therapies for bilingual kids with IDDs. That way the kids can learn to communicate effectively in both of their languages. “Once we figure out our end, what the experimental stuff tells us, then I think we can collaborate with good colleagues of mine to do some really nice randomized controlled trial intervention work to see what works for these kids,” Kaushanskaya says.
How the brain makes and learns from sound
The overall development of language is a major focus at the Waisman Center. But it is easy to overlook that producing the sounds that become speech requires muscles and precise muscle coordination. The speech production motor system is likely one of the most complicated and precise systems in the body.
The brain controls the movement of the face, the jaw, the tongue, and the mouth in a coordinated effort to make the specific combinations of sounds that form words and, ultimately, sentences. Many of these movements occur on the scale of millimeters and within milliseconds. So, how does that speech motor system work? How do we use the information that our speech motor system sends back to us? The answers to these questions build the foundation upon which complex language development rests. Two researchers at Waisman are focused on unraveling the mysteries of the speech motor system.
Ben Parrell, PhD, Waisman investigator and assistant professor of communication sciences and disorders, and Caroline Niziolek, PhD, also a Waisman investigator and assistant professor of communication sciences and disorders, run two independent but deeply interconnected labs that concentrate on the neural basis of speech motor control and how people use the feedback from speaking to refine speech movement.
“The focus of my lab is on how the brain produces movement, in general, with a particular emphasis on how the brain produces movement for speech production,” Parrell says. Alongside that, Parrell is developing a computational model of how the central nervous system controls the complex anatomy of the vocal tract. Such a model offers an easier way to study these small, precise structures. Like his colleagues, he hopes to provide foundational information that can be used to develop treatments and interventions for people whose speech motor systems have been disrupted.
Alongside Parrell, Niziolek investigates how people use the feedback from the speech motor system to adjust and improve speech movement over time. The sensory information that is gathered from the motion of the tongue, jaw, mouth, and face is used by the brain to revise speech but researchers are not entirely sure how the brain does it. “If we can figure out how this system uses feedback,” Niziolek says, “then we can use that to help individuals with disorders where that feedback could be helpful to them.”
While Parrell and Niziolek pursue their own research goals, they also work closely together on projects that combine their expertise. At the moment, they are collaborating on a grant investigating how certain types of speech manipulation can be used to improve intelligibility. This work has implications for conditions like Parkinson’s disease, in which speech can become progressively harder to understand. Together, their two labs make up the Speech Motor Neuroscience Group, where they share resources and data and assist each other in training new students. “Generally, what we are trying to understand is what are the parameters of the speech and language production system that we can influence that are going to improve the quality of life for people with developmental and neurogenic disorders,” Parrell says.
From afar, the speech and language field seems rather small. But step closer and look at all of the nuances of speech and language, and the field blooms in size. There are so many aspects of talking, forming words, and communicating that may be hard to discern from a distance. Speech and language are a crucial part of the human experience, and the Waisman Center’s core of dedicated researchers strives to understand this fundamental part of human nature and how it can be enhanced for everyone, including those with intellectual and developmental disabilities.
“I think improving communication is something that we can do that has rippling effects in the lives of people with disabilities or with any kind of disruption to speech,” Niziolek says. “It is kind of at the heart of everything and is something small that we can do that can have a big impact on people’s lives.”