Caroline A. Niziolek, PhD

Position title: Assistant Professor, Communication Sciences and Disorders

PhD, Massachusetts Institute of Technology

Contact Information:

Waisman Center
1500 Highland Avenue
Room 485
Madison, WI 53705
608-890-0192
cniziolek@wisc.edu
Brain, Language, & Acoustic Behavior (BLAB) Lab

Research Statement

My research centers on three questions in spoken communication and its neural underpinnings:

  1. How does what we hear affect what we say? Whenever we speak, our message is transmitted not only to our audience, but also to our own ears. Auditory feedback—what we hear when we speak—enables us to learn and maintain speaking skills and to rapidly correct errors in our speech. By combining neuroimaging (fMRI, M/EEG, ECoG) with behavioral measures, my research has characterized how the brain processes speech feedback and what purpose this processing serves in communication.
  2. How do our cognitive and linguistic goals affect our speech? How do the higher-level properties that make up language influence the low-level sensorimotor control that produces movement and sound? Speech is a motor act like reaching or grasping, but unlike hand and limb movements, speech produces an auditory signal whose primary purpose is communication. My work investigates how our acoustic behavior depends on the particular communicative goal we are trying to achieve. For example, both vowels and pitch are important for speaking, but pitch is more important for singing, and this translates into greater neural sensitivity to the pitch of our own voice as we sing.
  3. How can we leverage these effects for speech learning and rehabilitation? My ongoing work studies how the feedback system functions in persons with aphasia, many of whom have deficits in speech production and error awareness. I am developing laboratory studies of speech skill learning that can be translated into training interventions for speakers like these, who have trouble consistently producing auditory targets. My proposed training consists of vocal games that map speech to a real-time visual display. These visual speech training games have the potential to be adapted into tools that improve speech production in individuals with speech impairment, including deaf speakers and children with developmental speech and language disorders.

Selected Publications

PubMed