Title: Adaptation to altered auditory feedback in stressed and unstressed syllables
Legend: Distribution of speakers’ mean change in the first formant frequency (F1) by word and syllable stress for Experiment 1 (left), in which auditory feedback of F1 was shifted up, and Experiment 2 (right), in which it was shifted down. Error bars show 95% confidence intervals.
Citation: Bakst, S., & Niziolek, C. A. (submitted). Does schwa have an auditory target? An altered auditory feedback study. Manuscript submitted to the Journal of the Acoustical Society of America.
Abstract: Schwa is cross-linguistically described as having a variable target. The present study examines whether speakers are sensitive to mismatches between their auditory feedback and their target when producing schwa. When speakers hear a version of their own speech in which formants have been altered, they change their motor plans online so that the altered feedback better matches the target. If schwa has no target, then feedback mismatches may not drive a change in production. In these experiments, participants spoke disyllabic words with initial or final stress while the auditory feedback of F1 was raised (Experiment 1) or lowered (Experiment 2) by 100 mels. Both stressed and unstressed syllables showed compensatory changes in F1. In Experiment 1, initial-stress words showed larger compensatory decreases in F1 than final-stress words; in Experiment 2, stressed syllables overall showed greater compensatory increases in F1 than unstressed syllables, regardless of syllable order. These results suggest that schwa does have a target, but that stress and syllable order may mediate the compensatory response.
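Note on the perturbation size: the abstract specifies a 100-mel shift in F1 but does not state which mel formula was used or the participants' F1 values. The sketch below is only illustrative; it assumes the common convention m = 2595 log10(1 + f/700) and example F1 values that are not taken from the study, and shows roughly how large a 100-mel shift is in Hz.

import math


def hz_to_mel(f_hz: float) -> float:
    """Convert a frequency in Hz to mels (assumed O'Shaughnessy convention)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)


def mel_to_hz(m: float) -> float:
    """Invert the mel conversion back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)


def shift_f1(f1_hz: float, shift_mels: float) -> float:
    """Apply a feedback perturbation of `shift_mels` mels to an F1 value in Hz."""
    return mel_to_hz(hz_to_mel(f1_hz) + shift_mels)


if __name__ == "__main__":
    # Illustrative F1 values (hypothetical, not from the study):
    # ~500 Hz for a mid-central vowel like schwa, ~700 Hz for a lower vowel.
    for f1 in (500.0, 700.0):
        up = shift_f1(f1, +100.0)    # Experiment 1: feedback raised by 100 mels
        down = shift_f1(f1, -100.0)  # Experiment 2: feedback lowered by 100 mels
        print(f"F1 = {f1:.0f} Hz -> +100 mels: {up:.0f} Hz, -100 mels: {down:.0f} Hz")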
About the Lab: The Niziolek lab studies how the auditory feedback system functions in persons with aphasia, many of whom have deficits in speech production and error awareness. The lab is developing laboratory studies of speech skill learning that can be translated into training interventions for such speakers, who have trouble consistently producing auditory targets. The proposed training consists of vocal games that map speech to a real-time visual display. These visual speech training games have the potential to be adapted into tools to improve speech production in individuals with speech impairment, including deaf speakers and children with developmental speech and language disorders.