Ben Parrell, PhD – Slide of the Week


Title: The FACTS Model of Speech Motor Control: Fusing State Estimation and Task‐Based Control

Legend: Simulating speech production with the FACTS model (Feedback Aware Control of Tasks in Speech). Figure 1 (left) shows the results of simulations with different types of sensory feedback available. Movements of the center of the tongue during production of the vowel sequence “uh‐ah‐ee” (/əɑi/) are shown. Variability is lowest when both auditory and somatosensory feedback are available (1B). Removing auditory feedback does not substantially alter the behavior (1C), but removing somatosensory feedback (1D,E) leads to larger errors. Figure 2 (right) shows the response of the model to mechanical (2A) and auditory (2B) perturbations. In Figure 2A, when a force is applied to the jaw during production of the bilabial consonant “b” in “aba”, the model makes compensatory adjustments to the movements of the lips to attain lip closure. This behavior is not seen for “d”, which involves producing a constriction with the tongue tip rather than the lips. In Figure 2B, the model makes a compensatory response to an external perturbation of the frequency of the first vowel formant (F1) during production of “uh” (/ə/), lowering F1 in response to an externally‐imposed F1 increase. Together, these results provide a qualitative match to human speech behavior and suggest that the structure of the FACTS model may provide a way to understand sensory feedback use in the human motor control system.
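One way to see why losing somatosensory feedback is more costly in this framework than losing auditory feedback: in an optimal estimator, each sensory channel contributes in proportion to its reliability, so removing the more acute channel degrades the state estimate more. The short sketch below makes this concrete for a single scalar state fused with two noisy measurements; the specific noise values are illustrative assumptions, not parameters of the FACTS model.

```python
# Illustrative sketch (assumed noise values, not FACTS parameters):
# fuse two noisy measurements of one scalar state and compare the
# resulting uncertainty when either sensory channel is removed.

def fused_variance(prior_var, channel_vars):
    """Posterior variance after optimally fusing independent channels."""
    precision = 1.0 / prior_var + sum(1.0 / v for v in channel_vars)
    return 1.0 / precision

prior = 1.0   # uncertainty before any sensory feedback arrives
aud = 0.20    # auditory noise variance (less acute channel, assumed)
som = 0.02    # somatosensory noise variance (more acute channel, assumed)

print("both channels:        ", fused_variance(prior, [aud, som]))  # ~0.018
print("auditory removed:     ", fused_variance(prior, [som]))       # ~0.020
print("somatosensory removed:", fused_variance(prior, [aud]))       # ~0.167

# Removing the less reliable (auditory) channel barely changes the
# estimate; removing the more reliable (somatosensory) channel
# inflates the uncertainty substantially, mirroring Figure 1C-E.
```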

Citation: Parrell, Benjamin, Vikram Ramanarayanan, Srikantan Nagarajan, and John Houde. “The FACTS Model of Speech Motor Control: Fusing State Estimation and Task‐Based Control.” bioRxiv, February 8, 2019, 543728. https://doi.org/10.1101/543728.

Abstract: We present a new computational model of speech motor control: the Feedback-Aware Control of Tasks in Speech (FACTS) model. This model is based on a state feedback control architecture, which is widely accepted in non-speech motor domains. The FACTS model employs a hierarchical observer-based architecture, with a distinct higher-level controller of speech tasks and a lower-level controller of speech articulators. The task controller is modeled as a dynamical system governing the creation of desired constrictions in the vocal tract, based on the Task Dynamics model. Critically, both the task and articulatory controllers rely on an internal estimate of the current state of the vocal tract to generate motor commands. This internal state estimate is derived from initial predictions based on an efference copy of the applied controls. The resulting state estimate is then used to generate predictions of expected auditory and somatosensory feedback, and a comparison between predicted feedback and actual feedback is used to update the internal state prediction. We show that the FACTS model is able to qualitatively replicate many characteristics of the human speech system: the model is robust to noise in both the sensory and motor pathways, is relatively unaffected by a loss of auditory feedback but is more significantly impacted by the loss of somatosensory feedback, and responds appropriately to externally-imposed alterations of auditory and somatosensory feedback. The model also replicates previously hypothesized trade-offs between reliance on auditory and somatosensory feedback in speech motor control and shows for the first time how this relationship may be mediated by acuity in each sensory domain. These results have important implications for our understanding of the speech motor control system in humans.
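The estimation scheme the abstract describes is, in essence, the predict-and-correct cycle of an observer in state feedback control: an efference copy of the motor command drives a forward prediction of the vocal tract state, predicted auditory and somatosensory feedback are generated from that predicted state, and the sensory prediction errors correct the estimate. The sketch below illustrates this cycle for a generic linear system; the matrices, noise levels, and Kalman-style update are illustrative assumptions, not the FACTS implementation.

```python
import numpy as np

# Minimal observer sketch (illustrative, not the FACTS implementation):
# a generic linear plant with two sensory channels standing in for
# "auditory" and "somatosensory" feedback.

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # state transition (assumed)
B = np.array([[0.0], [0.1]])             # control input matrix (assumed)
H = np.array([[1.0, 0.0],                # "auditory" channel observes position
              [0.0, 1.0]])               # "somatosensory" channel observes velocity

R = np.diag([0.05, 0.01])                # sensory noise (somatosensory more acute)
Q = 0.001 * np.eye(2)                    # motor/process noise

x_true = np.zeros((2, 1))                # actual vocal tract state
x_hat = np.zeros((2, 1))                 # internal state estimate
P = np.eye(2)                            # estimate covariance

for t in range(100):
    u = np.array([[np.sin(0.1 * t)]])    # motor command (arbitrary trajectory)

    # Plant evolves with motor noise.
    x_true = A @ x_true + B @ u + rng.multivariate_normal(np.zeros(2), Q).reshape(2, 1)

    # 1) Forward prediction from the efference copy of the command.
    x_hat = A @ x_hat + B @ u
    P = A @ P @ A.T + Q

    # 2) Predicted sensory feedback from the predicted state.
    y_pred = H @ x_hat

    # 3) Actual noisy sensory feedback from the plant.
    y = H @ x_true + rng.multivariate_normal(np.zeros(2), R).reshape(2, 1)

    # 4) Correct the state estimate with the sensory prediction error.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_hat = x_hat + K @ (y - y_pred)
    P = (np.eye(2) - K @ H) @ P

print("final estimation error:", np.abs(x_true - x_hat).ravel())
```

In this framing, the feedback-removal simulations in Figure 1 correspond to dropping a row of H (or inflating the corresponding entry of R), and the perturbations in Figure 2 correspond to injecting an external offset into x_true or y before the correction step.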

About the Lab: The Speech Motor Action + Control Lab investigates the human capacity to produce speech using behavioral, computational, and neurological methods. Our current projects focus on the role of the cerebellum in speech motor control and in speech disorders associated with cerebellar damage, use computational models to understand the architecture of the speech motor system, and investigate how speech motor control is updated and altered through various types of learning.
