PROJECTS of the Speech Production Laboratory (2013-present)


Embodied Phonetics

Most of what we know about the physical production of speech sounds is based on data collected from healthy young adults sitting in a chair in front of a microphone, while completing unchallenging speaking tasks in a quiet environment. Most of the time, most of us speak and converse under very different conditions. Embodied phonetics is a new research program that seeks to use cutting-edge research methods, instruments, and analyses to understand speech in the variety of embodied conditions in which we find ourselves. Embodiment includes not only gender, race, age, and native language, but also posture, health, background noise, attentional demands, and so much more. Embodied phonetics is an umbrella for the more specific research projects outlined below.


Children’s Speech: Production, Development & Technology

We have been carrying out federally funded research on the speech of children between kindergarten and 5th grade since 2015. We specialize in vocal tract imaging, articulatory-acoustic modeling, and longitudinal research design. This project also contributes to the development of novel software tools for collecting and analyzing multimodal data sets. In collaboration with the UCLA Speech Processing and Auditory Perception Laboratory, we develop cutting-edge speech technologies for children. We also collaborate with the IU Voice Physiology and Imaging Laboratory to investigate voice therapies for children with voice disorders.


We are currently recruiting children to participate in this study. Feel free to have a look at the Informed Consent Document and reach out to us if you have any questions or are interested in getting your child involved.


This project has been funded by NSF Grants #1551131 and #2006818.

The WASL software has been developed in part through this project.


Peripheral Speech Motor Control

Since 2023, we have been working to understand peripheral speech motor control: how signals in the brainstem, cranial nerves, and vocal tract musculature work together to produce speech sounds. In collaboration with Dr. Daniel Aalto (University of Alberta) and Dr. Hu Cheng (Indiana University Imaging Research Facility), we are developing brainstem fMRI, muscle fMRI, and muscle fiber tractography (from diffusion-weighted MRI) techniques and protocols that will help us better understand the relationship between speech motor control and articulatory phonetics.


Articulatory and Acoustic Phonetics of the World’s Languages

Since the founding of the Speech Production Laboratory in 2013, we have collaborated with faculty and students in the IU Department of Linguistics, as well as with colleagues at universities around the world, to study the articulatory and acoustic phonetics of the world’s languages.


Breath Support, Pulmonary Function, and Subglottal Acoustics

The lungs play a critical role in speech production and in music. They provide the airflow that generates the sound source for most speech sounds, as well as for singing and woodwind instrument performance, and they create an acoustic impedance that affects voice production, speech acoustics, and musical acoustics. We investigate breath support, pulmonary function, and subglottal acoustics in speech, singing, and clarinet performance.



Speech and Swallowing

The vocal tract and the upper GI tract occupy the same space and make use of the same structures for both speaking and swallowing, namely the jaw, lips, tongue, soft palate, pharynx, and larynx. We specialize in real-time three-dimensional imaging of the tongue during swallowing.