
By Shannon Turgeon
Photography by Joshua Franzos
Jason Bohland, assistant professor of communication science and disorders at the University of Pittsburgh’s School of Health and Rehabilitation Sciences, is committed to understanding how the brain’s motor and auditory systems interact during speech production.
However, his path to neuroscience was not linear.
He initially studied computer engineering and worked as a programmer during his undergraduate years, but a master’s degree program in electrical engineering presented opportunities for Bohland to immerse himself in research. He studied the role of connectivity patterns in artificial neural networks and eventually took a course on brain modeling.
“I was like, ‘Wait, this is it!’ I want to understand human cognition in a way that I can mechanistically explain it,” he said.
While pursuing his PhD in cognitive and neural systems, the study of speech unexpectedly caught his attention. He was drawn to the concept of interfacing language—an abstract system—with the brain’s analog and continuous motor system.
“Thinking about speech as a fundamental human behavior that allows us to take this incredible capacity for language, and turn it into something our motor system can control so rapidly and effortlessly—thinking about those problems made me realize how much amazing complexity is there, and I got hooked,” he said.
On Friday, March 6, Bohland will present “A Neurobiological Model of Speaking-Induced Sensory Modulation” as part of the 2026 Senior Vice Chancellor’s Research Seminar Series.
Bohland leads Pitt’s Speech and Neural Systems Lab, where he studies speech motor control and a mechanism that operates during speech called predictive coding.
“As we make movements with our tongue and lips and jaw, we’re also predicting what the sensory consequences of those movements are. So, we’re sending information to our auditory system, effectively saying, ‘Hey, this is what’s about to come out.’ That allows us to monitor our speech for errors, but it also allows us to even more finely steer our speech,” he said.
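The loop Bohland describes can be caricatured in a few lines of code. The sketch below is purely illustrative and is not his lab’s model: the linear forward_model, the correction gain, and the numbers are assumptions introduced here only to show how a prediction made from the motor command is compared against the feedback that actually arrives, with the mismatch steering the next command.

```python
# Toy predictive-coding loop for one speech "frame":
# 1) an efference copy of the motor command is run through a forward model
#    to predict the auditory consequence (here, a single formant-like value),
# 2) the prediction is compared with the feedback that actually arrives,
# 3) the prediction error nudges the next motor command.

def forward_model(command: float) -> float:
    # Hypothetical learned mapping from an articulator command (arbitrary units)
    # to an expected auditory feature in Hz. Illustrative only.
    return 500.0 + 800.0 * command

command = 0.5
predicted = forward_model(command)

# Suppose the sound that actually comes out is 40 Hz higher than predicted.
actual = predicted + 40.0

error = actual - predicted       # speaking-induced prediction error
gain = 0.2                       # assumed corrective gain
command -= gain * error / 800.0  # steer the next command back toward the target

print(f"prediction error: {error:.1f} Hz, corrected command: {command:.3f}")
```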
His team conducts behavioral experiments using techniques such as delayed auditory feedback (DAF) to disrupt the predictive coding process and observe how the speech motor system reacts.
“With DAF, we simply take what you are producing and introduce a delay of, say, 100 or 200 milliseconds, so you’re hearing exactly what you said, but with a delay. This, for most speakers, will cause severe disfluencies,” Bohland said.
“The pattern of what happens to your speech tells us a lot about how that control system is working, and what the nature of that predictive coding actually might be.”
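Mechanically, the DAF manipulation is simply a time shift of the talker’s own audio. Below is a minimal offline sketch, assuming a mono recording held in a NumPy array at a known sample rate; the function name and parameters are illustrative rather than the lab’s actual software, and a real DAF experiment applies the delay in real time through headphones.

```python
import numpy as np

def delay_feedback(signal: np.ndarray, sample_rate: int, delay_ms: float = 200.0) -> np.ndarray:
    """Shift the signal later by delay_ms (e.g., 100 or 200 ms), padding the start
    with silence, so the listener hears exactly what was said, only later."""
    delay_samples = int(round(sample_rate * delay_ms / 1000.0))
    padded = np.concatenate([np.zeros(delay_samples, dtype=signal.dtype), signal])
    return padded[: signal.shape[0]]  # trim back to the original length

# Example: delay a 1-second, 44.1 kHz test tone by 200 ms.
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220.0 * t).astype(np.float32)
delayed_tone = delay_feedback(tone, sr, delay_ms=200.0)
```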
The findings have been surprising.
“Not all sounds are created equally,” he said. “We find very specific patterns of errors when we introduce these feedback delays. For example, at least under certain speaking conditions, consonants are almost entirely unaffected by delayed auditory feedback, whereas vowels are much more severely affected.”
Bohland’s lab also uses functional magnetic resonance imaging (fMRI) to learn how the activation of the auditory system differs under various experimental conditions. His team is among the first to use ultra-high-field 7 Tesla fMRI to study the control of speech.
“What we’re doing is aimed at understanding that control system for speech and how it can be a little bit different from person to person. If we’re successful, we will understand that individual variability, both behaviorally and neurally, at a much finer level than has ever been done before,” he said.
“If we get there, that sets us on a course to taking somebody who has a diagnosed problem with that control system, and really thinking about what is mechanistically different and what might be some avenues for intervention.”