Projects
Neural mechanisms of hierarchical temporal integration
Natural sounds like speech and music are structured across an enormous range of timescales spanning tens of milliseconds (phonemes), hundreds of milliseconds (syllables and words), seconds (phrases), and minutes (narrative structures). Somehow the brain must rapidly recognize, remember, and synthesize information across these different timescales. We are developing a variety of methods and models to understand the nature of temporal integration across human auditory cortex and in higher-order cognitive and language regions. We are also interested in how the neural mechanisms of temporal integration are shaped by the statistics of the natural environment.
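For intuition, here is a minimal, hypothetical sketch of one way an integration window can be probed: simulate a neuron that averages its input over a fixed window, embed the same sound segments in two different surrounding contexts, and ask how long a segment must be before the response within it no longer depends on that context. The toy neuron, sampling rate, and durations below are all illustrative assumptions, not a description of our actual experiments.

```python
# Illustrative sketch (hypothetical simulation): estimate a neuron's temporal
# integration window from how quickly its response becomes invariant to
# surrounding context as shared segments get longer.
import numpy as np

rng = np.random.default_rng(0)
sr = 100          # samples per second (assumed)
win_ms = 200      # true integration window of the simulated neuron
ctx_len = sr      # 1 s of context, longer than any window considered

def simulate_response(stim, win_ms, sr):
    """Toy neuron: response at each time point is the mean of the stimulus
    over the preceding `win_ms` milliseconds."""
    w = max(1, int(win_ms * sr / 1000))
    kernel = np.ones(w) / w
    return np.convolve(stim, kernel, mode="full")[: len(stim)]

def context_invariance(seg_ms, n_segs=200):
    """Embed the same segments in two different random contexts and
    correlate the responses measured within the shared segments."""
    seg_len = int(seg_ms * sr / 1000)
    shared = rng.standard_normal((n_segs, seg_len))
    r_a, r_b = [], []
    for seg in shared:
        ctx_a = rng.standard_normal(ctx_len)   # context A before the segment
        ctx_b = rng.standard_normal(ctx_len)   # context B before the segment
        resp_a = simulate_response(np.concatenate([ctx_a, seg]), win_ms, sr)
        resp_b = simulate_response(np.concatenate([ctx_b, seg]), win_ms, sr)
        r_a.append(resp_a[ctx_len:])           # response within the shared segment
        r_b.append(resp_b[ctx_len:])
    return np.corrcoef(np.concatenate(r_a), np.concatenate(r_b))[0, 1]

for seg_ms in [50, 100, 200, 400, 800]:
    print(f"{seg_ms:4d} ms segments: context invariance r = {context_invariance(seg_ms):.2f}")
# Responses only become context-invariant once the shared segment is longer
# than the neuron's integration window (~200 ms in this simulation).
```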
Representation of speech and music in non-primary auditory cortex
Human auditory cortex contains distinct neural populations that respond selectively to speech, music, and singing. These populations are located at the highest stages of auditory cortical processing (“non-primary belt/parabelt”), and their responses likely underlie the perception of speech and music. We are using fMRI, intracranial recordings, latent variable modeling, and sound synthesis methods to understand how these populations encode speech and music.
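As a rough illustration of the latent variable modeling mentioned above, the sketch below uses non-negative matrix factorization (a stand-in for the specific methods we use) to decompose a hypothetical matrix of sound-evoked voxel responses into a small number of component response profiles. The matrix sizes and simulated data are assumptions made only for the example.

```python
# Minimal sketch (hypothetical data): decompose sound x voxel responses into
# a few latent components whose profiles can then be related to sound
# categories (e.g., speech- or music-selective components).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_sounds, n_voxels, n_components = 165, 5000, 6     # assumed sizes

# Simulate responses: each voxel is a non-negative mixture of a few
# underlying component response profiles plus noise.
true_profiles = rng.gamma(2.0, 1.0, size=(n_sounds, n_components))
voxel_weights = rng.gamma(1.0, 1.0, size=(n_components, n_voxels))
responses = true_profiles @ voxel_weights + rng.gamma(1.0, 0.1, size=(n_sounds, n_voxels))

# Factorize: responses ~= component_profiles @ component_weights
model = NMF(n_components=n_components, init="nndsvda", random_state=0, max_iter=500)
component_profiles = model.fit_transform(responses)  # (n_sounds, n_components)
component_weights = model.components_                # (n_components, n_voxels)

# Each recovered profile can be compared with sound labels to ask whether a
# component responds mostly to speech, music, singing, etc.
print(component_profiles.shape, component_weights.shape)
```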
Testing computational models of human auditory cortex
Hearing in real-world environments is extremely challenging, yet the human auditory system does a remarkable job of extracting useful information from the complex, chaotic waveform that reaches the ear. These abilities are made possible by a cascade of neuronal processing stages that culminate in a complex, nonlinear representation of sound in non-primary auditory cortex. A key goal of our lab is to develop models that can replicate these nonlinear computations and accurately predict non-primary cortical responses. An important component of this work involves developing better methods for comparing brain responses with those from complex models, like deep neural networks, to reveal model successes and failures.
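As a concrete, hypothetical illustration of this kind of model-brain comparison, the sketch below fits a cross-validated ridge regression (a standard encoding-model approach, not necessarily the exact method we use) that predicts simulated cortical responses from the features of a candidate model and scores prediction accuracy on held-out sounds. The feature dimensions, channel counts, and data are all assumptions.

```python
# Minimal sketch (hypothetical features and responses): predict cortical
# responses from candidate-model features with cross-validated ridge
# regression and measure held-out prediction accuracy per channel.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sounds, n_features, n_channels = 200, 512, 50      # assumed sizes
model_features = rng.standard_normal((n_sounds, n_features))   # e.g., DNN layer activations
true_weights = rng.standard_normal((n_features, n_channels)) * 0.1
responses = model_features @ true_weights + rng.standard_normal((n_sounds, n_channels))

predictions = np.zeros_like(responses)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(model_features):
    reg = RidgeCV(alphas=np.logspace(-2, 4, 13))
    reg.fit(model_features[train], responses[train])
    predictions[test] = reg.predict(model_features[test])

# Prediction accuracy per channel: correlation between measured and predicted
# responses across held-out sounds; systematic failures point to computations
# missing from the candidate model.
accuracy = [np.corrcoef(responses[:, i], predictions[:, i])[0, 1] for i in range(n_channels)]
print(f"median held-out prediction accuracy: r = {np.median(accuracy):.2f}")
```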