How does neural activity in the human cortex create our sense of visual perception? The Gardner lab uses a combination of functional magnetic resonance imaging, computational modeling and analysis, and psychophysical measurements to link human perception to cortical brain activity.
Vision is a fabrication of our minds. Sensory information from our eyes is often ambiguous or limited, yet vision is remarkably robust, correctly interpreting even impoverished sensory signals. What cortical computations make this possible? In the framework of Bayesian statistical decision theory, how does the cortex combine sensory evidence from the eyes with priors, or expectations, to form percepts? Priors may be short-term and signaled by the task at hand: a particular spatial location may be more likely to contain the information that is needed. Or they may be long-term, developed over extended exposure to the natural statistics of the visual world: objects tend to move slowly rather than quickly. While much is known about the encoding of sensory evidence, comparatively little is known about priors. Where do priors interact with sensory signals, and how do they modify and augment perception? The Gardner lab uses psychophysics to make precise behavioral measurements of how priors bias sensory decisions while concurrently measuring cortical activity with functional magnetic resonance imaging. Using knowledge of the visual system and decision-theoretic models that link behavior to cortical activity, the lab seeks to understand the cortical computations that construct human vision.
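To make the Bayesian framing concrete, here is a minimal sketch of how a prior biases a percept. It combines a Gaussian sensory likelihood with a Gaussian slow-speed prior using the standard precision-weighted average; all numbers and function names are illustrative assumptions, not the lab's models or data.

```python
def posterior_gaussian(m, sigma_l, mu_p, sigma_p):
    """Combine a Gaussian likelihood N(m, sigma_l^2) from a noisy
    measurement m with a Gaussian prior N(mu_p, sigma_p^2).
    Returns the posterior mean and standard deviation."""
    w_l = 1.0 / sigma_l ** 2          # precision of the sensory evidence
    w_p = 1.0 / sigma_p ** 2          # precision of the prior
    mu = (w_l * m + w_p * mu_p) / (w_l + w_p)
    sigma = (w_l + w_p) ** -0.5
    return mu, sigma

# A slow-speed prior (mean 0 deg/s) pulls a noisy measurement of
# 5 deg/s toward zero; with equal precisions the percept lands halfway.
mu, sigma = posterior_gaussian(m=5.0, sigma_l=2.0, mu_p=0.0, sigma_p=2.0)
print(mu)  # prints 2.5
```

On this account, the more uncertain the sensory evidence (larger sigma_l), the more the percept is drawn toward the prior, which is one behavioral signature such psychophysical measurements can test.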