Ari Rosenberg, PhD
Position title: Associate Professor, Computational Neuroscience
PhD, University of Chicago
1111 Highland Avenue
Room 5505 WIMR-II
Madison, WI 53705
We study the neural computations underlying 3D vision and multisensory integration, as well as the neural basis of autism.
How do we perceive the three-dimensional (3D) structure of the world when our eyes only sense 2D projections, like a movie on a screen? Estimating the 3D scene structure of our environment from a pair of 2D images (like those on our retinae) is mathematically an ill-posed inverse problem: it is plagued by ambiguities and noise, and it involves highly nonlinear constraints imposed by multi-view geometry. Given these complexities, it is quite impressive that the visual system is able to construct 3D representations that are accurate enough for us to successfully interact with our surroundings. A major area of research in the lab is devoted to understanding how the brain achieves accurate and reliable 3D representations of the world. A critical aspect of 3D vision is the encoding of 3D object orientation (e.g., the slant and tilt of a planar surface). By adapting mathematical tools used to analyze geomagnetic data (Bingham functions), we developed the first methods for quantifying the selectivity of visual neurons for 3D object orientation. Our work on this topic employs a synergistic, multifaceted approach combining computational modeling, neurophysiological studies, and human psychophysical experiments.
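The Bingham function mentioned above is, in its standard form, an antipodally symmetric distribution on the sphere, exp(x^T B x) for a unit vector x and symmetric matrix B, which makes it a natural tuning model for axial quantities like the surface normal of a plane (defined by its slant and tilt). The sketch below is purely illustrative, not the lab's actual analysis code: the neuron, its preferred orientation, and the concentration parameter are all hypothetical.

```python
import numpy as np

def unit_normal(slant, tilt):
    """Unit surface-normal vector for a planar surface with the given
    slant (angle away from the line of sight) and tilt (direction of
    that slant in the image plane), both in radians."""
    return np.array([
        np.sin(slant) * np.cos(tilt),
        np.sin(slant) * np.sin(tilt),
        np.cos(slant),
    ])

def bingham_tuning(x, B):
    """Unnormalized Bingham-style tuning curve: exp(x^T B x) for a unit
    vector x and a symmetric parameter matrix B."""
    return np.exp(x @ B @ x)

# Hypothetical neuron preferring a 45-deg slant, 0-deg tilt surface.
mu = unit_normal(np.pi / 4, 0.0)   # preferred surface normal
B = 5.0 * np.outer(mu, mu)         # concentration about +/- mu

# Response peaks at the preferred orientation and falls off smoothly
# as the probe surface normal rotates away from it.
r_pref = bingham_tuning(unit_normal(np.pi / 4, 0.0), B)
r_off = bingham_tuning(unit_normal(np.pi / 2, np.pi / 2), B)
```

With B = k·mu·mu^T this reduces to exp(k·cos²θ), where θ is the angle between the probe normal and the preferred axis; the antipodal symmetry exp(x^T B x) = exp((-x)^T B (-x)) is what makes the form appropriate for surface normals, which are axes rather than directions.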
Our visual system first encodes the environment in egocentric coordinates defined by our eyes. Such representations are inherently unstable in that they shift and rotate as we move our eyes or head. However, visual perception of the world is largely unaffected by such movements, a phenomenon known as spatial constancy. Perception is instead anchored to gravity, which is why buildings are seen as vertically oriented even if you tilt your head to the side. This stability of visual perception is a consequence of multisensory processing in which the brain uses gravitational signals detected by the vestibular and proprioceptive systems to re-express egocentrically encoded visual signals in gravity-centered coordinates. Vestibular deficits can thus compromise visual stability, and the absence of gravity in space can cause astronauts to experience disorienting jumps in the perceived visual orientation of their surroundings. A second area of research in the lab investigates where and how the brain combines visual information with vestibular and proprioceptive signals in order to achieve a stable, gravity-centered representation of the world. Our work on this topic relies on a combination of computational modeling and neurophysiological studies.
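The re-expression described above can be illustrated with the simplest case: a head roll about the line of sight. When the head rolls by some angle, the retinal image of a vertical edge rotates by the opposite angle; applying the vestibular estimate of head roll undoes that rotation and recovers the edge's gravity-centered orientation. This is a toy sketch of the coordinate transformation, not a model of the neural circuit, and the 20-degree roll is an arbitrary example value.

```python
import numpy as np

def roll_matrix(theta):
    """Rotation by theta (radians) about the line of sight (z-axis)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# A vertical edge (e.g., the side of a building) in gravity-centered
# coordinates:
world_vertical = np.array([0.0, 1.0, 0.0])

# With the head rolled 20 deg, the edge's retinal image is rotated by
# -20 deg relative to the head:
head_roll = np.deg2rad(20.0)
retinal = roll_matrix(-head_roll) @ world_vertical

# Re-express the retinal signal in gravity-centered coordinates using
# the (vestibular) estimate of head roll; the edge is vertical again:
recovered = roll_matrix(head_roll) @ retinal
```

The same logic is why an inaccurate gravity estimate, whether from a vestibular deficit or from weightlessness, leaves the visual scene apparently tilted or unstable: the compensating rotation no longer matches the true head orientation.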
Neural basis of autism
The prevalence of autism is growing at a dramatic rate, and the scientific community's response has grown in step. The extent to which this rise in prevalence reflects growing awareness, overdiagnosis, or a genuine increase in incidence is currently unclear. Considering that last year alone saw almost 4,000 publications related to autism, it is perhaps not surprising that a number of controversies are emerging within the field. A third area of research in the lab aims to lift the fog surrounding this complex disorder. Given the heterogeneity of the genetic and environmental factors that may give rise to autism, as well as its phenotypic diversity, our approach takes the perspective that we can better understand autism by studying how the disorder affects neural computation. We are chiefly interested in determining whether the behavioral consequences of autism reflect alterations in canonical neural computations that occur throughout the brain, such as divisive normalization. To test this hypothesis, we are conducting psychophysical studies based on predictions of our recent computational work on the disorder. Our hope is that identifying where and how neural computations are altered in autism will provide a unique window of understanding into the disorder and may provide important insights into its treatment.
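Divisive normalization, the canonical computation named above, divides each neuron's driving input by a constant plus the pooled activity of the surrounding population. A minimal sketch follows; the specific form of the attenuation (a single weight w on the normalization pool) and all numbers are illustrative assumptions, not the lab's published model.

```python
import numpy as np

def divisive_normalization(drives, sigma=1.0, w=1.0):
    """Canonical divisive normalization: each neuron's drive is divided
    by a semisaturation constant (sigma) plus the weighted summed
    activity of the normalization pool. Setting w < 1 attenuates the
    normalization."""
    drives = np.asarray(drives, dtype=float)
    return drives / (sigma + w * drives.sum())

# Hypothetical driving inputs to three neurons:
drives = np.array([4.0, 2.0, 1.0])

typical = divisive_normalization(drives, w=1.0)      # full normalization
attenuated = divisive_normalization(drives, w=0.25)  # weakened pool
```

Because the divisor shrinks when the pool weight is reduced, attenuated normalization produces uniformly larger responses to the same inputs, the kind of population-level change whose perceptual consequences can then be probed psychophysically.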
Elmore LC, Rosenberg A, DeAngelis GC, Angelaki DE. (2019). Choice-Related Activity during Visual Slant Discrimination in Macaque CIP But Not V3A. eNeuro, 6(2): ENEURO.0248-18.2019. doi:10.1523/ENEURO.0248-18.2019.
Thompson L, Ji M, Rokers B, Rosenberg A. (2019). Contributions of binocular and monocular cues to motion-in-depth perception. Journal of Vision, 19(3):2. doi:10.1167/19.3.2.
Kim B, Kenchappa SC, Sunkara A, Chang TY, Thompson L, Doudlah R, Rosenberg A. (2019). Real-time experimental control using network-based parallel processing. eLife, 8: e40231. doi:10.7554/eLife.40231.
Rosenberg A, Sunkara A. (2018). Does attenuated divisive normalization affect gaze processing in autism spectrum disorder? A commentary on Palmer et al. (2018). Cortex, 111:316-318. doi:10.1016/j.cortex.2018.06.017.
Dakin CJ, Rosenberg A. (2018). Gravity estimation and verticality perception. Handbook of Clinical Neurology, 159(3): 43-59.
Rosenberg A, Patterson JS, Angelaki DE. (2015). A computational perspective on autism. Proceedings of the National Academy of Sciences, 112(30): 9158-9165.
Rosenberg A, Patterson JS, Angelaki DE. (2015). A synergistic approach to mental health research. Proceedings of the National Academy of Sciences, 112(38): E5227.