PhD Seminars IX
May 27, 2019
Jenny Kartsaki (BIOVISION)
Probing retinal function with a multi-layered simulator
Our brain can recreate images by interpreting a stream of information emitted by one million parallel channels in the retina. This ability is partly due to the astonishing functional and anatomical diversity of the retinal ganglion cells (RGCs), each extracting a different feature of the visual scene. In addition, RGCs “speak” to each other during complex tasks (especially motion processing) via amacrine cells (ACs), which provide lateral connectivity. To decipher their role, we study an experimental setting that allows us to switch RGCs and/or ACs on or off using the drug CNO. This may impact not only the individual responses of RGCs but also their concerted activity in response to different stimuli, thus allowing us to understand how they contribute to the encoding of complex visual scenes. However, it is difficult to distinguish on purely experimental grounds the effect of CNO when both cell types are excited or inhibited, as these cells “antagonise” each other. In contrast, numerical simulation makes this distinction possible. Here, we propose a novel simulation platform that can reflect normal and impaired retinal function, from the single-cell to the large-scale level. It is able to handle different visual processing circuits and allows us to visualise responses to visual scenes (movies). In addition, the platform allows simulation of retinal responses in which we can silence or excite cell subclasses with CNO.
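The idea of switching cell classes on or off and observing the effect on concerted RGC activity can be illustrated with a toy sketch. This is not the platform described above: the two-layer network, the weights, and the silencing flags below are made-up illustrative stand-ins for the CNO manipulation.

```python
import numpy as np

def simulate(stimulus, silence_acs=False, silence_rgcs=False):
    """Toy RGC firing rates for a 1D stimulus (one sample per cell).

    Hypothetical sketch: ACs pool neighbouring activity and feed back
    lateral inhibition onto RGCs; either class can be 'silenced' as a
    crude stand-in for pharmacological (CNO-like) switching.
    """
    drive = np.asarray(stimulus, dtype=float)  # feed-forward drive to each RGC

    if not silence_acs:
        # Lateral AC inhibition: each cell is suppressed by its neighbours.
        lateral = 0.5 * (np.roll(drive, 1) + np.roll(drive, -1))
        drive = drive - 0.4 * lateral

    rates = np.maximum(drive, 0.0)  # rectify to non-negative firing rates
    if silence_rgcs:
        rates[:] = 0.0              # RGC output switched off entirely
    return rates

stim = [0.0, 1.0, 1.0, 1.0, 0.0]
full = simulate(stim)                         # intact circuit
no_acs = simulate(stim, silence_acs=True)     # ACs silenced: inhibition gone
```

Comparing `full` and `no_acs` shows how removing one cell class reshapes the population response, which is the kind of contrast such a simulator makes accessible when experiments cannot separate the two effects.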
Anticipation in the retina and the primary visual cortex: towards an integrated retino-cortical model for motion processing
The retina is able to perform complex tasks and general feature extraction, allowing the visual cortex to process visual stimuli more efficiently. With regard to motion processing, an interesting and useful task performed by the retina is anticipation and trajectory extrapolation. The first contribution of our work lies in the development of a generalized 2D model of the retina with three layers of ganglion cells: fast OFF cells with gain control, accounting for anticipation; direction-selective cells connected via gap junctions; and Y-cells connected through amacrine cells, accounting for motion extrapolation. The second contribution is the use of the output of our retina model as input to a mean-field model of the primary visual cortex, to reproduce motion anticipation as observed in VSDI recordings of V1. We present results of the integrated retino-cortical model for motion processing and study how anticipation and extrapolation depend on stimulus parameters such as speed, shape and trajectory. Through the integrated retino-cortical model we highlight the mechanisms underlying motion anticipation: the cooperation of gain control and lateral connectivity at the level of the retina and lateral connectivity in the cortex.
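The role of gain control in anticipation can be sketched with a minimal single-cell example. This is an illustrative toy, not the authors' model: the stimulus profile, time constants and divisive-gain form below are assumptions chosen only to show the effect, namely that adaptation makes the response peak before the stimulus does.

```python
import numpy as np

# Toy fast-OFF-like unit with divisive gain control (illustrative values).
dt = 1.0
t = np.arange(0.0, 200.0, dt)
stim = np.exp(-0.5 * ((t - 100.0) / 20.0) ** 2)  # drive from a bar sweeping past

a = 0.0              # slow activity variable accumulating recent drive
tau_a = 30.0         # adaptation time constant (made-up)
response = np.empty_like(stim)
for i, s in enumerate(stim):
    gain = 1.0 / (1.0 + 4.0 * a)   # divisive gain shrinks as drive accumulates
    response[i] = gain * s
    a += dt / tau_a * (s - a)       # low-pass filter of the input drive

peak_stim = t[np.argmax(stim)]      # when the bar is centred on the cell
peak_resp = t[np.argmax(response)]  # earlier: the hallmark of anticipation
```

Because the gain has already dropped by the time the stimulus peaks, the product `gain * stim` peaks earlier, shifting the population response towards the leading edge of the moving object; in the full model this retinal output then drives the cortical mean-field stage.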