My initial motivation for switching from computer science to neuroscience was to draw inspiration from neural circuits to improve AI and machine learning techniques. The two fields have a long, intertwined history, from the original perceptron and convolutional networks to modern attention mechanisms and experience replay memories. Since joining a neuroethology group, however, I have come to appreciate the questions and challenges within systems neuroscience itself that can benefit from rigorous computational analysis. In particular, I am interested in how the brain combines information from different sensory modalities to form mixed representations, make decisions, and drive behavior. As a model system, I will study the mouse superior colliculus (SC) because of its unique functional repertoire: it processes inputs from the visual, auditory, and somatosensory modalities and integrates them into a motor response. This makes it an excellent testbed for questions such as how different sensory inputs cooperate when their information is consistent, how they antagonize each other when environmental cues conflict, and which inputs can elicit a behavioral response on their own versus only in conjunction with others, perhaps in a modulatory role. My hope is that studying multimodal integration in the SC will not only deepen our understanding of how animals solve the binding problem of forming a coherent representation of the environment from multiple senses, but also help us apply these principles to sensor fusion in robotics and multimodal learning in neural networks.
Large-Scale Optical Retina Recordings