3:30–4:30 pm
Maria Goeppert-Mayer Lecture Hall
Kersten Physics Teaching Center
Room 106
5720 S. Ellis Avenue
Universality and individuality in neural dynamics across large populations of recurrent networks
David Sussillo, Google
Abstract:
Neuroscience is currently undergoing a data revolution in which many thousands of neurons can be recorded at once. These new data are extremely complex, and inferring the underlying brain computations from them will require a major conceptual advance. To handle this complexity, systems neuroscientists have begun training deep networks, in particular recurrent neural networks (RNNs), to make sense of these newly collected, high-dimensional data. These RNN models are often assessed by quantitatively comparing the model's neural dynamics with those of the brain. However, the nature of the detailed neurobiological inferences one can draw from such comparisons remains elusive. For example, to what extent does training RNNs on the simple tasks prevalent in neuroscientific studies uniquely determine their low-dimensional dynamics, independent of network architecture? Or are the learned dynamics highly sensitive to the choice of architecture? The answers to these questions have strong implications for whether and how to use task-based RNN modeling to understand brain dynamics. To address these foundational questions, we study populations of thousands of RNNs, spanning the architectures commonly used to solve neuroscientifically motivated tasks, and characterize their dynamics. We find that the geometry of the dynamics can be highly sensitive to network architecture. At the same time, while the geometry of neural dynamics varies greatly across architectures, the underlying computational scaffold (the topological structure of fixed points, the transitions between them, limit cycles, and aspects of the linearized dynamics) often appears universal across all architectures. Overall, this analysis of universality and individuality across large populations of RNNs provides a much-needed foundation for interpreting quantitative measures of dynamical similarity between RNN and brain dynamics.
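The abstract mentions characterizing RNN dynamics through their fixed-point structure. As a minimal illustration of the standard fixed-point-finding idea for a discrete-time RNN (not the speakers' actual code), one can minimize the "speed" q(x) = ½‖F(x) − x‖², where F is the network's autonomous update; states with q ≈ 0 are candidate fixed points whose linearized dynamics can then be inspected. All names below (the toy network size, weights `W`, bias `b`, and `step`) are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 32  # hypothetical (toy) network size
# Small-gain random recurrent weights so the autonomous map is well behaved
W = rng.normal(scale=0.9 / np.sqrt(N), size=(N, N))
b = rng.normal(scale=0.1, size=N)

def step(x):
    """One step of an autonomous (input-free) vanilla tanh RNN: F(x)."""
    return np.tanh(W @ x + b)

def speed_and_grad(x):
    """q(x) = 0.5 * ||F(x) - x||^2 and its analytic gradient J^T r."""
    F = np.tanh(W @ x + b)
    r = F - x
    # Jacobian of F(x) - x: diag(1 - F^2) @ W - I
    J = (1.0 - F**2)[:, None] * W - np.eye(N)
    return 0.5 * r @ r, J.T @ r

# Minimize the speed from a random initial state to find a fixed point
x0 = rng.normal(size=N)
res = minimize(speed_and_grad, x0, jac=True, method="L-BFGS-B")
x_star = res.x
residual = np.max(np.abs(step(x_star) - x_star))  # ~0 at a true fixed point
```

Linearizing `step` at `x_star` (its Jacobian eigenvalues) then characterizes the local dynamics; comparing such fixed-point skeletons across architectures is the kind of topological comparison the abstract describes.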