BI 052 Andrew Saxe: Deep Learning Theory
Andrew and I discuss his work exploring how various facets of deep networks contribute to their function, i.e. deep network theory. We talk about what he's learned by studying linear deep networks and asking how depth and initial weights affect learning dynamics, when replay is appropriate (and when it's not), how semantics develop, and what it all might tell us about deep learning in brains. Show notes: Visit Andrew's website. The papers we discuss or mention: Are Efficient Deep Representations Learn