This discussion with Arthur Jacot was based on two of his papers, “Implicit Bias of Large Depth Networks: a Notion of Rank for Nonlinear Functions” (notable-top-25% at ICLR 2023) and “Bottleneck Structure in Learned Features: Low-Dimension vs Regularity Tradeoff”. The work concerns the complexity of the functions learned by deep neural networks and highlights how layer representations differ with depth. Both the theoretical results and the empirical evaluation indicate that very deep networks learn extremely simple, low-rank functions in their middle layers, exhibiting a bottleneck structure.
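To make the bottleneck picture concrete, here is a minimal sketch of how one might observe it numerically: train a deep MLP with weight decay on a task whose target depends on only a one-dimensional projection of the input, then estimate the effective rank of each hidden representation from its singular values. All hyperparameters (depth, width, learning rate, rank threshold) are illustrative assumptions, not values taken from the papers.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Deep MLP: depth and width are illustrative choices.
depth, width, in_dim = 10, 64, 32
layers = []
d = in_dim
for _ in range(depth):
    layers += [nn.Linear(d, width), nn.ReLU()]
    d = width
layers += [nn.Linear(width, 1)]
net = nn.Sequential(*layers)

# Synthetic target depending on a single linear projection of x,
# so the "true rank" of the task is 1.
X = torch.randn(1024, in_dim)
w = torch.randn(in_dim, 1)
y = torch.tanh(X @ w)

# Weight decay plays the role of the L2 regularization under which
# the low-rank bias is analyzed.
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)
for _ in range(2000):
    opt.zero_grad()
    loss = ((net(X) - y) ** 2).mean()
    loss.backward()
    opt.step()

# Effective rank of each hidden layer's activations: count singular
# values above a small fraction of the largest one (threshold is an
# arbitrary illustrative cutoff).
h = X
with torch.no_grad():
    for i, layer in enumerate(net):
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            s = torch.linalg.svdvals(h - h.mean(0))
            rank = (s > 0.01 * s[0]).sum().item()
            print(f"layer {i}: effective rank ~ {rank}")
```

In runs of this kind, the effective rank tends to drop sharply in the middle layers toward the rank of the task itself, which is the bottleneck structure described above.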

You can find the presentation here.