In this blog we collect notes from the sessions of our reading group on the Mathematics of Deep Learning. We may also start adding posts that are not tied to specific meetings, so stay tuned!

Our scope includes (but is not limited to):

  • Loss surface understanding
  • Generalization bounds
  • Information theory for Deep Learning
  • Non-convex optimization
  • Bayesian perspective on Deep Learning
  • Adversarial learning
  • Double descent(s)
  • Fairness and interpretability
  • Certifiable adversarial robustness
  • Neural Tangent Kernel (NTK)
  • Unsupervised Domain Adaptation
  • Geometric Deep Learning

We try to pick papers that strike a balance: not too theoretical, but not purely applied either.