Information-theoretic generalization bounds for black-box learning algorithms
Hrayr Harutyunyan presented the information-theoretic perspective on generalization bounds, based on two papers: "Information-theoretic generalization bounds for black-box learning algorithms" (NeurIPS 2021) and "Formal limitations of sample-wise information-theoretic generalization bounds" (IEEE ITW 2022). We discussed the original idea of bounding the generalization gap of a learning algorithm by the mutual information between the training dataset and the parameters of the learned model, and how this idea was developed in the presented works. We then discussed the limitations of such bounds and possible workarounds for making them useful for state-of-the-art machine learning models.
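For context, the foundational bound in this line of work (Xu and Raginsky, 2017) controls the expected generalization gap by the mutual information between the training set $S$ of $n$ i.i.d. samples and the learned weights $W$. A sketch, assuming the loss $\ell(w, z)$ is $\sigma$-sub-Gaussian under the data distribution:

$$
\mathbb{E}\big[\, L_{\mathcal{D}}(W) - L_S(W) \,\big] \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},
$$

where $L_{\mathcal{D}}(W)$ is the population risk and $L_S(W)$ the empirical risk. A notable limitation is that $I(W; S)$ can be infinite for deterministic learners; the NeurIPS 2021 paper addresses this by instead measuring the information contained in the model's predictions (the functional-CMI approach), which stays finite and treats the learner as a black box.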
The presentation can be found here.