We had a very interesting discussion of memorization in neural networks: how to detect that memorization is happening, and whether the common belief that memorization is concentrated in the final layers is true. The research “Can Neural Network Memorization Be Localized?”, presented by Pratyush Maini, shows that memorization is not simply concentrated in any particular layer; rather, it tends to be localized in a small number of neurons dispersed across layers. We then discussed an approach to preventing the memorization of noisy labels via a special type of dropout (example-tied dropout).
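To make the dropout idea concrete, here is a minimal sketch of example-tied dropout in PyTorch. The idea is that a fixed set of “generalization” channels is active for every example, while each training example activates only its own fixed random subset of the remaining “memorization” channels, so memorization gets funneled into those channels. All names and hyperparameters here (`ExampleTiedDropout`, `p_gen`, `p_mem`) are illustrative assumptions, not the paper's exact implementation.

```python
import torch

class ExampleTiedDropout(torch.nn.Module):
    """Sketch of example-tied dropout (illustrative, not the paper's code).

    The first `p_gen` fraction of channels is always active (shared,
    "generalization" channels); each example activates only its own fixed
    random subset of the remaining "memorization" channels.
    """

    def __init__(self, num_channels, num_examples, p_gen=0.2, p_mem=0.1, seed=0):
        super().__init__()
        self.n_gen = int(p_gen * num_channels)  # always-on channels
        n_mem = num_channels - self.n_gen
        g = torch.Generator().manual_seed(seed)
        # Fixed per-example mask over the memorization channels.
        mask = (torch.rand(num_examples, n_mem, generator=g) < p_mem).float()
        self.register_buffer("mem_mask", mask)

    def forward(self, x, idx):
        # x: (batch, channels); idx: indices of the examples in the dataset.
        mask = torch.ones_like(x)
        mask[:, self.n_gen:] = self.mem_mask[idx]
        return x * mask
```

At inference time one would zero out the memorization channels entirely, which (per the paper's motivation) discards much of the memorized noisy-label behavior while keeping the shared generalization channels.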

You can find the slides from the presentation here.