BMS-ANed seminar: Demystifying Double Descent, Benign Overfitting and Other Surprises Surrounding the Bias-Variance Tradeoff - VVSOR

03 March 2026

From: A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning (Alicia Curth, Alan Jeffares, Mihaela van der Schaar)

We are very happy to announce our 13th online seminar in the Biostatistics Seminar Series, on Tuesday, March 3rd, 16h-17h (CET).

This series of Biostatistics seminars targets a broad (bio)statistical audience, in particular PhD students. Specialists discuss a topic of their interest, paying particular attention to concepts that are relevant and accessible to non-specialists as well.

Speaker
Alicia Curth
Senior Researcher in Machine Learning at Microsoft Research Cambridge
More details are available on her webpage.

Title
Demystifying Double Descent, Benign Overfitting and Other Surprises Surrounding the Bias-Variance Tradeoff

Abstract
Despite the now widespread practical use of heavily overparameterized machine learning (ML) methods, notable gaps remain in our theoretical understanding of why some of these methods generalize well despite appearing highly overfitted. Recently observed empirical phenomena – like double descent and benign overfitting – seem to contradict fundamental statistics textbook intuitions – like the classical U-shaped trade-off relating a model’s size to its performance. In this talk, with special focus on demystifying the double descent phenomenon, I will show that some observed behaviours of overparameterized ML methods can be reconciled with statistical intuition in surprisingly simple ways.

This talk is based on:

  • Curth, A., Jeffares, A., & van der Schaar, M. (2023). A u-turn on double descent: Rethinking parameter counting in statistical learning. Advances in Neural Information Processing Systems, 36.
  • Curth, A. (2024). Classical statistical (in-sample) intuitions don’t generalize well: A note on bias-variance tradeoffs, overfitting and moving from fixed to random designs. arXiv preprint arXiv:2409.18842.
  • Curth, A., Jeffares, A., & van der Schaar, M. (2024). Why do random forests work? Understanding tree ensembles as self-regularizing adaptive smoothers. arXiv preprint.

Teams link
Will follow.