[Statlist] Reminder: ETH Young Data Science Researcher Seminar Zurich, Virtual Seminar by Christos Thrampoulidis, University of British Columbia

Maurer Letizia letiziamaurer at ethz.ch
Thu Mar 11 08:02:55 CET 2021


Dear all,

We are glad to announce the following talk in the virtual ETH Young Data Science Researcher Seminar Zurich:

"Blessings and Curses of Overparameterization: A Precise High-​dimensional Approach"  
by Christos Thrampoulidis, University of British Columbia

Time: Friday, 12 March 2021, 16:30-17:30
Place: Zoom at https://ethz.zoom.us/j/92367940258

Abstract: State-of-the-art deep neural networks generalize well despite being exceedingly overparameterized and trained without explicit regularization. Understanding the principles behind this phenomenon, termed benign overfitting or double descent, poses a new challenge to modern learning theory, as it contradicts classical statistical wisdom. Key questions include: What are the fundamental mechanics behind double descent? How do its features, such as the transition threshold and global minima, depend on the training data and on the algorithms used for training? While increasing overparameterization can improve classification accuracy, it also comes with larger, thus slower and computationally more expensive, architectures that can be prohibitive in resource-constrained applications. Is overparameterization then only relevant to training large networks, or can it also benefit training smaller models when combined with appropriate model-pruning techniques? What are the generalization dynamics of pruning overparameterized models? Finally, while overparameterization leads to lower misclassification error, what is its effect on fairness metrics, such as balanced error and equal opportunity? Can we design loss functions which, compared to standard losses such as cross-entropy, provably improve the fairness performance of large models in the presence of label-imbalanced and/or group-sensitive datasets? This talk will shed light on the questions raised above. At the heart of the results presented lies a powerful framework for precise high-dimensional statistical analysis. This so-called Convex Gaussian Min-max Theorem (CGMT) framework builds on Gordon's Gaussian comparison inequality and is rooted in the study of sharp phase transitions in Compressed Sensing.
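
For readers who have not seen the double-descent curve mentioned in the abstract, the following minimal sketch (not from the talk; all parameter choices are illustrative assumptions) reproduces it with minimum-norm least squares on random ReLU features. The test error spikes as the number of features p approaches the number of training samples and falls again in the overparameterized regime:

import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 1000, 20          # illustrative sizes
beta = rng.standard_normal(d) / np.sqrt(d)  # ground-truth linear signal

X_train = rng.standard_normal((n_train, d))
X_test = rng.standard_normal((n_test, d))
y_train = X_train @ beta + 0.1 * rng.standard_normal(n_train)
y_test = X_test @ beta

for p in (10, 50, 90, 100, 110, 200, 500, 2000):  # number of random features
    W = rng.standard_normal((d, p)) / np.sqrt(d)  # fixed random first layer
    F_train = np.maximum(X_train @ W, 0)          # ReLU feature maps
    F_test = np.maximum(X_test @ W, 0)
    # lstsq returns the minimum-norm solution once p exceeds n_train
    theta = np.linalg.lstsq(F_train, y_train, rcond=None)[0]
    print(p, np.mean((F_test @ theta - y_test) ** 2))

The printed test error rises sharply near p = n_train (the interpolation threshold) and decreases again as p grows, which is the double-descent shape the abstract refers to.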

Best wishes,

M. Azadkia, Y. Chen, G. Chinot, M. Löffler, A. Taeb

Seminar website: https://math.ethz.ch/sfs/news-and-events/young-data-science.html

