ZüKoSt: Seminar on Applied Statistics

Spring Semester 2024

ZüKoSt Zürcher Kolloquium über Statistik

Title: If calibration, discrimination, Brier, lift gain, precision recall, F1, Youden, AUC, and 27 other accuracy metrics can't tell you if a prediction model (or diagnostic test, or marker) is of clinical value, what should you use instead?
Speaker, Affiliation: Andrew Vickers, Memorial Sloan Kettering Cancer Center, New York
Date, Time: 1 February 2024, 15:30-16:30
Location: AKI Lecture Hall 1&2, Hirschengraben 86, 8001 Zürich
Abstract: A typical paper on a prediction model (or diagnostic test or marker) presents some accuracy metrics (say, an AUC of 0.75 and a calibration plot that doesn't look too bad) and then recommends that the model (or test or marker) be used in clinical practice. But how high an AUC (or Brier or F1 score) is high enough? What level of miscalibration would be too much? The problem is redoubled when comparing two different models (or tests or markers). What if one prediction model has better discrimination but the other has better calibration? What if one diagnostic test has better sensitivity but worse specificity? Note that it does not help to state a general preference, such as "if we think sensitivity is more important, we should take the test with the higher sensitivity", because this does not allow us to evaluate trade-offs (e.g. test A with sensitivity of 80% and specificity of 70% vs. test B with sensitivity of 81% and specificity of 30%). The talk will start by showing a series of everyday examples of prognostic models, demonstrating that it is difficult to tell which is the better model, or whether to use a model at all, on the basis of routinely reported accuracy metrics such as AUC, Brier score or calibration. We then give the background to decision curve analysis, a net benefit approach first introduced about 15 years ago, and show how this methodology gives clear answers about whether to use a model (or test or marker) and which is best. Decision curve analysis has been recommended in editorials in many major journals, including JAMA, JCO and the Annals of Internal Medicine, and is very widely used in the medical literature, with well over 1000 empirical uses a year. We are pleased to invite you; see you there!
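
The abstract's central idea, net benefit, can be made concrete with a small, hedged sketch. Assuming the standard decision-curve definition NB(pt) = TP/N - FP/N * pt/(1 - pt), where pt is the threshold probability at which one would opt for treatment, the Python snippet below (synthetic data, not material from the talk) compares a model against the "treat all" and "treat none" strategies across a range of thresholds.

```python
import numpy as np

def net_benefit(y, risk, pt):
    """Net benefit of the strategy 'treat if predicted risk >= pt'."""
    treat = risk >= pt
    n = len(y)
    tp = np.sum(treat & (y == 1)) / n   # true positives per patient
    fp = np.sum(treat & (y == 0)) / n   # false positives per patient
    return tp - fp * pt / (1 - pt)      # false positives weighted by the odds at pt

# Synthetic outcomes and predicted risks, purely for illustration
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.2, size=1000)
risk = np.clip(0.2 + 0.3 * (y - 0.2) + rng.normal(0, 0.1, size=1000), 0.01, 0.99)

for pt in np.linspace(0.05, 0.5, 10):
    nb_model = net_benefit(y, risk, pt)
    nb_all = net_benefit(y, np.ones_like(risk), pt)   # treat everyone
    nb_none = 0.0                                     # treat no one
    print(f"pt={pt:.2f}  model={nb_model:+.3f}  treat all={nb_all:+.3f}  treat none={nb_none:+.3f}")
```

A decision curve is simply these net benefit values plotted against the threshold probability; at any clinically sensible threshold, the strategy with the highest net benefit is preferred, which is how the approach resolves trade-offs such as the sensitivity/specificity example above.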

Title: Exceptional flood events: insights from three simulation approaches
Speaker, Affiliation: Manuela Brunner, WSL Institute for Snow and Avalanche Research SLF
Date, Time: 29 February 2024, 15:15-16:15
Location: HG G 19.2
Abstract: Exceptional floods, i.e. flood events with magnitudes or spatial extents occurring only once or twice a century, are rare by definition, so it is challenging to estimate their frequency, magnitude, and future changes. In this talk, I discuss three methods that, by increasing sample size, enable us to study exceptional extreme events absent from observational records: stationary and non-stationary stochastic simulation, reanalysis ensemble pooling, and single-model initialized large ensembles. I apply these techniques to (1) study the frequency of widespread floods, (2) quantify future changes in spatial flood extents, (3) estimate the magnitude of floods happening once or twice a century, and (4) shed light on the relationship between future increases in extreme precipitation and flooding. These applications suggest that simulation approaches that substantially increase sample size provide a better picture of flood variability and help to improve our understanding of the characteristics, drivers, and changes of exceptional extreme events.
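
To see why a substantially larger simulated sample matters for events that occur only once or twice a century, consider this toy Python sketch (synthetic Gumbel-distributed annual maxima, entirely hypothetical and unrelated to the speaker's models): it compares empirical estimates of the 100-year flood level from a 50-year "record" and from a 10,000-year stochastic simulation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical distribution of annual maximum discharge (Gumbel, in m^3/s)
loc, scale = 300.0, 80.0
def annual_maxima(n_years):
    return rng.gumbel(loc, scale, size=n_years)

# True 100-year return level = 0.99 quantile of the annual-maximum distribution
true_q100 = loc - scale * np.log(-np.log(0.99))

short_record = annual_maxima(50)       # a typical observational record
long_series = annual_maxima(10_000)    # a long stochastic simulation

print(f"true 100-year level:        {true_q100:7.1f}")
print(f"estimate from 50 years:     {np.quantile(short_record, 0.99):7.1f}")
print(f"estimate from 10,000 years: {np.quantile(long_series, 0.99):7.1f}")
```

In practice one would fit an extreme-value distribution rather than read off an empirical quantile, but rerunning the 50-year estimate shows how widely it scatters around the true value, which is exactly the sampling problem that larger simulated ensembles are meant to ease.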

Title: Probabilistic preference learning from incomplete rank data
Speaker, Affiliation: Arnoldo Frigessi, University of Oslo
Date, Time: 11 April 2024, 15:15-16:15
Location: HG G 19.1
Abstract: Ranking data are ubiquitous: we rank items as citizens, as consumers, as scientists, and we are collectively characterised, individually classified and recommended, based on estimates of our preferences. Preference data occur when we express comparative opinions about a set of items, by rating, ranking, pair-comparing, liking, choosing or clicking, usually in an incomplete and possibly inconsistent way. The purpose of preference learning is to (i) infer the shared consensus preference of a group of users, or (ii) estimate for each user their individual ranking of the items when the user indicates only incomplete preferences; the latter is an important part of recommender systems. I present a Bayesian preference-learning framework based on the Mallows rank model with any right-invariant distance, to infer the consensus ranking of a group of users and to estimate the complete ranking of the items for each user. MCMC-based inference is possible, by importance-sampling approximation of the normalising function, but mixing can be slow. We propose a Variational Bayes approach to posterior inference, based on a pseudo-marginal approximating distribution on the set of permutations of the items. The approach scales well and has useful theoretical properties. Partial rankings and non-transitive pair comparisons are handled by Bayesian augmentation. The Bayes-Mallows approach produces well-calibrated uncertainty quantification of estimated preferences, which is useful for recommendation, leading to excellent accuracy and increased diversity compared, for example, to matrix factorisation. Simulations and real-world applications help illustrate the method. This talk is based on joint work with Elja Arjas, Marta Crispino, Qinghua Liu, Ida Scheel, Øystein Sørensen, and Valeria Vitelli.
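
For readers unfamiliar with the Mallows model: the probability of a ranking r given a consensus rho and scale alpha is proportional to exp(-(alpha/n) d(r, rho)) for a right-invariant distance d, and the normalising constant does not depend on rho. The Python sketch below (synthetic complete rankings, brute force over a handful of items; the talk's framework instead uses MCMC and variational inference and handles incomplete data) therefore finds the maximum-likelihood consensus simply as the permutation minimising the total footrule distance to the data.

```python
import itertools
import numpy as np

def footrule(r, rho):
    """Spearman's footrule distance between two rank vectors."""
    return int(np.sum(np.abs(np.asarray(r) - np.asarray(rho))))

def consensus_ml(rankings):
    """Brute-force maximum-likelihood consensus under the Mallows model:
    the permutation minimising the total distance to the observed rankings
    (valid because the normalising constant does not depend on the consensus)."""
    n_items = len(rankings[0])
    best, best_total = None, float("inf")
    for rho in itertools.permutations(range(1, n_items + 1)):
        total = sum(footrule(r, rho) for r in rankings)
        if total < best_total:
            best, best_total = rho, total
    return best, best_total

# Synthetic data: rank vectors (ranks assigned to items A, B, C, D) from five users
rankings = [
    (1, 2, 3, 4),
    (1, 3, 2, 4),
    (2, 1, 3, 4),
    (1, 2, 4, 3),
    (1, 2, 3, 4),
]
rho_hat, total_distance = consensus_ml(rankings)
print("estimated consensus rank vector:", rho_hat, "| total footrule distance:", total_distance)
```

Partial rankings or pairwise comparisons cannot be plugged in directly like this; in the Bayes-Mallows framework they are treated as incomplete data and augmented within the sampler, which is what makes posterior inference and calibrated uncertainty quantification possible.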

Title: T.B.A.
Speaker, Affiliation: Joshua Warren, Yale University
Date, Time: 19 April 2024, 15:15-16:15
Location: HG G 19.1
Abstract: T.B.A.

Note: the highlighted event marks the next upcoming event. If you wish, you can subscribe to the iCal/ICS calendar.
