[Statlist] Next talks on Friday, November 30, 2012 with Sebastian Reich, Universität Potsdam and Aad van der Vaart, University of Leiden

Cecilia Rey  rey at stat.math.ethz.ch
Mon Nov 26 11:29:54 CET 2012


ETH and University of Zurich

Profs. P. Buehlmann - L. Held - H.R. Kuensch -
M. Maathuis - S. van de Geer - M. Wolf

***********************************************************
We are glad to announce the following talks
Friday, November 30, 2012, HG G 19.1

**********************************************************
by Sebastian Reich, Universität Potsdam
from 15.15h to  16.00h

- 16.00 to 16.15 short break

and by Aad van der Vaart, University of Leiden
from 16.15h to 17.00h


Title and abstract of Sebastian Reich:

Bayesian inference and sequential filtering: An optimal coupling of measures perspective

Sequential filtering relies on the propagation of uncertainty under given model dynamics within a Monte Carlo (MC) setting, combined with the assimilation of observations using Bayes' theorem. The recursive application of Bayes' theorem within a dynamic MC framework poses major computational challenges. The popular class of sequential Monte Carlo methods (SMCMs) relies on a proposal step and an importance resampling step.
However, SMCMs are subject to the curse of dimensionality, and alternative methods are needed for filtering in high dimensions. The ensemble Kalman filter (EnKF) has emerged as a promising alternative to SMCMs, but it is also known to lead to asymptotically inconsistent results.
Following an introduction to sequential filtering, I will discuss a McKean approach to Bayesian inference and its implementation using optimal couplings. Applying this approach to sequential filtering leads to new perspectives on both SMCMs and EnKFs, as well as to novel filter algorithms.
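
For readers unfamiliar with the proposal/importance-resampling structure of SMCMs mentioned in the abstract, the following is a minimal illustrative sketch (in Python) of a bootstrap particle filter on a toy one-dimensional linear-Gaussian state-space model. It is not the speaker's McKean/optimal-coupling approach; the model, its parameters (a, sig_x, sig_y), and the ensemble size are assumptions chosen purely for illustration.

# A minimal sketch, not the speaker's method: bootstrap particle filter
# (an SMCM) on an assumed toy 1-D linear-Gaussian state-space model.
import numpy as np

rng = np.random.default_rng(0)

# Toy model (all parameters are illustrative assumptions):
#   x_t = 0.9 * x_{t-1} + N(0, 0.5^2)   (model dynamics)
#   y_t = x_t + N(0, 1.0^2)             (observation)
a, sig_x, sig_y = 0.9, 0.5, 1.0
T, N = 50, 1000                          # time steps, number of particles

# Simulate synthetic data from the toy model
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + sig_x * rng.standard_normal()
    y[t] = x_true[t] + sig_y * rng.standard_normal()

# Bootstrap particle filter
particles = rng.standard_normal(N)       # initial ensemble
est = np.zeros(T)
for t in range(1, T):
    # Proposal step: propagate the particles through the model dynamics
    particles = a * particles + sig_x * rng.standard_normal(N)
    # Assimilation via Bayes' theorem: weight by the observation likelihood
    logw = -0.5 * ((y[t] - particles) / sig_y) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    # Importance resampling: redraw particles proportionally to their weights
    particles = rng.choice(particles, size=N, p=w)
    est[t] = particles.mean()

print("RMSE of filter mean:", np.sqrt(np.mean((est - x_true) ** 2)))

In higher-dimensional states the weights in such a filter tend to degenerate (the curse of dimensionality referred to above), which is what motivates EnKF-type and coupling-based alternatives.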

Title and abstract of Aad van der Vaart:

Nonparametric Credible Sets

Bayesian nonparametric procedures for function estimation (densities, regression functions, drift functions, etc.) have been shown to perform well, if some care is taken in the choice of the prior. Many nonparametric priors do not "wash out" as the number of data points increases, unlike for finite-dimensional parameters, but by introducing hyperparameters they can give reconstructions that adapt to the properties of large classes of true underlying functions, similar to the best non-Bayesian procedures for function estimation. Besides a reconstruction, a posterior distribution also gives a sense of the remaining uncertainty about the true parameter, through its spread. In practice, "credible sets", which are central sets of prescribed posterior probability, are often treated as if they are confidence sets. We present some results that show that this practice can be justified, but also results that show that it can be extremely misleading. The situation is particularly delicate if the prior is adapted through hyperparameters (by either empirical or hierarchical Bayes). General, non-Bayesian difficulties with nonparametric confidence sets play an important role in the difficulties that arise. Although the message of the results is thought to be general, the talk will be limited to the special case of prior distributions furnished by Gaussian processes.
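
As a small illustration of the "central credible set" referred to in the abstract (and not of the speaker's results), the following Python sketch reads an equal-tailed 95% credible interval off posterior samples in an assumed conjugate normal toy model; the prior variance tau2, sample size n, and true parameter value are illustrative assumptions. Whether such a set also has frequentist coverage is exactly the question the talk addresses.

# A minimal sketch, illustrative only: an equal-tailed ("central") credible
# interval of prescribed posterior probability, read off posterior samples.
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy setting: y_1..y_n ~ N(theta, 1), prior theta ~ N(0, tau2).
tau2, n = 2.0, 25
theta_true = 0.7
y = theta_true + rng.standard_normal(n)

# The posterior is normal (conjugacy); we draw samples to mimic MCMC output.
post_var = 1.0 / (n + 1.0 / tau2)
post_mean = post_var * y.sum()
samples = post_mean + np.sqrt(post_var) * rng.standard_normal(10_000)

# Central 95% credible set: the interval between the 2.5% and 97.5%
# posterior quantiles, so it carries posterior probability 0.95.
lo, hi = np.quantile(samples, [0.025, 0.975])
print(f"95% credible interval: ({lo:.3f}, {hi:.3f}); true theta = {theta_true}")

In this finite-dimensional toy example the prior washes out and the interval behaves like a confidence interval; the abstract concerns the nonparametric case, where this correspondence can fail.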


The abstracts can also be found here:  http://stat.ethz.ch/events/research_seminar
_______________________________________________
Statlist mailing list
Statlist at stat.ch
https://stat.ethz.ch/mailman/listinfo/statlist
==========================================================================
