ETH Zürich
Seminar for Statistics

Research Seminar


Time/Place: every Friday at 3.15 pm in the Main Building of ETH, HG G 19.1

Spring Semester 2014


Date Speaker Title Time Location
21-feb-2014 (fri)
Valen Johnson
Uniformly most powerful Bayesian tests and the reproducibility of scientific research 15:15-16:00 HG G 19.1
Abstract: Uniformly most powerful Bayesian tests are defined and compared to classical uniformly most powerful tests. By equating the rejection regions of these two tests, an equivalence between Bayes factors based on these tests and frequentist p-values is illustrated. The implications of this equivalence for the reproducibility of scientific research are examined. Approximately uniformly most powerful Bayesian tests are described for t tests, and the power of these tests is compared to ideal Bayes factors (defined by determining the best test alternative for each true value of the parameter), as well as to Bayes factors obtained using the true parameter value as the alternative. Interpretations and asymptotic properties of these Bayes factors are also discussed.
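The equivalence between the two rejection regions can be made concrete in the simplest setting. The following is my own toy sketch, not material from the talk, assuming a one-sided test of a normal mean with known unit variance:

```python
import math

# Toy illustration (not the speaker's code): one-sided test of a normal
# mean, H0: mu = 0 vs H1: mu = delta, known variance 1, n observations.
# The Bayes factor for a point alternative delta is
#   BF10 = exp(n * (xbar * delta - delta**2 / 2)).
# A uniformly most powerful Bayesian test picks delta to maximise
# P(BF10 > gamma) for every true mean simultaneously; in this setting
# that gives delta* = sqrt(2 log(gamma) / n).

def umpbt_delta(gamma, n):
    return math.sqrt(2.0 * math.log(gamma) / n)

def bayes_factor(xbar, delta, n):
    return math.exp(n * (xbar * delta - delta ** 2 / 2.0))

# The rejection region {BF10 > gamma} is {sqrt(n) * xbar > sqrt(2 log gamma)}.
# Matching it to a one-sided z-test at level alpha gives
#   gamma = exp(z_alpha**2 / 2),
# so a result that is just significant at p = 0.05 (z = 1.645)
# corresponds to a Bayes factor of only about 3.87 against the null.
z_alpha = 1.645
gamma = math.exp(z_alpha ** 2 / 2.0)
print(round(gamma, 2))  # ~3.87
```

This small number is one way to see the reproducibility point in the abstract: evidence that looks decisive on the p-value scale is fairly weak on the Bayes-factor scale.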

Valen Johnson (Texas A&M University)

28-feb-2014 (fri)
Tom Claassen
FCI+ or Why learning sparse causal models is not NP-hard 15:15-16:00 HG G 19.1
Abstract: Causal discovery lies at the heart of most scientific research today. It is the science of identifying the presence or absence of cause-effect relations between variables in a model. Building up such a causal model from (purely) observational data can be hard, especially when latent confounders (unobserved common causes) may be present. For example, it is well known that learning a minimal Bayesian network (BN) model over a (sub)set of variables from an underlying causal DAG is NP-hard, even for sparse networks with node degree bounded by k. Given that finding a minimal causal model is more complicated than finding a minimal DAG, it was often tacitly assumed that causal discovery in general was NP-hard as well. Indeed the famous FCI algorithm, long the only provably sound and complete algorithm in the presence of latent confounders and selection bias, has worst-case running time that is exponential in the number of nodes N, even for sparse graphs.
Perhaps surprisingly, then, it turns out that we can exploit the structure in the problem to reconstruct the correct causal model with worst-case N^(2k+4) independence tests, i.e. polynomial in the number of nodes. In this talk I will present the FCI+ algorithm as the first sound and complete causal discovery algorithm that implements this approach. It does not solve an NP-hard problem, and it does not contradict any known hardness results: it just shows that causal discovery is perhaps more complicated, but not as hard as learning a minimal BN. In practice the running time remains close to the PC limit (without latent confounders, order k*N^(k+2), similar to RFCI). Current research aims to tighten complexity bounds and further optimize the algorithm.
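To give a feel for where the polynomial test count comes from, here is a minimal sketch (my own illustration, not the FCI+ implementation) of the skeleton phase shared by PC/FCI-style algorithms: conditional-independence tests with conditioning sets grown up to the degree bound k.

```python
import itertools
import numpy as np

# Minimal sketch (not the FCI+ implementation) of the skeleton phase of
# PC/FCI-style algorithms: repeatedly test X_i _||_ X_j | S for
# conditioning sets S drawn from the current neighbours, growing |S|.
# With node degree bounded by k, the number of such tests is polynomial.

def fisher_z_indep(data, i, j, S, z_crit=2.576):
    """Test X_i _||_ X_j | S via partial correlation and Fisher's z."""
    idx = [i, j] + list(S)
    P = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    r = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])   # partial correlation
    n = data.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(S) - 3)
    return abs(z) < z_crit                       # ~ alpha = 0.01 two-sided

def skeleton(data, k):
    p = data.shape[1]
    adj = {i: set(range(p)) - {i} for i in range(p)}
    for size in range(k + 1):                    # grow |S| up to k
        for i in range(p):
            for j in list(adj[i]):
                for S in itertools.combinations(adj[i] - {j}, size):
                    if fisher_z_indep(data, i, j, S):
                        adj[i].discard(j)
                        adj[j].discard(i)        # drop edge i - j
                        break
    return adj

# Toy data from the chain X1 -> X2 -> X3: the edge 1-3 should vanish
# once we condition on X2.
rng = np.random.default_rng(0)
x1 = rng.normal(size=5000)
x2 = x1 + rng.normal(size=5000)
x3 = x2 + rng.normal(size=5000)
data = np.column_stack([x1, x2, x3])
print(skeleton(data, k=1))
```

The point of FCI+ is that the extra work needed to handle latent confounders can be organised so that the total number of such tests stays of this polynomial order.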

Tom Claassen (Radboud University Nijmegen, The Netherlands)

21-mar-2014 (fri)
Eric Gautier
Uniform confidence sets for high dimensional regression and instrumental regression via linear programming 15:15-16:00 HG G 19.1
Abstract: In this talk we present a one-stage method to compute joint confidence sets for the coefficients of a high-dimensional regression with random design under sparsity. The confidence sets have finite-sample validity and are robust to non-Gaussian errors of unknown variance and to heteroscedastic errors. Nonzero coefficients can be arbitrarily close to zero. This extends previous work with Alexandre Tsybakov, where we relied on a conic program to obtain joint confidence sets and estimates from a pivotal procedure. The method we present relies only on linear programming, which is important for dealing with high-dimensional models. We will explain how this method extends to linear models with regressors that are correlated with the error term (called endogenous regressors), as is often the case in econometrics. The procedure relies on the use of so-called instrumental variables, and the method is then robust to identification issues and weak instruments.
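The pivotal procedure in the talk is more refined, but the basic mechanism of "sparse estimation by linear programming" can be shown with the classical Dantzig selector, a sketch of my own (not the speaker's method): minimize the l1 norm of the coefficients subject to a sup-norm bound on the correlation between regressors and residuals.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of l1 estimation as a linear program (the classical Dantzig
# selector, my illustration, not the speaker's pivotal procedure):
#   min ||beta||_1  subject to  ||X'(y - X beta)||_inf <= lam.
# Writing |beta_i| <= u_i turns this into an LP in (beta, u).

def dantzig_selector(X, y, lam):
    n, p = X.shape
    G, b = X.T @ X, X.T @ y
    c = np.concatenate([np.zeros(p), np.ones(p)])  # minimise sum(u)
    A_ub = np.block([
        [ G,          np.zeros((p, p))],           #  G beta       <= b + lam
        [-G,          np.zeros((p, p))],           # -G beta       <= lam - b
        [ np.eye(p), -np.eye(p)       ],           #  beta_i - u_i <= 0
        [-np.eye(p), -np.eye(p)       ],           # -beta_i - u_i <= 0
    ])
    b_ub = np.concatenate([b + lam, lam - b, np.zeros(p), np.zeros(p)])
    bounds = [(None, None)] * p + [(0, None)] * p  # beta free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:p]

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
beta_true = np.zeros(10)
beta_true[0] = 3.0                                 # sparse truth
y = X @ beta_true + 0.1 * rng.normal(size=100)
beta_hat = dantzig_selector(X, y, lam=2.0)
print(np.round(beta_hat, 2))
```

Because the whole problem is an LP, it scales to high-dimensional models; the talk's contribution is to obtain joint confidence sets from a procedure of this kind that is also pivotal.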

Eric Gautier (ENSAE-CREST, Paris)

4-apr-2014 (fri)
Alessio Sancetta
A Nonparametric Estimator for the Covariance Function of Functional Data 15:15-16:00 HG G 19.1
Abstract: Many quantities of interest in economics and finance can be represented as partially observed functional data. Examples include structural business cycle estimation, the implied volatility smile, and the yield curve. Having embedded these quantities into continuous random curves, estimation of the covariance function is needed to extract factors, perform dimensionality reduction, and conduct inference on the factor scores. A series expansion for the covariance function is considered. Under summability restrictions on the absolute values of the coefficients in the series expansion, an estimation procedure that is resilient to overfitting is proposed. Under certain conditions, the rate of consistency for the resulting estimator is nearly the parametric rate when the observations are weakly dependent. When the domain of the functional data is K(> 1)-dimensional, the absolute summability restriction on the coefficients avoids the so-called curse of dimensionality. As an application, a Box-Pierce statistic to test independence of partially observed functional data is derived. Simulation results and an empirical investigation of the efficiency of the Eurodollar futures contracts on the Chicago Mercantile Exchange are included.
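The series-expansion idea can be sketched in a few lines. The following is a schematic of my own (basis, threshold rule, and toy data are all assumptions, not the estimator in the talk): expand the empirical covariance in a tensor cosine basis and shrink small coefficients, which is the kind of regularisation an absolute-summability restriction licences.

```python
import numpy as np

# Schematic sketch (assumptions mine, not the talk's estimator):
# expand the covariance function of curves X(t) in a tensor basis,
#   C(s, t) ~ sum_{j,l} theta_{jl} phi_j(s) phi_l(t),
# and zero out small coefficients to guard against overfitting.

def cosine_basis(t, J):
    """phi_0 = 1, phi_j(t) = sqrt(2) cos(pi j t) on [0, 1]."""
    B = np.ones((len(t), J))
    for j in range(1, J):
        B[:, j] = np.sqrt(2.0) * np.cos(np.pi * j * t)
    return B

def covariance_estimate(curves, t, J=8, thresh=0.05):
    B = cosine_basis(t, J)                  # m x J basis matrix
    C_emp = np.cov(curves, rowvar=False)    # m x m empirical covariance
    P = np.linalg.pinv(B)                   # least-squares projection
    theta = P @ C_emp @ P.T                 # J x J coefficient array
    theta[np.abs(theta) < thresh] = 0.0     # hard-threshold small terms
    return B @ theta @ B.T                  # smoothed covariance

# Toy curves: X(t) = a * sqrt(2) cos(pi t) with random amplitude a,
# observed on a grid with small measurement noise.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
a = rng.normal(size=(200, 1))
curves = a * (np.sqrt(2) * np.cos(np.pi * t)) + 0.1 * rng.normal(size=(200, 50))
C_hat = covariance_estimate(curves, t)
```

For a K-dimensional domain one would use a K-fold tensor basis, and it is the summability restriction on the coefficients that keeps the number of effective terms from exploding with K.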


Alessio Sancetta (Royal Holloway, University of London)

9-may-2014 (fri)
Iain Currie
GLAM: Generalized Linear Array Models 15:15-16:00 HG G 19.1
Abstract: A Generalized Linear Array Model (GLAM) is a generalized linear model where the data lie on an array and the model matrix can be expressed as a Kronecker product. GLAM is conceptually attractive since its high-speed, low-footprint algorithms exploit the structure of both the data and the model. GLAMs have been applied in mortality studies, density estimation, spatial-temporal smoothing, variety trials, etc. In this talk we

(1) describe the GLAM ideas and algorithms in the setting of the original motivating example, a two-dimensional smooth model of mortality,

(2) give an extended discussion of a recent application to the Lee-Carter model, an important model in the forecasting of mortality.

Currie, I. D., Durban, M. and Eilers, P. H. C. (2006). Generalized linear array models with applications to multidimensional smoothing. Journal of the Royal Statistical Society, Series B, 68, 259-280.
DOI: https://doi.org/10.1111/j.1467-9868.2006.00543.x

Currie, I. D. (2013). Smoothing constrained generalized linear models with an application to the Lee-Carter model. Statistical Modelling, 13, 69-93.
DOI: https://doi.org/10.1177/1471082X12471373
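The "low-footprint" claim rests on never forming the Kronecker-product model matrix explicitly. A minimal numpy sketch of the identity GLAM exploits (my illustration, not the authors' code; the example dimensions are arbitrary):

```python
import numpy as np

# The computational heart of GLAM-style algorithms (sketch, assumptions
# mine): when the model matrix is a Kronecker product A (x) B, the
# identity  (A (x) B) vec(X) = vec(B X A')  lets us work on the small
# coefficient array directly, without ever building the big matrix.

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 3))   # e.g. basis over one margin (say, years)
B = rng.normal(size=(5, 2))   # e.g. basis over the other margin (ages)
X = rng.normal(size=(2, 3))   # coefficient array

# Naive: form the 20 x 6 Kronecker product and multiply a vectorised X.
slow = np.kron(A, B) @ X.flatten(order="F")
# GLAM-style: two small matrix products, then vectorise column-major.
fast = (B @ X @ A.T).flatten(order="F")
print(np.allclose(slow, fast))  # True
```

On realistic mortality arrays (hundreds of ages by hundreds of years) the difference between the two routes is what makes the two-dimensional smoothing in reference (1) feasible.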

Iain Currie (Heriot-Watt University, Edinburgh)

23-may-2014 (fri)
Elisabeth Gassiat
tba 15:15-16:00 HG G 19.1
Abstract: tba

Elisabeth Gassiat (Université Paris-Sud)

23-may-2014 (fri)
Jane L. Hutton
Chain Event Graphs for Informative Missingness 16:30-17:15 HG G 19.1

Jane L. Hutton (Department of Statistics, University of Warwick, UK.)

Further information:

Mailing list: Would you like to receive notice of these presentations via e-mail? Please subscribe here:



© 2014 Mathematics Department