ZüKoSt: Seminar on Applied Statistics
Time/Place: every Thursday at 4:15 pm in the Main Building of ETH, room HG G 19.1
Spring Semester 2014
20 Feb 2014 (Thu) 
Felix Franke

The curse of dimensionality in neuroscience. How to extract single neuronal activity from multielectrode recordings.

16:15–17:00 
HG G 19.1 
Abstract: 
Extracellular recordings are one of the workhorses of neuroscience. Here, one or multiple electrodes are brought close to the cell bodies of neurons to measure their electrical activity. Neural activity can then be related to behavioral parameters or external stimuli to infer the function and mechanics of the underlying neural networks. However, despite its importance and the considerable amount of research directed towards it, extracting neural activity from extracellular recordings, a process called "spike sorting", remains one of the bottlenecks of neuroscience, and many laboratories still rely on custom-made software with a large human component in the analysis. This not only costs expensive human resources; manual spike sorting was also shown to lead to high error rates and, depending on who did the analysis, to idiosyncratic biases in the resulting data. Furthermore, since the amount of recorded data is increasing dramatically, manual spike sorting will soon no longer be a viable option. I will discuss the approaches our lab has taken to solve this problem, highlight their shortcomings and hint at a potentially better solution. 
Speakers: 
Felix Franke
(BSSE, ETH Zürich)


6 Mar 2014 (Thu) 
David Rossell

RNA-seq and alternative splicing. A high-dimensional estimation, model selection & experimental design problem.

16:15–17:00 
HG G 19.1 
Abstract: 
Gene expression in general, and alternative splicing (AS) in particular, is a phenomenon of great biomedical relevance. For instance, AS differentiates humans from simpler organisms and is involved in multiple diseases such as cancer, as well as in malfunctions at the cellular level. Although high-throughput sequencing now makes it possible to study AS at full resolution, having adequate statistical methods to design and analyze such experiments remains a challenge.
We propose a Bayesian model to estimate the expression of known variants (an estimation problem), find variants de novo (a model selection problem) and design RNA-seq experiments. The model captures several experimental biases and uses novel data summaries that preserve more information than the current standard. Regarding model selection, a critical challenge is that the number of possible models increases super-exponentially with gene complexity (measured by the number of exons). It is therefore paramount to elicit prior distributions that are effective at inducing parsimony. We use non-local priors on model-specific parameters, which improve both parameter estimation and model selection. The model space prior is derived from the available genome annotations, so that it represents the current state of knowledge.
Compared to three popular methods, our approach reduces the MSE severalfold, increases the correlation between experimental replicates and is efficient at finding previously unknown variants. By using posterior predictive simulation, we compare several experimental setups and sequencing depths to indicate how best to continue experimentation. Overall, the framework illustrates the value of incorporating careful statistical considerations when analyzing RNA-sequencing data.

Speakers: 
David Rossell
(University of Warwick)


13 Mar 2014 (Thu) 
Michael Amrein

The Backfiller: simulation-based imputation in multivariate financial time series

16:15–17:00 
HG G 19.1 
Abstract: 
UBS predicts the risk measure 1-day Value-at-Risk (VaR) using historical simulation. For this, the daily profit-and-loss (PnL) of a financial asset is represented as a function of current market data and daily risk factor returns, e.g. returns of equity prices, interest rates or foreign exchange rates. The distribution of these risk factor returns for the next day is assumed to follow the empirical multivariate distribution of the daily risk factor returns over a window of the past 1305 trading days. The VaR of a portfolio consisting of several assets is then given by a quantile of the resulting portfolio PnL distribution.
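The quantile step of historical simulation can be sketched in a few lines. This is a toy illustration only: the risk factor returns are simulated noise and the linear PnL function with made-up sensitivities stands in for the real (typically nonlinear) asset revaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily risk factor returns over the 1305-day window
# (e.g. equity, interest rate, FX returns), here just simulated noise.
returns = rng.normal(0.0, 0.01, size=(1305, 3))

# Hypothetical linear PnL function: portfolio sensitivity to each factor.
sensitivities = np.array([1e6, -5e5, 2e5])  # currency units per unit return
pnl = returns @ sensitivities               # one PnL scenario per window day

# 1-day 99% VaR: the loss at the 1% quantile of the empirical PnL distribution.
var_99 = -np.quantile(pnl, 0.01)
print(f"1-day 99% VaR: {var_99:.2f}")
```

The key point is that the VaR is read directly off the empirical scenario distribution, which is why a single missing risk factor return invalidates that day's scenario.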
Missing values (singlets or short runs) are common in historical risk factor data due to foreign holidays or improper data collection, and for some risk factors only a limited data history exists. The calculation of the portfolio PnLs usually involves many risk factors. If just one of these risk factors is unobserved on a specific day in the window, the corresponding portfolio PnL cannot be evaluated.
To address the problem, we use a Monte Carlo method to impute ("backfill") the missing risk factor returns. First, a statistical model featuring time-varying volatility and correlation across assets is fitted. Second, the missing values are simulated conditional on the observed values, based on the estimated model. As a result, the imputed values are consistent with the data history. Furthermore, an adapted version of the method can be used to detect outliers in the data.
In the talk, we will discuss the main features of the method and show some applications which, given the generic nature of the imputation/detection problem, are of course not limited to VaR calculation. 
Speakers: 
Michael Amrein
(UBS AG, Zürich)


27 Mar 2014 (Thu) 
Kaspar Rufibach

Updating Bayesian predictive power after a blinded interim analysis

16:15–17:00 
HG G 19.1 
Abstract: 
Bayesian predictive power is the expectation of the probability of meeting the primary endpoint of a clinical trial, or any statistical test, at the final analysis. The expectation is computed with respect to a distribution over the true underlying effect, and Bayesian predictive power is a way of quantifying the success probability for the trial sponsor while the trial is still running. The existing framework typically assumes that once the trial is not stopped at an interim analysis, Bayesian predictive power is updated with the resulting interim estimate. However, in blinded Phase III trials, typically an independent committee looks at the data and no effect estimate is revealed to the sponsor after passing the interim analysis. Instead, the sponsor only knows that the effect estimate was between predefined futility and efficacy boundaries. We show how Bayesian predictive power can be updated based on such knowledge alone and illustrate potential pitfalls of the concept.
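The idea of conditioning only on "the interim estimate was between the boundaries" can be sketched by Monte Carlo in a toy normal-normal setting. All numbers below are invented, and the sketch ignores the correlation between interim and final estimates in a real group-sequential trial; it is not the authors' exact framework.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical normal-normal setup (illustrative values only).
prior_mean, prior_sd = 0.3, 0.2      # prior on the true effect delta
se_interim, se_final = 0.15, 0.10    # standard errors of the estimates
fut_low, eff_high = 0.05, 0.45       # interim continuation region
z_crit = 1.96                        # final-analysis critical value

n = 200_000
delta = rng.normal(prior_mean, prior_sd, n)       # draw true effects
interim = rng.normal(delta, se_interim)           # simulate interim estimates

# Condition only on the blinded knowledge: the interim estimate
# fell between the futility and efficacy boundaries.
passed = (interim > fut_low) & (interim < eff_high)

# Simulate final estimates for the retained effects and count successes.
# (Toy simplification: final estimate drawn independently given delta.)
final = rng.normal(delta[passed], se_final)
bpp = np.mean(final / se_final > z_crit)
print(f"updated Bayesian predictive power: {bpp:.3f}")
```

The interval conditioning is what distinguishes this update from the usual one: the prior over the effect is reweighted by how plausible each effect makes a "pass" at interim, without ever revealing the interim estimate itself.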
This is joint work with Markus Abt and Paul Jordan, both Roche Biostatistics, Basel. 
Speakers: 
Kaspar Rufibach
(Roche Biostatistics Oncology, Basel)


15 May 2014 (Thu) 
Andreas Krause

tba

16:15–17:00 
HG G 19.1 
Abstract: 
tba 
Speakers: 
Andreas Krause
(Department of Computer Science, ETH Zürich)


Further information: sekretariat@stat.math.ethz.ch
Mailing list: Would you like to receive notice of these presentations via email? Register here