[R-sig-ME] Poor man's metaregression with lmer ?

Emmanuel Charpentier charpent at bacbuc.dyndns.org
Fri May 22 17:09:58 CEST 2009


Dear List,

I have a curious problem: I want to aggregate results from
*observational* studies (the only ones available on my particular
subject), assess the possible effect of a few (boolean) factors, and
compare the results of a single (new) study, which are extremely
different from what is published, to this aggregation.

The object of study is the failure rate of a certain medical device. The
current literature on the subject is extremely poor: what can be (more
or less) assessed in each study is the participants' sex ratio, the
presence or absence of patients in three vaguely defined age groups,
gross device characteristics (small/medium/large, titanium/alloy), the
number of patients, the number of devices (more than one per patient is
quite likely), and the number of failures.

The first goal (aggregation) isn't very difficult, but...

One obvious solution is mima: transform each failure rate f into its
logit, log(f/(1-f)), with estimated variance 1/(n*f+0.5)+1/(n*(1-f)+0.5).
With a hiccup: three of the series have no failures or no successes
(yes, both can happen...), which gives an infinite logit. Ouch!
Excluding them is not acceptable, and adding 0.5 to all counts (aka
"continuity correction") will in any case be judged passé by the panel...
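To make the hiccup concrete, here is a base-R sketch on made-up counts (the numbers are purely illustrative, not from any real study): a zero-failure series yields an infinite logit, while the 0.5 continuity correction keeps everything finite.

```r
# Hypothetical per-study counts: x failures out of n devices (made-up numbers).
x <- c(3, 0, 12, 7)     # note the zero-failure study in position 2
n <- c(40, 25, 150, 60)

f <- x / n
logit_raw <- log(f / (1 - f))       # -Inf for the zero-failure study

# With the 0.5 continuity correction applied to all counts:
f_cc     <- (x + 0.5) / (n + 1)
logit_cc <- log(f_cc / (1 - f_cc))  # finite everywhere
# Variance formula from above, using n*f = x:
var_cc   <- 1 / (x + 0.5) + 1 / (n - x + 0.5)

print(logit_raw)
print(cbind(logit_cc, var_cc))
```

Which is exactly why the zero-failure series cannot simply be fed to a logit-scale meta-analysis without some such adjustment.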

Another (ignorable?) problem with this approach is that, while most of
the effects are indisputably fixed (e.g. device material), others might
be considered random (device design...). I do not know how to specify
this in the (current) mima() function.

Another obvious solution is, of course, (g)lmer: fitting the null model
is as simple as:

null.mod <- lmer(cbind(Fail, NonFail) ~ 1 + (1 | StudyId), data = MyData,
                 family = binomial, subset = StudyId != "MyStudy")

The (Intercept) estimate of this model is, IMHO, an estimator of the
logit of the "true" (i.e. thought of as universal) failure rate of this
kind of device, which would solve my first problem.
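Back-transforming that intercept through the inverse logit gives the pooled failure-rate estimate. A base-R sketch with a made-up intercept value (with a real fit it would come from fixef(null.mod)["(Intercept)"]):

```r
# Hypothetical fixed-effect intercept on the logit scale (made-up value);
# in practice: fixef(null.mod)["(Intercept)"].
intercept <- -2.9

# plogis() is the inverse logit: exp(x) / (1 + exp(x))
pooled_rate <- plogis(intercept)
print(pooled_rate)   # about 0.052, i.e. a ~5% pooled failure rate
```

Note that this back-transforms the mean of the study-level logits, not the mean of the study-level rates; with between-study variance the two differ.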

And, for example, formally testing for an effect of device material (my
second problem) could be done (at least approximately) with:

Mat.mod <- update(null.mod, . ~ . + Mat)
anova(null.mod, Mat.mod, test = "Chisq")

Do you see problems with this approach?

The last problem is, IMHO, not really a problem: the "new" study
reports a failure rate more than twice what can be estimated from
previous studies. The "naive" (binomial) confidence intervals do not
even overlap! But I'm quite certain that on the panel, some dumbass
playing Sunday statistician (their name is legion in the medical field)
will *require* a p-value.

What about:

MyStud.null.mod <- lmer(cbind(Fail, NonFail) ~ 1 + (1 | StudyId),
                        data = MyData, family = binomial)
MyStud.mod <- update(MyStud.null.mod, . ~ . + I(StudyId == "MyStudy"))
anova(MyStud.null.mod, MyStud.mod, test = "Chisq")

Again: do you see problems with that?

Thanks in advance

					Emmanuel Charpentier

PS : Note to Wolfgang Viechtbauer, if he reads this : your new version
of mima() is, as you can see, eagerly awaited...
