[R-sig-ME] Questions on migrating from nlme to lme4
Martin Maechler
maechler at stat.math.ethz.ch
Fri Jun 22 18:05:29 CEST 2007
>>>>> "DM" == Dieter Menne <dieter.menne at menne-biomed.de>
>>>>> on Fri, 22 Jun 2007 09:28:33 +0000 (UTC) writes:
DM> Douglas Bates wrote:
>> Generally I recommend using mcmcsamp to produce "confidence intervals"
>> on the variance components. Approximate (and symmetric) confidence
>> intervals on the fixed effects are reasonably accurate but such
>> intervals for the random effects can be poor approximations.
DM> The problem is that referees who don't read the regular Douglas B.
DM> columns tend to say "mcmc ... ha?", and, after an explanation,
DM> "we do not publish poker games" (<- slightly paraphrased from the
DM> original comment).
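As a rough sketch of what the mcmcsamp() suggestion above looks like in
code (assuming the stock sleepstudy example from lme4, which is not from
this thread; note that mcmcsamp() was later removed from lme4, and recent
versions offer confint() for the same purpose):

    ## sketch only, not from the thread: interval estimates for the
    ## variance components of an lmer() fit on the sleepstudy data
    library(lme4)
    fm <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

    ## profile-likelihood intervals; the ".sig*" and ".sigma" rows are
    ## the random-effect standard deviations and the residual SD
    confint(fm, method = "profile")

    ## parametric bootstrap; like MCMC, the result depends on the RNG seed
    set.seed(1)
    confint(fm, method = "boot", nsim = 500)

    ## in the lme4 of 2007 one would have written, approximately:
    ##   samp <- mcmcsamp(fm, n = 1000)
    ##   HPDinterval(samp)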
Hmm, we are getting off-topic here, but I think these referees
are not quite fit for the 21st century.
MANY modern statistical procedures depend on random numbers to
some extent:
- Neural nets solve a high-dimensional minimization, and the
  solution depends on the random starting values.
  {Some silly people would therefore always use the same random
  seed before starting the nnet.}
- The good old K-means algorithm very often starts from random
  centers {and again: people use versions of the algorithm that,
  e.g., always take the same indices of observations as starting
  values ==> their algorithm depends on the *ordering* of the
  observations, which I think is worse; see the small K-means
  sketch after this list}
- All high-breakdown robust statistics procedures ...
- All K-fold cross-validation ...
- All bootstrapping / bagging / bragging / ...
depend on random (sub)sampling.
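To make the K-means point concrete, a minimal sketch (using the built-in
iris data purely for illustration): two runs that differ only in the
random seed can settle in different local optima, and the usual remedy is
many random restarts rather than a fixed seed.

    ## illustration only: two K-means runs differing only in the RNG seed
    ## can end in different local optima (different total within-SS)
    set.seed(1)
    km1 <- kmeans(iris[, 1:4], centers = 3)
    set.seed(2)
    km2 <- kmeans(iris[, 1:4], centers = 3)
    c(km1$tot.withinss, km2$tot.withinss)

    ## the usual remedy: many random restarts, keeping the best solution
    km.best <- kmeans(iris[, 1:4], centers = 3, nstart = 25)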
If referees ask researchers to refrain from all such methods,
a good journal editor should switch referees,
or a good author should switch to a better journal :-)
Martin