[R] maximum-likelihood-estimation with mle()
Ben Bolker
bbolker at gmail.com
Sun Sep 13 00:46:04 CEST 2015
peter dalgaard <pdalgd <at> gmail.com> writes:
>
> You are being over-optimistic with your starting values, and/or
> with constraints on the parameter space.
> Your fit is diverging in sigma for some reason known
> only to nonlinear-optimizer gurus...
>
> For me, it works either to put in an explicit
> constraint or to reparametrize with log(sigma).
[snip]
A few other strategies:
set.seed(101)
## simulate data from a linear model: y = 3x - 1 + Gaussian noise
x <- 1:10
y <- 3*x - 1 + rnorm(length(x), mean=0, sd=0.5)

library("stats4")
## negative log-likelihood for a Gaussian linear model
nLL <- function(a, b, sigma) {
    -sum(dnorm(y, mean=a*x+b, sd=sigma, log=TRUE))
}
fit <- mle(nLL, start=list(a=0, b=0, sigma=1), nobs=length(y))
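One is Peter's explicit-constraint route: switch to the L-BFGS-B
optimizer and put a lower bound on sigma. A minimal sketch (the
bound of 1e-6 and the name fit_c are just illustrative choices):

## box constraint keeps sigma strictly positive
fit_c <- mle(nLL, start=list(a=0, b=0, sigma=1),
             method="L-BFGS-B",
             lower=c(a=-Inf, b=-Inf, sigma=1e-6),
             nobs=length(y))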
Nelder-Mead is generally more robust than BFGS, although slower:
fit2 <- mle(nLL, start=list(a=0, b=0, sigma=1),
            method="Nelder-Mead", nobs=length(y))
The bbmle package can simplify things slightly (this repeats
Peter Dalgaard's log(sigma) transformation):
library("bbmle")
fit3 <- mle2(y ~ dnorm(mean=mu, sd=exp(logsigma)),
             parameters=list(mu~x),
             data=data.frame(x,y),
             start=list(mu=0, logsigma=0))
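If I have the naming right, the formula interface labels the
coefficients mu.(Intercept), mu.x, and logsigma, so sigma on the
original scale is

exp(coef(fit3)["logsigma"])  ## back-transform logsigma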
You can also profile sigma out of the likelihood:
## dnorm with sd profiled out
dnorm2 <- function(x, mean, log=FALSE) {
    ## plug in the conditional MLE of sigma: sqrt(SSQ/n)
    ssq <- sum((x-mean)^2)
    dnorm(x, mean, sd=sqrt(ssq/length(x)), log=log)
}
fit4 <- mle2(y ~ dnorm2(mean=mu),
             parameters=list(mu~x),
             data=data.frame(x,y),
             start=list(mu=0))
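Since sigma is profiled out, fit4 estimates only the mean
parameters; assuming the same coefficient naming as above, the
implied sigma-hat can be recovered from the fitted means:

cc <- coef(fit4)
mu.hat <- cc["mu.(Intercept)"] + cc["mu.x"]*x
sqrt(mean((y - mu.hat)^2))  ## profiled MLE of sigma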