[R] evaluating function over seq of values
Evan Cooch
evan.cooch at gmail.com
Mon Feb 13 17:41:10 CET 2017
The MWE (below) shows what I'm hoping to get some help with:
Step 1: specify the likelihood expression I want to evaluate using a
brute-force grid search (not that I would do this in practice, but it is
a convenient approach for explaining the idea in a class...).
Step 2: evaluate the likelihood at each of a sequence of values of N
(for this example, seq(80, 200, 1)).
Step 3: take the likelihood values at each N and (say) plot them.
I'm sure there is a clever way to vectorize all this, but my token
attempts at wrestling sapply into submission have failed me here. In my
MWE I use a simple loop, which has the advantages of working and of
being completely transparent about what it is doing. For teaching
purposes this is probably fine, but I'm curious how I could accomplish
the same thing without the loop (one possible sapply-based version is
sketched after the MWE).
Thanks in advance...
-----<MWE code below>-----
# ML estimation by simple grid search
rm(list = ls())
# note: base optim() is used below, so the optimx package is not needed
# set up the log-likelihood function; N is not an argument here, it is
# read from the global environment (the loop below sets it)
f_like <- function(par) {
  p1 <- par[1]
  p2 <- par[2]
  p3 <- par[3]
  p4 <- par[4]
  lfactorial(N) - lfactorial(N - 79) +
    (30*log(p1) + (N - 30)*log(1 - p1)) +
    (15*log(p2) + (N - 15)*log(1 - p2)) +
    (22*log(p3) + (N - 22)*log(1 - p3)) +
    (45*log(p4) + (N - 45)*log(1 - p4))
}
# maximize the likelihood with optim() at each value of N in a loop
# (works, but guessing there is an easier way using sapply or some such...)
count <- 1
dat <- matrix(0, nrow = length(seq(80, 200, 1)), ncol = 3)
for (N in seq(80, 200, 1)) {
  results_optx <- optim(c(0.2, 0.2, 0.2, 0.2), f_like,
                        method = "L-BFGS-B",
                        lower = c(0.005, 0.005, 0.005, 0.005),
                        upper = c(0.990, 0.995, 0.995, 0.995),
                        hessian = TRUE, control = list(fnscale = -1))
  like_mod <- results_optx$value   # maximized log-likelihood at this N
  dat[count, 1] <- count
  dat[count, 2] <- N
  dat[count, 3] <- like_mod
  count <- count + 1
}
plot(dat[, 2], dat[, 3], type = "l", bty = "n", xlim = c(79, 205),
     yaxs = "i", main = "likelihood profile", xlab = "N",
     ylab = "Likelihood")