[R] fitting lognormal censored data
Berend Hasselman
bhh at xs4all.nl
Mon Sep 3 11:21:04 CEST 2012
Please keep your replies on the R-help list by always cc-ing the list, as you were asked to do in a previous thread you started.
As I demonstrated in a previous reply, the R function optim() is perfectly capable of minimizing or maximizing your likelihood function.
Your attempts at using Newton's method for finding a maximum will only work if the starting point is (very) close to the solution.
In addition, using Newton's method to solve gradient(f) = 0 to find a minimum/maximum is not the best way to go about this: there is no guarantee that the Hessian at the initial point is positive definite (for a minimum).
You will be better off using functions such as optim() or nlminb().
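For example, here is a minimal sketch of that approach. The simulated data and the names tun, tcen and negll are my own illustrative assumptions (roughly mirroring your script below), not your actual model; optim() and nlminb() are in base R, so no extra packages are needed.

set.seed(123)
x    <- rlnorm(100, meanlog = 2, sdlog = 0.8)
cens <- rbinom(100, 1, 0.3)                  # 1 = right-censored (purely illustrative)
tun  <- x[cens == 0]                         # uncensored lifetimes
tcen <- x[cens == 1]                         # right-censored lifetimes

## Negative log-likelihood of the right-censored lognormal model;
## optim() and nlminb() minimize, so the log-likelihood is negated.
negll <- function(par) {
  mu <- par[1]
  s  <- par[2]
  if (s <= 0) return(1e10)                   # crude guard to keep the scale positive
  -(sum(dlnorm(tun, mu, s, log = TRUE)) +
      sum(plnorm(tcen, mu, s, lower.tail = FALSE, log.p = TRUE)))
}

fit1 <- optim(c(0.5, 0.5), negll)                          # Nelder-Mead by default
fit2 <- nlminb(c(0.5, 0.5), negll, lower = c(-Inf, 1e-8))  # box constraint keeps s > 0
fit1$par
fit2$par

Both should land on essentially the same estimates of mu and s without you having to supply a gradient or Hessian.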
I do not know much about the EM algorithm and I cannot judge whether what you are trying to do is correct or sensible.
For speeding up EM iterations you could have a look at the SQUAREM package; a sketch of its calling convention follows below.
I feel that you should consult an expert on this method.
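To illustrate only the calling convention of SQUAREM::squarem(), here is a self-contained toy example I made up: an EM for right-censored exponential data, not your lognormal cure model. squarem() takes the EM update as the fixed-point map fixptfn and (optionally) the objective to monitor as objfn.

library(SQUAREM)

set.seed(1)
times  <- rexp(200, rate = 0.4)       # event/censoring times
status <- rbinom(200, 1, 0.7)         # 1 = event observed, 0 = right-censored

## One EM iteration for the exponential rate lambda:
## E-step imputes a censored lifetime as times + 1/lambda,
## M-step sets lambda to n / sum(imputed lifetimes).
em_step <- function(lambda) {
  expected_t <- times + (1 - status) / lambda
  length(times) / sum(expected_t)
}

## Negative log-likelihood of the censored exponential model.
negloglik <- function(lambda) -(sum(status) * log(lambda) - lambda * sum(times))

fit <- squarem(par = 0.1, fixptfn = em_step, objfn = negloglik)
fit$par                               # accelerated EM estimate; the MLE is sum(status)/sum(times)

squarem() repeatedly applies the fixed-point map and extrapolates, so it usually needs far fewer EM updates than plain iteration.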
The code you have provided is not complete: you must be using packages such as numDeriv (for grad() and hessian()), but I cannot see any library() statements.
From the limited experiments I have done with your code, I get the impression that there is something wrong with the way the parameter theta is used/implemented.
Berend
On 01-09-2012, at 14:11, Salma Wafi wrote:
> Dear Berend,
> Thanks a lot for your response and help; it is very much appreciated. Sorry if my code was unreadable; please excuse this, since I am a new R user and there is no one in my area who knows about R.
> I am trying to estimate my parameters by computing the derivatives of the log-likelihood function and using the Newton-Raphson method, because I need to estimate another parameter that will be added to my likelihood function, and the information I have about this parameter is partially missing. So I want to estimate the parameters by MLE via the EM algorithm, which consists of two steps: an Expectation (E) step, which computes the expectation of the log-likelihood function using the observed data, followed by a Maximization (M) step, which computes the parameters that maximize the expected log-likelihood found in the E-step. I then need to obtain estimates from one Newton-Raphson step, use them to compute the next E-step, and so on until convergence is met.
> But because I face an "overestimation" problem, I tried to locate my mistake by estimating only the MLE for lognormal censored data. Furthermore, I tried to write my code as you suggested, using the grad and hessian of the log-likelihood in the Newton-Raphson update, but there too my estimates do not converge. Do you think the reason for this could be that the Hessian matrix is not positive definite? And in that case, how can we avoid it?
> I am sorry to disturb you. I work on my own and have lost more than 3 months trying to solve this problem. Below are, first, my attempt to run Newton-Raphson using the grad and hessian of the log-likelihood instead of the analytical derivatives, and second, my original code in which I try to estimate the parameters mu, s and theta by MLE via the EM algorithm.
> ################################### Calculating Newton-Raphson using grad and hessian of log likelihood ####################
> cur=curd=cen=cens=array(1,100)
> Cur=RealCur=realcensoring=realcured=array(1,20)
> ExpCure=Bias=RealCure=array(1,21)
>
> dat1<-data.frame(time=rlnorm( 100,2,0.8),Censored=rbinom(100,1,0.9),Cured=rbinom(100,1,.3))
>
> dat2<-dat1[order(dat1[,1]),] # order the data #
> for (i in 1:10)
> {
> dat2$Cured[i+90]=0 #Long term survivors/10 individuals#
> dat2$Censored[i+90]=0
> dat2$time[i+90]=dat2$time[90]
> }
> cens<-c(dat2$Censored) #censored status #
> curd<-c(dat2$Cured) #cured status #
> tim<-c(dat2$time) #lifetimes #
> L1<-length(cens) #number of observations#
> for (j in 1:L1)
> {
>   if ((cens[j]==1) & (curd[j]==0)) {
>     cen[j] <- 1
>     cur[j] <- 1
>   } else {
>     cen[j] <- cens[j]
>     cur[j] <- curd[j]
>   }
> }
> ####### My Data only with uncensored and right censored ####################
> data=data.frame(Ti=dat2$time,Censored=cen)  # use the ordered data, so times match the cen indicator #
>
> ########### Separating the data for simply using optim and Newton-Raphson ##
>
> dat2<-c(split(data[1:2],data$Censored==1)) # Split the data(cen/uncen) #
> tun<-c((dat2$'TRUE')$Ti) # Life times data set for uncensored #
> tcen<-c((dat2$'FALSE')$Ti) # Life times data set for censored #
> outm=outs=array()
> #################### loglikelihood function ############################
> ml <- function(par) {
>   mu <- par[1]
>   s  <- par[2]
>   sum(dlnorm(tun, mu, s, log=TRUE)) + sum(log(1 - plnorm(tcen, mu, s)))
> }
> ######################### initial values ######################
> mu=0.5
> s=0.5
> ################# grad & hessian matrix of ml ###################
> library(numDeriv)   # for grad() and hessian() #
> ml.grad    <- function(par) grad(ml, par)
> ml.hessian <- function(par) hessian(ml, par)
> pstart <- c(mu,s)
> ml(pstart)
> ml.grad(pstart)
> ml.hessian(pstart)
> for (i in 1:10)   ## 10 Newton iterations: p_new = p - solve(Hessian, gradient) ##
> {
> v=solve(ml.hessian(pstart),ml.grad(pstart))
> pstart=pstart-v
> mu[i+1]=pstart[1]
> s[i+1]=pstart[2]
> }
> outm=c(mu)
> outs=c(s) ##### there is no convergence ############
> #############################################################################
>
> Original code: estimating by MLE via the EM algorithm
> ############################ under same data ##############################
> outmu=outs1=outtheta=array()
> g=pc=array()
> ######################### initial values ######################
> mu=0.5
> s=0.5
> theta=0.5
> ##############################################
> for(w in 1:10) ### loop for EM algorithm ###
> {
> for (i in 1:length(tcen)) ## loop for computing the E step ##
> {
>
> pc[i]=((1-exp(-theta[w]))*
> (1-plnorm(tcen[i],mu[w],s[w])))/
> (exp(-theta[w])+((1-exp(-theta[w]))*(1-plnorm(tcen[i],mu[w],s[w]))))
> }
> g=c(pc)
> ################################## M step "maximizing log likelihood" ######################
> ml2 <- function(par) {
>   mu    <- par[1]
>   s     <- par[2]
>   theta <- par[3]
>   sum(dlnorm(tun, mu, s, log=TRUE) + log(1 - exp(-theta))) +
>     sum(g*log(1 - exp(-theta)) + g*log(1 - plnorm(tcen, mu, s)) - (1 - g)*theta)
> }
>
> ################# grad & hessian matrix of ml2 ###################
> ml2.grad    <- function(par) grad(ml2, par)     ### the problem here: why do grad and hessian have
>                                                 ### more rows and columns than the number of parameters? ###
> ml2.hessian <- function(par) hessian(ml2, par)
> pstart <- c(mu,s,theta)
> ml2(pstart)
> ml2.grad(pstart)
> ml2.hessian(pstart)
> for (i in 1:1) ####### compute one Newton-Raphson step #########
> {
> v=solve(ml2.hessian(pstart),ml2.grad(pstart))
> pstart=pstart-v ### the problem is the number of the estimators exceeds number of parameters ###
> }
> mu[w+1]=pstart[1]
> s[w+1]=pstart[2]
> theta[w+1]=pstart[3]
> } ## end loop w ###
> outmu=c(mu)
> outs1=c(s)
> outtheta=c(theta) #### No convergence ####
>
> Finally, I am very thankful to you and all the people who work on the Nabble website.
> Best regards.
> <computing newton rahson><estimationg using MLE via EM algorithm>