[R] Error in nlsModel

ProfJCNash profjcnash at gmail.com
Mon Oct 8 23:31:40 CEST 2018


I'm not the author of nlsModel, so would prefer not to tinker with it.

But "singular gradient" is a VERY common problem with nls() that is used
by nlsModel as I understand it. The issue is actually a singular
Jacobian matrix resulting from a rather weak approximation of the
derivatives (a simple forward approximation as far as I can determine,
and using a fairly large step (1e-7), though a choice I'd probably make
too for this approach.
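To make that concrete, here is a small sketch (mine, not code taken from
nls or nlsModel) that builds the forward-difference Jacobian for the
Weibull model in the quoted script at its starting values and looks at the
singular values; every object name below is just for illustration.

## Sketch: forward-difference Jacobian of the quoted Weibull model at start = c(1, 1)
## Model, doses and step size come from the quoted script and the note above.
Weibull <- function(tet1, tet2, x) 1 - exp(-exp(tet1 + tet2 * log10(x)))
conc  <- c(0.000135696, 0.000152971, 0.000172445,
           0.000194398, 0.000219146, 0.000247044)   # distinct non-zero doses
start <- c(tet1 = 1, tet2 = 1)
h  <- 1e-7                                           # forward-difference step
f0 <- Weibull(start["tet1"], start["tet2"], conc)
J  <- cbind((Weibull(start["tet1"] + h, start["tet2"], conc) - f0) / h,  # d/dtet1
            (Weibull(start["tet1"], start["tet2"] + h, conc) - f0) / h)  # d/dtet2
svd(J)$d   # the second singular value is far smaller than the first: the two
           # columns are nearly collinear, and when that ratio collapses to
           # numerical zero nlsModel declares the gradient singular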

Duncan Murdoch and I wrote nlsr to do analytic derivatives where
possible. If you can use that (i.e., extract the modeling part of
nlsModel and call nlxb() from nlsr directly), I suspect you will have
better luck. If you still get singularity, it likely means that you
really have parameters that are some combination of each other.
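For example, a minimal sketch of such a call, assuming the Weibull model
and the effectdata_without_controls data frame from the quoted script
below; note that nlxb() wants the model spelled out as a formula it can
differentiate symbolically, so the expression is written out rather than
wrapped in the Weibull() function, and log10(conc) is rewritten as
log(conc)/log(10) to stay within derivatives the differentiator certainly
knows.

## Sketch only: the quoted Weibull model fitted with nlsr::nlxb(), which uses
## analytic (symbolic) derivatives instead of a forward-difference Jacobian.
library(nlsr)
fit_weibull <- nlxb(effect ~ 1 - exp(-exp(tet1 + tet2 * log(conc)/log(10))),
                    data  = effectdata_without_controls,
                    start = c(tet1 = 1, tet2 = 1))
fit_weibull   # the printout includes the singular values of the Jacobian, so a
              # genuinely redundant parameterization shows up right away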

JN

On 2018-10-08 05:14 AM, Belinda Hum Bei Lin wrote:
> Hello,
> 
> It is my first time using RStudio and I am facing the error of
> "Error in nlsModel(formula, mf, start, wts) :
>   singular gradient matrix at initial parameter estimates"
> when I try to run my script. From what I read online, I understand that the
> error might be due to the parameters. However, I do not know how to choose
> the right set of parameters. Is there anyone who could advise me on how to
> do this?
> 
> Below are my script details:
> rm(list=ls()) #remove ALL objects
> cat("\014") # clear console window prior to new run
> Sys.setenv(LANG = "en") #Let's keep stuff in English
> Sys.setlocale("LC_ALL","English")
> 
> ##########
> #import necessary packages
> #########
> 
> ##To install the packages use the function install.packages. Installing is done once.
> #install.packages("ggplot2")
> #install.packages("minpack.lm")
> #install.packages("nlstools")
> 
> ##Activate the packages. This needs to be done every time before running the script.
> require(ggplot2)
> require(minpack.lm)
> require(nlstools)
> 
> 
> 
> #########
> #define the Weibull function
> #########
> Weibull<-function(tet1, tet2,x){
>   1-exp(-exp(tet1+tet2*log10(x)))
> }
> 
> #########
> ##define the inverse of the Weibull function. Put in effect and get concentration as output.
> #########
> iWeibull<-function(tet1,tet2,x){
>   10^((log(-log(1-x))-tet1)/tet2)
> }
> 
> 
> #########
> #define the Logit function
> #########
> Logit<-function(tet1, tet2,x){
>   1/(1+exp(-tet1-tet2*log10(x)))
> }
> 
> #########
> ##define the inverse of the Logit function
> #########
> iLogit<-function(tet1,tet2,x){
>   10^(-(log(1/x-1)+tet1)/tet2)
> }
> 
> #########
> #define the Probit function
> #########
> Probit<-function(tet1, tet2, x){
>   pnorm(tet1+tet2*(log10(x)))
> }
> 
> #########
> ##define the inverse of the Probit function
> #########
> iProbit<-function(tet1,tet2,x){
>   10^((qnorm(x)-tet1)/tet2)
> }
> 
> #########
> # Establish data to fit
> # data given here are the data for Diuron from the example datasets
> #
> # Of course one could also import an external datafile via e.g.
> # read.table, read.csv functions
> 
> ### example to choose a file for import with the read.csv function, where "," separates the columns,
> # header=TRUE tells R that the first row contains the titles of the columns, and
> # stringsAsFactors = FALSE specifies that the characters should not be converted to factors. For more info run ?read.csv
> effectdata<-read.csv(file.choose(), sep=",", stringsAsFactors = FALSE, header = TRUE)
> ?read.csv
> ###
> 
> #########
> conc<-c(0,
>         0,
>         0,
>         0,
>         0,
>         0,
>         0.000135696,
>         0.000135696,
>         0.000135696,
>         0.000152971,
>         0.000152971,
>         0.000152971,
>         0.000172445,
>         0.000172445,
>         0.000172445,
>         0.000194398,
>         0.000194398,
>         0.000194398,
>         0.000219146,
>         0.000219146,
>         0.000219146,
>         0.000247044,
>         0.000247044,
>         0.000247044
> )
> 
> effect<-c(5.342014355,
>           13.46249176,
>           -9.249022885,
>           -6.666486351,
>           1.00292152,
>           -3.891918402,
>           12.63136345,
>           -2.372582186,
>           8.601073479,
>           1.309926638,
>           0.772728968,
>           -7.01067202,
>           30.65306236,
>           28.10819667,
>           17.94875421,
>           73.00440617,
>           71.33593917,
>           62.23994217,
>           99.18897648,
>           99.05982514,
>           99.2325145,
>           100.2402872,
>           100.1276669,
>           100.1501468
> )
> 
> #build input dataframe
> effectdata<-data.frame(conc,effect)
> 
> #plot the data just to get a first glance of the data
> ggplot()+
>   geom_point(data=effectdata,aes(x=conc,y=effect), size = 5)+
>   scale_x_log10("conc")
> 
> 
> #delete controls
> effectdata_without_controls<-subset(effectdata,effectdata$conc>0)
> 
> 
> #save controls in a separate dataframe called effectdata_controls, which will be added to the ggplot in the end.
> #since you can't have 0 on a log scale we will give the controls a very, very low concentration of 0.0001 (not 100% correct, but it will not be seen in the final plot)
> effectdata_controls<-subset(effectdata,effectdata$conc==0)
> effectdata_controls$conc<-effectdata_controls$conc+0.0001
> 
> 
> 
> ########
> #fit data (without controls) using ordinary least squares
> #ordinary least squares is a method for estimating unknown parameters in statistics. The aim of the method is to minimize
> #the difference between the observed responses and the responses predicted by the approximation of the data.
> #nlsLM is from the minpack.lm package
> #nls = non-linear least squares
> ########
> nlsLM_result_Weibull<-nlsLM(effect~Weibull(tet1,tet2,conc), data=effectdata_without_controls, start=list(tet1=1,tet2=1))
> nlsLM_result_Logit<-nlsLM(effect~Logit(tet1,tet2,conc), data=effectdata_without_controls, start=list(tet1=1,tet2=1))
> nlsLM_result_Probit<-nlsLM(effect~Probit(tet1,tet2,conc), data=effectdata_without_controls, start=list(tet1=1,tet2=1))
> 
> Thanks a bunch!
> 
> Best Regards,
> Belinda
> Belinda *Hum* Bei Lin (Ms)
> National University of Singapore
> (e): belindahbl using gmail.com
> (c): +6581136079
> 



