[R-sig-ME] Assessing linearity

Andrew Kosydar drewdogy at uw.edu
Sun Oct 24 19:33:42 CEST 2010


Hi Mike,

Sounds as though you are debating model comparison techniques.
I would suggest multimodel inference if no single model is strongly
supported by the AIC values.  Burnham & Anderson published a book on
the technique, and they have two papers (2000 & 2002) that lay out the
theoretical framework and methodology clearly.
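For concreteness, the bookkeeping behind multimodel inference boils down to Akaike weights, i.e. the relative support for each model in the candidate set. A minimal base-R sketch (the AIC values here are hypothetical, just for illustration):

```r
# Akaike weights from a vector of AIC values (hypothetical numbers).
# weight_i = relative likelihood of model i within the candidate set.
aics <- c(linear = 210.3, quadratic = 208.1, cubic = 209.7)  # hypothetical
delta <- aics - min(aics)                          # AIC difference from best
weights <- exp(-delta / 2) / sum(exp(-delta / 2))  # Akaike weights, sum to 1
round(weights, 3)
```

Models whose weights are all of similar magnitude are exactly the situation where averaging over models, rather than picking one, is recommended.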

Good luck!

Andrew



-- 
Andrew Kosydar, PhD
drewdogy at uw.edu

On Sun, Oct 24, 2010 at 12:43 PM, Mike Lawrence <Mike.Lawrence at dal.ca> wrote:
> Ah, this looks promising! So how does this sound:
>
> I typically assess the evidence for a relationship between the
> predictor and response variables by comparing the AIC values for a
> model including the predictor to a model without it. In the case of
> grade_as_numeric, I'd do:
>
> fit_null = lmer(
>    formula = response ~ (1|individual)
>    , data = my_data
> )
> fit_null_AIC = AIC(fit_null)
>
> fit_alt = lmer(
>    formula = response ~ (1|individual) + grade_as_numeric
>    , data = my_data
> )
> fit_alt_AIC = AIC(fit_alt)
>
> grade_loglikratio = fit_null_AIC - fit_alt_AIC
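One caveat worth flagging on the naming: AIC(null) - AIC(alt) is not quite a log-likelihood ratio; the two differ by twice the difference in parameter counts. A minimal arithmetic sketch (the log-likelihoods and parameter counts below are hypothetical):

```r
# AIC = -2*logLik + 2*k, so
# AIC_null - AIC_alt = 2*(logLik_alt - logLik_null) - 2*(k_alt - k_null)
ll_null <- -150.0; k_null <- 2   # hypothetical null-model fit
ll_alt  <- -145.5; k_alt  <- 3   # hypothetical alternative-model fit
aic_null <- -2 * ll_null + 2 * k_null
aic_alt  <- -2 * ll_alt  + 2 * k_alt
aic_diff <- aic_null - aic_alt            # the quantity computed above
llr      <- 2 * (ll_alt - ll_null)        # the actual LR test statistic
aic_diff == llr - 2 * (k_alt - k_null)    # TRUE: they differ by the penalty
```

So a positive AIC difference already builds in a complexity penalty, which is arguably what one wants here, but it should not be read as the raw likelihood-ratio statistic.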
>
> Now, if I wanted to check whether there is a quadratic component to
> the grade effect, I'd first compute an analogous likelihood ratio for
> the quadratic fit compared to the null:
> fit_alt_quad = lmer(
>    formula = response ~ (1|individual) + poly(grade_as_numeric, 2)
>    , data = my_data
> )
> fit_alt_quad_AIC = AIC(fit_alt_quad)
> grade_quad_loglikratio = fit_null_AIC - fit_alt_quad_AIC
>
> Then compute a final log likelihood ratio between the improvement over
> the null caused by grade versus the improvement over the null caused
> by grade as a quadratic:
>
> grade_lin_vs_quad_LLR = grade_loglikratio - grade_quad_loglikratio
>
> I could repeat this for higher-order polynomials of grade, each
> compared to the order directly below it, to develop a series of
> likelihood ratios that describe the relative improvement of the fit.
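That ladder of comparisons can be sketched with simulated data; plain lm() stands in for lmer() here (no random effects; the data-generating coefficients are made up), but the pattern of fitting successive polynomial orders and differencing AICs is the same:

```r
# Compare successive polynomial orders of grade by AIC (simulated data).
set.seed(1)
grade <- rep(1:6, each = 20)
response <- 2 + 0.5 * grade - 0.1 * grade^2 + rnorm(length(grade), sd = 0.3)
fits <- lapply(0:3, function(n) {
  if (n == 0) lm(response ~ 1) else lm(response ~ poly(grade, n))
})
aics <- sapply(fits, AIC)
names(aics) <- paste0("order", 0:3)
diff(aics)   # successive AIC differences: order k vs order k-1
```

With a true quadratic trend, the AIC drops sharply up to order 2 and then flattens (or worsens) as further orders only add penalized parameters.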
>
> Does this sound appropriate?
>
> Cheers,
>
> Mike
>
> --
> Mike Lawrence
> Graduate Student
> Department of Psychology
> Dalhousie University
>
> Looking to arrange a meeting? Check my public calendar:
> http://tr.im/mikes_public_calendar
>
> ~ Certainty is folly... I think. ~
>
>
>
> On Sun, Oct 24, 2010 at 6:41 AM, Jarrod Hadfield <j.hadfield at ed.ac.uk> wrote:
>> Hi Mike,
>>
>> You would be better off trying something like polynomials or splines.
>> For example:
>>
>>  fit1 = lmer(
>>     formula = response ~ (1|individual) + poly(grade_as_numeric, n)
>>     , data = my_data
>>     , family = gaussian
>>  )
>>
>> where n is the order of the polynomial. n=1 would fit the same model as your
>> original fit1, although the covariate (and the regression parameter) would
>> be rescaled. With 6 distinct grades the highest order poly() allows is n=5,
>> at which point the model is a reparameterised version of your model fit2
>> (a 6-level factor spends 5 degrees of freedom beyond the intercept). When
>> 1<n<5 you would be fitting a non-linear relationship in between these two
>> extremes, although the model is still linear in the parameters.
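The saturated-polynomial point is easy to verify directly: poly() caps the degree at one less than the number of distinct values, so with grades 1-6 the saturated order is 5, and a degree-5 polynomial spans exactly the same model space as the factor coding. A sketch with simulated data and plain lm():

```r
# With 6 distinct grades, poly(grade, 5) and factor(grade) give identical fits.
set.seed(2)
grade <- rep(1:6, each = 10)
response <- rnorm(length(grade), mean = grade %% 3)  # arbitrary nonlinear means
fit_poly   <- lm(response ~ poly(grade, 5))
fit_factor <- lm(response ~ factor(grade))
all.equal(fitted(fit_poly), fitted(fit_factor))  # TRUE: identical fits
```

The two parameterisations differ only in how the 6 fitted group means are expressed, not in what they can fit.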
>>
>> Cheers,
>>
>> Jarrod
>>
>>
>>
>>
>>
>>
>>
>>
>> Quoting Mike Lawrence <Mike.Lawrence at dal.ca>:
>>
>>> Hi folks,
>>>
>>> I have developmental data collected across several grades (1-6). I
>>> would like to be able to assess whether there are any linear or
>>> non-linear trends across grade. Does it make sense to run a first lmer
>>> treating grade as continuous, obtain the residuals, then run a second
>>> lmer treating grade as a factor? That is:
>>>
>>> fit1 = lmer(
>>>    formula = response ~ (1|individual)+grade_as_numeric
>>>    , data = my_data
>>>    , family = gaussian
>>> )
>>> my_data$resid = residuals(fit1)
>>> fit2 = lmer(
>>>    formula = resid ~ (1|individual)+grade_as_factor
>>>    , data = my_data
>>>    , family = gaussian
>>> )
>>>
>>>
>>> As I understand it, fit1 will tell me if there are any linear trends
>>> in the data, while fit2 will tell me if there are any non-linear
>>> trends in the data in addition to the linear trends obtained in fit1.
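That two-step logic can be sketched with simulated data; plain lm() stands in for lmer() (no random effects; the coefficients and noise level below are made up):

```r
# Step 1: fit the linear trend; step 2: test the residuals against the factor.
# The residuals carry only whatever non-linear structure the linear fit missed.
set.seed(3)
grade <- rep(1:6, each = 15)
response <- 1 + 0.4 * grade + 0.2 * (grade - 3.5)^2 +
  rnorm(length(grade), sd = 0.5)
fit_lin <- lm(response ~ grade)
res <- residuals(fit_lin)
fit_res <- lm(res ~ factor(grade))
anova(fit_res)  # tests non-linear structure left over after the linear fit
```

With a genuine quadratic component in the simulated data, the factor term in the residual model comes out clearly significant.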
>>>
>>> If this is sensible, how might I apply it to a second binomial
>>> response variable given that the residuals from a binomial model are
>>> not 0/1?
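On the binomial question, one route that sidesteps residuals entirely (this is a sketch using glm() on simulated data, not something proposed in the thread; the mixed-model analogue would add (1|individual) and the binomial family) is to put the polynomial term in the binomial model itself and compare nested fits:

```r
# Compare linear vs quadratic grade effects directly in a binomial GLM.
set.seed(4)
grade <- rep(1:6, each = 30)
p <- plogis(-1 + 0.3 * grade)            # simulated success probability
y <- rbinom(length(grade), 1, p)
fit_lin  <- glm(y ~ poly(grade, 1), family = binomial)
fit_quad <- glm(y ~ poly(grade, 2), family = binomial)
anova(fit_lin, fit_quad, test = "Chisq")  # does the quadratic term help?
```

Because the comparison happens on the likelihood scale, nothing ever requires interpreting 0/1 residuals.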
>>>
>>> Cheers,
>>>
>>> Mike
>>>
>>>
>>> _______________________________________________
>>> R-sig-mixed-models at r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>>
>>>
>>
>>
>>
>> --
>> The University of Edinburgh is a charitable body, registered in
>> Scotland, with registration number SC005336.
>>
>>
>
>