[R-sig-ME] [External] Re: Post hoc on glmer for specific hypotheses
Lenth, Russell V
russell-lenth at uiowa.edu
Wed Mar 9 20:48:35 CET 2022
I believe that this doesn't do what you think it does. Specifying 'type = "response"' does not change the way the tests are done, it just changes the scale upon which the results are presented. If you really want those contrasts computed on the response scale, then (using the setup I showed previously) do:
contrast(regrid(EMM), CON)
... which will convert everything to the response scale before computing the contrasts.
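To illustrate the pattern with something self-contained (a toy binomial GLM standing in for the glmer model m2 and contrast list CON from earlier in the thread -- assuming emmeans is installed):

```r
library(emmeans)

# Toy binomial GLM as a stand-in for the glmer model in the thread
fit <- glm(am ~ factor(cyl), family = binomial, data = mtcars)

EMM <- emmeans(fit, ~ cyl)   # reference grid on the link (log-odds) scale
contrast(EMM, "pairwise")    # contrasts computed as differences of log-odds

# regrid() re-expresses the grid on the response (probability) scale FIRST,
# so these contrasts are differences of probabilities, not of log-odds
contrast(regrid(EMM), "pairwise")
```

The two contrast() calls generally give different tests: the first back-transforms after contrasting, the second contrasts after back-transforming.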
Russ
-----Original Message-----
From: Timothy MacKenzie <fswfswt using gmail.com>
Sent: Wednesday, March 9, 2022 1:21 PM
To: Ben Bolker <bbolker using gmail.com>
Cc: r-sig-mixed-models <r-sig-mixed-models using r-project.org>; Lenth, Russell V <russell-lenth using uiowa.edu>
Subject: [External] Re: [R-sig-ME] Post hoc on glmer for specific hypotheses
Ben, type = "response" exponentiates the link-scale results. It just doesn't seem to work inside the pairs(...) call. Russ can better speak to that.
emmeans provides the capability to convert the results from the link scale into the odds scale (which for comparisons would be odds ratios).
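To make that conversion concrete, a minimal base-R sketch (the probabilities here are made up for illustration):

```r
# A difference on the link (log-odds) scale exponentiates to an odds ratio,
# which is what emmeans reports for comparisons with type = "response".
p1 <- 0.8; p2 <- 0.5                   # hypothetical probabilities
lo1 <- qlogis(p1); lo2 <- qlogis(p2)   # log-odds (the link scale)

odds_ratio <- exp(lo1 - lo2)           # exp of a link-scale difference
odds_ratio                             # 4 = (0.8/0.2) / (0.5/0.5)
```

Note that the same log-odds difference does not correspond to any fixed difference in probabilities, which is why comparisons back-transform to odds ratios rather than to probability differences.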
On Wed, Mar 9, 2022 at 1:14 PM Ben Bolker <bbolker using gmail.com> wrote:
>
> Hmm. Why would you want to test on the response scale? It
> almost always makes sense to test contrasts on the link scale, where
> the sampling distributions of the parameters are more likely to be
> approximately Gaussian.
> I haven't gone to look at your data/problem in detail, but I'm a
> little bit surprised that `type="response"` doesn't exponentiate, i.e.
> convert the estimated values from the log-odds to the odds scale?
> (Converting to the probability scale is not really feasible because of
> the way the math works out ...)
>
> On Wed, Mar 9, 2022 at 1:46 PM Timothy MacKenzie <fswfswt using gmail.com> wrote:
> >
> > Thanks, Ben! Apparently, the 7th coefficient in the glmer() output
> > tests the first hypothesis that I'm looking for.
> >
> > For the second hypothesis, I was able to use emmeans::emmeans() to
> > get what I want by running:
> >
> > ems <- emmeans(m2, pairwise ~ time*item_type, type = "response",
> >                infer = c(FALSE, TRUE), adjust = "tukey")[[2]]
> > vv <- pairs(ems, simple = "each", infer = c(FALSE, TRUE),
> >             type = "response")  # on log-odds scale
> >
> > In this setup, vv[288, ] tests the second hypothesis, but on the
> > log-odds scale. Is there any way to convert it back to the response scale?
> >
> > Thanks,
> > Tim M
> >
> > On Wed, Mar 9, 2022 at 12:22 PM Ben Bolker <bbolker using gmail.com> wrote:
> > >
> > > One way to do this would be via `multcomp::glht`, which allows
> > > you to specify any linear combination(s) of parameters to test.
> > > You do have to figure out the association between the parameters
> > > you have and the contrasts you want. I wrote some (possibly
> > > scrutable) stuff about that problem here:
> > > http://bbolker.github.io/bbmisc/mgreen_contrasts.html
> > >
> > > by hand:
> > >
> > > assuming factor levels are {baseline, post} and {MC, prod}
> > >
> > > baseline_MC = intercept
> > > post_MC = intercept + time_post
> > > baseline_prod = intercept + gram_prod
> > > post_prod = intercept + time_post + gram_prod + interax
> > >
> > > so your first contrast would be (intercept - (intercept +
> > > time_post)) - ((intercept + gram_prod) - (intercept + time_post +
> > > gram_prod + interax))
> > > = (-time_post) - (-time_post - interax) = interax
> > >
> > > so the contrast would be (0, 0, 0, 1) assuming that the
> > > parameters are ordered (intercept, time_post, gram_prod, interax)
> > >
> > > (1) You should definitely check my algebra; (2) there may be a
> > > quicker (if less transparent) way to do this by setting up
> > > appropriate contrasts from the beginning; (3) I noticed that I left the "vocab"
> > > levels out of the analysis, but I don't think that changes
> > > anything important.
> > >
> > >
> > > On Wed, Mar 9, 2022 at 12:59 PM Timothy MacKenzie <fswfswt using gmail.com> wrote:
> > > >
> > > > Hello All,
> > > >
> > > > My glmer model below analyzes the performance of a single group
> > > > of subjects on a test at two time points. The test has 4 item types.
> > > >
> > > > Data and code are below.
> > > >
> > > > Is there a way to test only the following two hypotheses?
> > > >
> > > > 1- ((Baseline multiple-choice_grammar) - (Post-test
> > > > multiple-choice_grammar)) - (Baseline production_grammar -
> > > > (Post-test
> > > > production_grammar))
> > > >
> > > > 2- ((Baseline multiple-choice_vocabulary) - (Post-test
> > > > multiple-choice_vocabulary)) - (Baseline production_vocabulary -
> > > > (Post-test production_vocabulary))
> > > >
> > > > dat <-
> > > > read.csv("https://raw.githubusercontent.com/fpqq/w/main/d.csv")
> > > >
> > > > form2 <- y ~ item_type*time + (1 | user_id)
> > > >
> > > > m2 <- glmer(form2, family = binomial, data = dat,
> > > > control =
> > > > glmerControl(optimizer = "bobyqa"))
> > > >
> > > > Sincerely,
> > > > Tim M
> > > >
> > > > _______________________________________________
> > > > R-sig-mixed-models using r-project.org mailing list
> > > > https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models