[R-sig-ME] Re: logistic regression on posttest (0, 1) with pretest(0, 1)*Group(Treatment, Ctrl) interaction

Souheyla GHEBGHOUB souheyla.ghebghoub using gmail.com
Mon Apr 29 13:25:48 CEST 2019


Oh René, thank you so much.

This is very helpful. I did not know all of this before. Thank you again.

Best regards,
Souheyla

On Mon, 29 Apr 2019 at 12:20, René <bimonosom using gmail.com> wrote:

> Both packages work the same way with respect to the coding/scripting
> (which was intended); even emmeans gives the same results for both.
> However, I would also suggest considering afex::mixed (in case you stay
> with the frequentist approach), because it enhances model testing via
> hierarchical likelihood-ratio tests (which is not the same as the anova
> output from a single glm fit).
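> For instance, a minimal sketch (untested; `dat` and the random-effects
> term `(1 | subject)` are placeholders for your actual data and design):

```r
library(afex)

## hierarchical likelihood-ratio tests for each fixed effect,
## comparing nested models that drop one term at a time
m <- mixed(posttest ~ pretest * Group + (1 | subject),
           data = dat, family = binomial, method = "LRT")
m  ## one chi-square LRT statistic and p-value per effect
```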
>
> Thus, the way the effects and design differences are 'constructed' is the
> same, but with lme4 you obtain p-values (i.e. how likely the data are
> given you assume a null effect), and with brm you obtain posterior
> parameter estimates (i.e. the range of parameters that are considered
> likely). A lot of people find the latter much more informative for
> statistical decision making, which is laid out in basically every
> introductory paper on Bayes Factors etc. (see work by Wagenmakers,
> Kruschke, or Michael Lee and colleagues). But there is, of course, a
> debate about which should be preferred, which I cannot sum up in a few
> sentences. Just to name a few arguments: 1) people are used to p-values
> and like the idea of having a 'decisive' criterion; 2) but p-values are
> not reliable (a strong argument, actually); 3) significance testing
> relies on the quality of the research methods (including replications, as
> Fisher said); 4) what the researcher wants is a statement about the
> likelihood of a theory, which is achieved by Bayesian statistics but not
> by frequentist statistics (e.g. with p-values you "decide" to reject a
> null hypothesis, which does not tell you whether the hypothesis is true
> or not; in Bayesian statistics, you tell the model what you initially
> believe - i.e. parameter priors - and then you get results (evidence)
> telling you to what degree you should change your beliefs.)
>
>
> Best, René
>
> On Mon, 29 Apr 2019 at 12:25, Souheyla GHEBGHOUB <
> souheyla.ghebghoub using gmail.com> wrote:
>
>> Dear Rene,
>>
>> Thank you for your feedback.
>> I will look into this. But before I do, I would like to ask how much
>> difference it would make if I am using the lme4 package (glmer).
>>
>> I keep switching between the two and haven't decided yet. But is it easy
>> to implement the aforesaid if it's glmer and not brms?
>>
>> Thank you
>> Souheyla
>>
>>
>> On Mon, 29 Apr 2019 at 11:20, René <bimonosom using gmail.com> wrote:
>>
>>> Hi Souheyla,
>>>
>>> coming back to the topic (I was busy lately).
>>>
>>> The interpretation is always a bit of a problem in regressions with
>>> categorical interactions. There are two ways to deal with this. One
>>> would be to prefer effect coding (search for sum contrast coding
>>> online) over dummy coding. In short, with effect coding you model the
>>> deviation of each group from a grand mean. With dummy coding, you start
>>> with the intercept parameter and then add up the design cells to obtain
>>> the actual mean estimate for each cell... I actually like neither,
>>> because first I find it hard to explain well, and also... there is a
>>> second and much, much easier way:
>>>
>>> Try this:
>>> ## if this is your model: mod2 <- brm(posttest ~ pretest*Group + ...)
>>> library(emmeans)
>>> emmip(mod2, ~ pretest | Group, type = "response", CIs = TRUE)
>>> Et voila :)
>>> This gives you the posterior marginal estimates from the model for your
>>> interaction, predicting the cell-specific response probability,
>>> including highest density intervals (Bayesian credible intervals). The
>>> option type = "response" gives you the predicted probability of post = 1;
>>> if you delete this option, the marginal estimates will be given on the
>>> link (here logit, i.e. log-odds) scale.
>>> In short, this tells you whether something is better remembered post,
>>> if it was already known pre (or not), depending on the group.
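>>> If you also want explicit pairwise comparisons rather than the plot,
>>> something along these lines should work (a sketch; see the emmeans
>>> vignettes for details on brms support):

```r
library(emmeans)

## estimated marginal means on the response (probability) scale
emm <- emmeans(mod2, ~ pretest | Group, type = "response")
emm          ## cell-wise estimates with credible intervals
pairs(emm)   ## pretest comparisons within each Group
```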
>>>
>>> You can also get the marginal main effects like this:
>>> emmip(mod2, ~ pretest, type = "response", CIs = TRUE)
>>> which would tell you whether something is better remembered post, if it
>>> was already known pre. Likewise for group.
>>>
>>> And if you want the summary statistics instead of the plot, use this:
>>> summod2 <- emmip(mod2, ~ pretest | Group, type = "response", CIs = TRUE)
>>> summod2$data
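>>> (And if you do want to go the contrast-coding route mentioned above
>>> after all, here is a minimal base-R illustration of the two schemes;
>>> the options() line is one common way to change the default:)

```r
## dummy (treatment) coding: intercept = the reference level
contr.treatment(2)

## effect (sum) coding: intercept = the grand mean
contr.sum(2)

## make sum coding the default before fitting, e.g.:
## options(contrasts = c("contr.sum", "contr.poly"))
```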
>>>
>>> Best, René
>>>
>>>
>>>
>>> On Mon, 22 Apr 2019 at 03:59, Jeff Newmiller <
>>> jdnewmil using dcn.davis.ca.us> wrote:
>>>
>>> > There is no "formula" syntax other than it has to have at least one
>>> > tilde... there is "lm" formula syntax, and "lme" formula syntax, and
>>> > "nls" formula syntax, etc... and other model builders are not
>>> > obligated to adhere to the "lm" interpretation of formulas.
>>> >
>>> > I don't see why using * alone in an lm formula should be avoided, but
>>> > perhaps John's advice could be reframed as "watch out for the
>>> > specific syntax used by your model building function... it may not be
>>> > the same as that used by lm".
>>> >
>>> > On Mon, 22 Apr 2019, Rolf Turner wrote:
>>> >
>>> > >
>>> > > On 22/04/19 6:01 AM, Sorkin, John wrote:
>>> > >
>>> > >> Souheyla,
>>> > >>
>>> > >> It is both difficult and dangerous to add a comment to a thread
>>> > >> that one has not followed; in doing so one may make an
>>> > >> inappropriate suggestion. Please forgive what may be a not fully
>>> > >> informed thought.
>>> > >>
>>> > >> The model you suggest, posttest ~ pretest*Group (ignoring random
>>> > >> effects), is unusual. In a model that contains an interaction, I
>>> > >> would expect to see, in addition to the interaction, all main
>>> > >> effects included in the interaction, i.e. posttest ~
>>> > >> group+pretest+pretest*Group.
>>> > >
>>> > > As Souheyla has already indicated, in the R (and previously
>>> > > S/Splus) formula syntax, interactions are indicated by a *colon* ---
>>> > > a:b.  The notation "a*b" is shorthand for
>>> > > a + b + a:b.
>>> > >
>>> > > So pretest*Group is the same as pretest + Group + pretest:Group,
>>> > > whence it contains the main effects.
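>>> > > (To check the expansion yourself, terms() lists the model terms a
>>> > > formula generates; `y`, `a`, `b` are placeholder names:)

```r
## a*b expands to the main effects plus their interaction
attr(terms(y ~ a * b), "term.labels")
## [1] "a"   "b"   "a:b"

identical(attr(terms(y ~ a * b), "term.labels"),
          attr(terms(y ~ a + b + a:b), "term.labels"))
## [1] TRUE
```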
>>> > >
>>> > > I disagree with the advice that you gave Souheyla in a follow-up
>>> > > email.  The construction pretest*Group is preferable, being compact
>>> > > and tidy.  Brevity is a virtue.
>>> > >
>>> > > cheers,
>>> > >
>>> > > Rolf
>>> > >
>>> > > --
>>> > > Honorary Research Fellow
>>> > > Department of Statistics
>>> > > University of Auckland
>>> > > Phone: +64-9-373-7599 ext. 88276
>>> > >
>>> > > _______________________________________________
>>> > > R-sig-mixed-models using r-project.org mailing list
>>> > > https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>> > >
>>> >
>>> >
>>> ---------------------------------------------------------------------------
>>> > Jeff Newmiller  DCN: <jdnewmil using dcn.davis.ca.us>
>>> > Research Engineer (Solar/Batteries/Software/Embedded Controllers)
>>>
>>
