[R-sig-ME] Random effects in multinomial regression in R?
Doran, Harold
HDoran using air.org
Sat Mar 23 11:03:39 CET 2019
No, the “right” statistical term to use here is that your parameter
estimates will be inconsistent. We’re deviating some from the purpose of
this list, but it is a helpful discussion. What James notes is also true:
this is often ignored in practice, which is a huge problem. I suspect it's
often ignored not because the issue is poorly understood, but because there
is no widely distributed software that easily implements the types of
corrections needed in this scenario.
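
A minimal R sketch of that inconsistency (all object names hypothetical, and
an ordinary linear model standing in for the multinomial case): regressing on
an error-contaminated covariate attenuates the slope, and the bias does not
shrink as the sample grows.

set.seed(1)
n <- 1e5                            # large n: the problem is bias, not noise
true_x <- rnorm(n)                  # latent "true" pretest score
y <- 1 + 2 * true_x + rnorm(n)      # outcome generated from the true score
obs_x <- true_x + rnorm(n, sd = 1)  # observed score = truth + measurement error
coef(lm(y ~ true_x))["true_x"]      # ~2, the generating slope
coef(lm(y ~ obs_x))["obs_x"]        # ~1, attenuated toward zero even at n = 100,000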
On 3/23/19, 5:11 AM, "Souheyla GHEBGHOUB" <souheyla.ghebghoub using gmail.com>
wrote:
>I read that in multinomial regression, all independent variables should
>be variables that we manipulate. Can I still use the pretest as an IV
>without skewing my results?
>
>Best,
>Souheyla
>
>On Fri, 22 Mar 2019, 23:31 Souheyla GHEBGHOUB,
><souheyla.ghebghoub using gmail.com>
>wrote:
>
>> Thank you both. I will look into this and see :)
>>
>> Best,
>> Souheyla
>>
>> On Fri, 22 Mar 2019, 22:02 Uanhoro, James,
>><uanhoro.1 using buckeyemail.osu.edu>
>> wrote:
>>
>>> In standard regression models, the assumption is that predictor
>>> variables are measured without error. Test scores will have measurement
>>> error, hence Doran's comment when test scores are used as covariates.
>>> See: Hausman, J. (2001). Mismeasured Variables in Econometric Analysis:
>>> Problems from the Right and Problems from the Left. *Journal of Economic
>>> Perspectives*, *15*(4), 57–67. https://doi.org/10.1257/jep.15.4.57
>>> I will note that many practitioners ignore this issue, and it is quite
>>> common to use predictors measured with error. Consider the number of
>>> times people use polychotomized income measures, SES measures, or some
>>> other "construct" as predictors.
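
For a continuous covariate whose measurement-error SD is known (or can be
estimated, e.g. from a reliability coefficient), one option is brms' me()
term, which treats the observed value as a noisy realization of a latent
covariate. A minimal sketch, assuming a hypothetical data frame dat with
columns y, x_obs, and x_se; it is not directly applicable to the dichotomous
item-level pretest discussed below.

library(brms)
# x_obs is modelled as a latent covariate plus noise with per-observation SD x_se
fit_me <- brm(y ~ me(x_obs, x_se), data = dat)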
>>> On Mar 22 2019, at 5:39 pm, Souheyla GHEBGHOUB <
>>> souheyla.ghebghoub using gmail.com> wrote:
>>>
>>> Dear Doran,
>>>
>>> Could you explain more this point to me, please?
>>>
>>> Thank you,
>>> Souheyla
>>>
>>> On Fri, 22 Mar 2019, 21:19 Doran, Harold, <HDoran using air.org> wrote:
>>>
>>> Yes, but conditioning on the pre-test means you are using a variable
>>> measured with error, and the estimates you obtain are now inconsistent,
>>> and that's a pretty big sin.
>>>
>>> On 3/22/19, 3:49 PM, "Souheyla GHEBGHOUB"
>>><souheyla.ghebghoub using gmail.com>
>>> wrote:
>>>
>>> Dear René,
>>>
>>> Thank you for your feedback. You are right, dropping the pretest as a
>>> covariate if I predict change definitely makes sense to me! But the
>>> fact that I need to control for participants' starting levels makes it
>>> necessary for me to choose the second option, which is predicting
>>> posttest instead of change so that pretest scores are controlled for.
>>>
>>> You also chose (1 + Group | Word), which is new to me. Does it mean
>>> that the effect of Group is allowed to vary across words? That would be
>>> applicable to my data, right?
>>> I will discuss all this with my supervisor, and may reply here again in
>>> a few days if you do not mind.
>>> Thank you very much
>>> Souheyla
>>> University of York
>>>
>>>
>>> On Fri, 22 Mar 2019 at 13:42, René <bimonosom using gmail.com> wrote:
>>>
>>> Hi Souheyla,
>>>
>>> it seems to me that you will run into problems with your coding of
>>> change (gain, no gain, and decline), because 'change' depends by
>>> definition/calculation on the predictor pretest.
>>> See, according to your coding scheme:
>>> Change = decline can only occur if pretest = 1 (not if pretest = 0).
>>> Change = gain can only occur if pretest = 0 (not if pretest = 1).
>>> Change = no gain can occur if pretest = 1 or 0.
>>> In other words:
>>> If pretest = 1, the possible outcomes are decline or no gain.
>>> If pretest = 0, the possible outcomes are gain or no gain.
>>>
>>> And if the model result then shows you that the pre-test is
>>> significantly related to p(change outcome), I guess there is no
>>> surprise in that, is there?
>>>
>>> So the first solution to this would be simply kicking the pre-test
>>> predictor out of the model completely, and predict:
>>> mod1 <- brm(Change ~ Group + (1|Subject) + (1+Group|Word),...)
>>> (Btw.: actually the first Hierarchical Bayes Model question I see on
>>>the
>>> mixed-effects mailing list :))
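
For concreteness, mod1 with the elided arguments spelled out; the data-frame
name dat is hypothetical, and family = categorical() is brms' multinomial
logit.

library(brms)
mod1 <- brm(Change ~ Group + (1 | Subject) + (1 + Group | Word),
            data = dat, family = categorical())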
>>>
>>> An attempt at further clarification on which random slopes the model's
>>> design supports:
>>> If you have a within-subjects design, by-subject random slopes are
>>> possible for the within-subject variable. E.g., if there are two sets of
>>> words/lists (say, abstract vs. concrete words) for each participant, and
>>> you test whether there is a performance difference between these word
>>> lists, then you can fit by-subject random slopes for word list, because
>>> each participant has seen both sets. If each participant has seen only
>>> one list (i.e., a between-subjects design), by-subject random slopes for
>>> word list are not appropriate, because there is no 'slope' per
>>> participant (by definition, having a slope requires at least two
>>> observations...). This is always a good rule of thumb, without thinking
>>> about it too heavily :)
>>> And as you see, you can define a random slope for words,
>>> (1 + Group | Word), because each word has been presented in each group
>>> (i.e., there can be a slope for each word). Intuitively speaking, the
>>> treatment effect can vary depending on the stimuli you use, so the slope
>>> makes sense. (You also see in this example that the treatment effect can
>>> also vary by subjects, but that subject-level variation IS the effect you
>>> want to test, and having by-subject random slopes for Group would
>>> eliminate the fixed effect...)
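
That rule of thumb can be checked directly from the design; a small sketch
with a hypothetical data frame dat. A random slope for a term within a
grouping factor needs more than one level of that term per group.

# each Subject appears in only one Group: no by-subject slope for Group
xtabs(~ Subject + Group, data = dat)
# each Word appears in every Group: (1 + Group | Word) is supported
xtabs(~ Word + Group, data = dat)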
>>>
>>> Anyway, there is a second way to define your model, depending on how
>>> you want to interpret it. In the previous model you can say something
>>> about the type-of-change likelihoods depending on the treatment group.
>>> But you could also implement the model as a binomial (i.e., logistic
>>> regression):
>>>
>>> mod2 <- brm(posttest ~ pretest*Group + (1|Subject) + (1+Group|Word), ...)
>>>
>>> What you would expect here is an interaction between pre-test and
>>> Group. For instance, if pretest = 0 & treatment 1, then posttest is
>>> larger than with pretest = 0 & treatment 2, but not when pretest = 1
>>> (because that is a plausible no-gain situation). And so on...
>>> (And in this model there are also no further random slopes hidden in
>>> your design :))
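
For completeness, mod2 with the elided arguments written out; dat is
hypothetical, and family = bernoulli() matches the 0/1 posttest scores.

mod2 <- brm(posttest ~ pretest * Group + (1 | Subject) + (1 + Group | Word),
            data = dat, family = bernoulli())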
>>> Hope this helps.
>>>
>>> Best, René
>>>
>>>
>>> Am Do., 21. März 2019 um 14:01 Uhr schrieb Souheyla GHEBGHOUB <
>>> souheyla.ghebghoub using gmail.com>:
>>>
>>> Dear Philip,
>>>
>>> I understand; here is the structure of my data in case it helps.
>>>
>>> I have 3 groups of participants (control, treatment1, treatment2). Each
>>> group was tested twice, once before treatment (pretest) and once after
>>> treatment (posttest).
>>> In each test, they were tested on knowledge of 28 words; scores are
>>> dichotomous (0 = unknown, 1 = known). The two tests are identical.
>>>
>>> I calculated the change from pretest to posttest:
>>> if pretest 0 and posttest 0 = no gain
>>> if pretest 1 and posttest 1 = no gain
>>> if pretest 0 and posttest 1 = gain
>>> if pretest 1 and posttest 0 = decline
>>> So I ended up with a dependent variable called Change with 3 levels
>>> (no_gain, gain, decline), and I tried to predict it using Group and
>>> Pretest as covariates in a multinomial logit model:
>>> mod0 <- brm(Change ~ Pretest + Group)
>>> I would like to add random effects for subjects but don't know what the
>>> best form is when a Time factor is absent.
>>>
>>> I hope other statisticians who read this could help
>>> Thank you
>>> Souheyla
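
A minimal sketch of that setup (the data frame dat and its column names are
hypothetical): deriving Change from the coding scheme above and adding
by-subject and by-word intercepts to mod0.

library(brms)
dat$Change <- with(dat, ifelse(Pretest == 0 & Posttest == 1, "gain",
                        ifelse(Pretest == 1 & Posttest == 0, "decline",
                               "no_gain")))
mod0 <- brm(Change ~ Pretest + Group + (1 | Subject) + (1 | Word),
            data = dat, family = categorical())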
>>>