[R-sig-ME] Random effects in multinomial regression in R?
René
bimonosom at gmail.com
Sat Mar 23 23:00:50 CET 2019
Souheyla, it's a solvable problem :)
"But does the model analyse at the level of every word or at the level of
the sum of words?"
It's both.
Yes, the model tries to predict general tendencies (means/proportions,
conditioned on factors). Observation sums are always involved.
The second answer is: if there is no conditioning factor, then that's it.
With factors, it measures by-item differences if you want. And it seems
you do want that: let 'word type' interact with 'treatment' to predict gain
proportions (0's and 1's for each participant). The relation between word
type and treatment is your domain, but your statistical question implies the
following brms model: it measures differences between single
questions/words (pre-post averages/proportions on the log-odds scale). If not,
the answer is still 'yes': hierarchical Bayesian logistic regression can do
what you want :)). In terms of mixed-effects models (brm is not
mixed-effects least squares, it's hierarchical Bayes) I would go with:
mod5 <- brm(score ~ Group*Time*Stims + (1|subject) + (1+Time*Stims|words),
            family = bernoulli(), ...)
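For concreteness, here is a minimal, self-contained sketch of how such a call could look end-to-end. The simulated data and column names below are made up for illustration (your real data frame will differ), and fitting requires the brms/Stan toolchain:

```r
library(brms)

# Hypothetical long-format data: one row per subject x word x time point
set.seed(1)
dat <- expand.grid(
  subject = factor(1:30),
  words   = factor(1:28),
  Time    = factor(c("pre", "post"), levels = c("pre", "post"))
)
dat$Group <- rep(factor(c("control", "treat1", "treat2")), each = 10)[as.integer(dat$subject)]
dat$Stims <- factor(ifelse(as.integer(dat$words) <= 14, "abstract", "concrete"))
dat$score <- rbinom(nrow(dat), 1, 0.5)  # placeholder 0/1 responses

# Bernoulli (logistic) hierarchical model: random intercepts for subjects,
# random intercepts plus Time*Stims slopes for words
mod5 <- brm(
  score ~ Group * Time * Stims + (1 | subject) + (1 + Time * Stims | words),
  data = dat, family = bernoulli()
)
summary(mod5)
```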
What you will get are posterior estimates for each
mean/proportion (and difference) across participants, for each Group and
every word. After estimation, you can apply traditional statistical
thinking, like comparing mean estimates by word, or in general. Both
can be accessed via the 'emmeans' or 'emmip' functions, and you can also
compute Bayes factors for a change vs. no-change hypothesis (or others) for
each question/word (using the hypothesis() function, or prior_samples()
and posterior_samples()), if you follow your own rules ;)
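A sketch of that post-estimation workflow, assuming a fitted model object `mod5` as above. emmeans(), emmip(), and hypothesis() are real functions from the emmeans and brms packages, but the coefficient name in the hypothesis string depends on your factor coding, so treat it as illustrative:

```r
library(emmeans)

# Posterior estimates of each Group x Time cell mean, back-transformed
# from the log-odds to the probability scale
emmeans(mod5, ~ Time | Group, type = "response")

# Interaction plot of the estimated cell means
emmip(mod5, Group ~ Time)

# Test a change-vs-no-change hypothesis on the log-odds scale; evidence
# ratios (Bayes factors) require the model to be fit with sample_prior = "yes"
hypothesis(mod5, "Timepost = 0")
```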
Best, René
On Sat., 23 Mar 2019 at 18:58, Souheyla GHEBGHOUB <
souheyla.ghebghoub at gmail.com> wrote:
> Dear René and any other member interested in this discussion,
>
> I appreciate the long feedback I received from you. But I can tell I did not
> convey my concern well.
>
> The aim of my analysis is to predict the odds of correct/incorrect
> responses based on some predictors: Group (Treatment1/Treatment2/Control),
> Time (pretest/posttest), verbal frequency, concreteness vs.
> abstractness, etc. I have hypotheses such as: the Time*Group interaction
> will show a significant effect, as I expect students in Treatment Group 1
> to outperform at posttest.
> I have a problem with predicting Response with Time as an independent
> variable, which can be summarised in this example of one participant:
>
> Subject  word  Group    Pretest  Posttest
> 1        1     control  0        1
> 1        2     control  0        1
> 1        3     control  1        1
> 1        4     control  1        1
> 1        5     control  1        0
> 1        6     control  0        0
> 1        7     control  0        0
> 1        8     control  0        0
>                (sum)    = 3      = 4
>
> Would the model compare the mean score of the pretest against the mean score
> of the posttest? If yes, it will not truly reflect the data, because it's not
> about the sum, it's about every word on its own. So it should not be "4 in
> the posttest against 3 in the pretest means 1 gain".
> Here, there are 2 gains (words 1 and 2) and one decline (word 5). But
> does the model analyse at the level of every word or at the level of the sum
> of words?
> That's the main problem I have not solved for 3 months now.
> Best,
> SOUHEYLA
>
> On Sat, 23 Mar 2019 at 12:46, René <bimonosom at gmail.com> wrote:
>
>> Hey, things are clearing up. Thanks for the picture. I want to challenge your
>> concerns:
>> First, you see that observing a main effect of PrePost (or "Time") in
>> mod3 can only mean two things: 1) performance goes up (positive effect); or
>> 2) performance goes down (negative effect), in general. Saying something
>> like this is what the model is defined for. The conclusion will be this
>> "general increase" or "general decrease" or "no evidence here". If you have
>> a different question from that (i.e. on an item level), then you should
>> specify it in more detail (see below).
>> There are three issues in-between the lines of your questions:
>> 1. is it a statistical concern you have?
>> 2. is it an actual theoretical question you have?
>> 3. is it a matter of making a non-result into a result?
>>
>> 1. The statistical view:
>> Counter-question: who would ever assume that a score of 1 for a word in a
>> pre-test will remain constant until eternity? And why should it? The answer
>> is: nobody, and this is the reason statistics (probability theory) exists
>> :). So the first simple answer to your question is that you do not need to test
>> whether observed gains from pre-to-post are 'genuine' (only from 0 to 1,
>> without decline cases) because 'nature' guarantees that there will be
>> decline somewhere. That's "randomness" :) But the question is, what's
>> stronger, gain or decline? See... and there is no problem in it: a general
>> main effect (e.g. an overall gain) still is an overall gain, even if some
>> cases decline.
>>
>> 2. Theoretical view:
>> If, however, such special item dynamics are theorized in advance, then
>> simply test them :) For instance, if the assumption is that the treatment
>> leads to gains on abstract words but to declines on concrete words, then
>> this should enter the model by coding the factor AbsCon (abstract vs.
>> concrete words) as a fixed effect (assuming a within-participants
>> manipulation).
>> mod4 <- brm(score ~ PrePost*Group*AbsCon + (1+PrePost*AbsCon|subject) +
>> (1+Group|words), ...)
>> (Note: (1+XX|subject) just means a random intercept for subject (the 1)
>> plus a random slope for XX by subject; and writing (1+Group|words) is the
>> same as writing (Group|words), but you can estimate the slope without the
>> (word) intercept by writing (0+Group|words).)
>>
>> Without such a theoretical account, testing for this can also be done
>> via:
>> mod4 <- brm(score ~ PrePost*Group*words + (1+PrePost*words|subject), ...)
>> But you will hardly be able to interpret the interactions in this model
>> because words alone has 28 levels.
>>
>> 3. But, your example seems to suggest a special case... (i.e. an actual
>> Null-Effect):
>> "If a participant has got 5 correct words out of 28 in both pretest and
>> posttest"
>> then there would be no general effect of PrePost in the model above
>> (generalizing to all participants now), and searching for "deeper" model
>> checks looks like rescuing whatever effects you can get (post hoc). Of
>> course, you could argue that there still is an effect, namely for 5
>> specific words, which is not observable because there is also a decline
>> for 5 other words. That is possible, but it requires a solid theory which
>> explicitly predicts this interaction, and an experiment which was explicitly
>> designed to test it (point 2).
>>
>> So... if it is point 2 you have... then go ahead :) and test it in a
>> meaningful way. Otherwise, simply treat this "effect by words" interaction
>> as a random slope, (1+Group|words), or, btw., (1+PrePost*Group|words) is also
>> possible..., which basically 'statistically' integrates the idea that
>> the treatment*time effects vary (randomly) between stimuli. And doing this
>> in the random effects has the notion of "generalizing" estimation error in
>> the population, and should be preferred to implementing those in the fixed
>> effects, if the 28 words can be seen as a random (non-special) stimulus
>> sample. If this is not the case, then consider coding the "special" thing
>> about the words as fixed effects (e.g. if you want to use the same design
>> again, for testing something, while controlling for stimulus specifics).
>>
>> Best, René
>>
>> On Sat., 23 Mar 2019 at 11:59, Souheyla GHEBGHOUB <
>> souheyla.ghebghoub at gmail.com> wrote:
>>
>>> Dear René,
>>>
>>> Thank you for the feedback. Actually, my model was originally like the one
>>> you suggest now (except for the slopes: I had PrePost without the 1 in both
>>> the words and subjects terms). I called PrePost "Time". I will read more
>>> about the 1+PrePost form you mentioned.
>>>
>>> The reason why I gave up on this model and looked for something else is the
>>> fact that each subject was tested on 28 words twice (pre & post), and I was
>>> not sure whether such a model would take into consideration the differences
>>> between pre & post at the level of every word of each participant (which is
>>> what I want) instead of merely comparing every participant's pretest mean
>>> score over 28 words against his posttest mean score (which is what I should
>>> avoid). Here is a short example as to why:
>>>
>>> If a participant got 5 correct words out of 28 in both pretest and
>>> posttest, there are multiple interpretations: e.g. they could be
>>> the same words (i.e. 0 gains), or they could be totally new words (i.e. 5
>>> gains), etc. Hence I am not sure whether such a pretest vs. posttest model
>>> would compare each subject's score on each word from pretest to
>>> posttest and base its analysis on these score changes, instead of comparing
>>> the sum scores between pre and post, which would likely skew the results.
>>>
>>> I posted about this on Stack Exchange 3 months ago and was told that it
>>> does compare word to word for every participant, but I am still not
>>> confident enough to use it, because the accuracy of the results and
>>> discussion chapters of my PhD thesis will rest on this decision.
>>>
>>> I look forward to receiving feedback from you and any member reading this,
>>> Souheyla
>>> University of York
>>>
>>> On Sat, 23 Mar 2019, 10:01, René <bimonosom at gmail.com> wrote:
>>>
>>>> Hi Souheyla,
>>>>
>>>> Well, I guess in your case it is simply more elegant to leave the
>>>> measured predictor out of the fixed effects, because there is also another
>>>> implied question (i.e. about the strength of change between pre and post).
>>>>
>>>> So, another possibility to re-define your model (as logistic
>>>> regression) allowing for better interpretations:
>>>> mod3 <- brm(score ~ PrePost*Group + (1+PrePost|subject) + (1+Group|words), ...)
>>>>
>>>> score = 0 or 1 for a given test item
>>>> PrePost = pre vs. post (basically just an indicator of the measurement
>>>> time point)
>>>> Thus, the PrePost main effect will tell you whether there is a change from
>>>> pre to post (e.g. a gain), and you can also tell how strong it is (in odds
>>>> ratios).
>>>> And if PrePost interacts with Group, then the change (e.g. a gain) is
>>>> moderated by the treatment, which seems to be your main question.
>>>>
>>>> Now in this model, you can also have by-subject random slopes for
>>>> PrePost of course (because the fixed effect of PrePost is present for every
>>>> subject).
>>>>
>>>> Best, René
>>>>
>>>>
>>>> On Sat., 23 Mar 2019 at 10:12, Souheyla GHEBGHOUB <
>>>> souheyla.ghebghoub at gmail.com> wrote:
>>>>
>>>>> I read that in multinomial regression, all independent variables should be
>>>>> variables that we manipulate. Can I still have the pretest as an IV without
>>>>> skewing my results?
>>>>>
>>>>> Best,
>>>>> Souheyla
>>>>>
>>>>> On Fri, 22 Mar 2019, 23:31 Souheyla GHEBGHOUB, <
>>>>> souheyla.ghebghoub at gmail.com>
>>>>> wrote:
>>>>>
>>>>> > Thank you both. I will look into this and see :)
>>>>> >
>>>>> > Best,
>>>>> > Souheyla
>>>>> >
>>>>> > On Fri, 22 Mar 2019, 22:02 Uanhoro, James, <
>>>>> uanhoro.1 at buckeyemail.osu.edu>
>>>>> > wrote:
>>>>> >
>>>>> >> In standard regression models, the assumption is that predictor
>>>>> >> variables are measured without error. Test scores have measurement
>>>>> >> error, hence Doran's comment when test scores are used as covariates.
>>>>> >> See: Hausman, J. (2001). Mismeasured Variables in Econometric Analysis:
>>>>> >> Problems from the Right and Problems from the Left. *Journal of
>>>>> >> Economic Perspectives*, *15*(4), 57–67.
>>>>> >> https://doi.org/10.1257/jep.15.4.57
>>>>> >> I will note that many practitioners ignore this issue, and it is quite
>>>>> >> common to use predictors measured with error. Consider the number of
>>>>> >> times people use polychotomized income measures, or SES measures, as
>>>>> >> predictors, or some other "construct".
>>>>> >> On Mar 22 2019, at 5:39 pm, Souheyla GHEBGHOUB <
>>>>> >> souheyla.ghebghoub at gmail.com> wrote:
>>>>> >>
>>>>> >> Dear Doran,
>>>>> >>
>>>>> >> Could you explain more this point to me, please?
>>>>> >>
>>>>> >> Thank you,
>>>>> >> Souheyla
>>>>> >>
>>>>> >> On Fri, 22 Mar 2019, 21:19 Doran, Harold, <HDoran at air.org> wrote:
>>>>> >>
>>>>> >> Yes, but conditioning on the pre-test means you are using a variable
>>>>> >> measured with error, and the estimates you obtain are now inconsistent,
>>>>> >> and that's a pretty big sin.
>>>>> >>
>>>>> >> On 3/22/19, 3:49 PM, "Souheyla GHEBGHOUB" <
>>>>> >> souheyla.ghebghoub at gmail.com>
>>>>> >> wrote:
>>>>> >>
>>>>> >> Dear René,
>>>>> >>
>>>>> >> Thank you for your feedback. You are right, dropping the pretest
>>>>> >> from the covariates if I predict change definitely makes sense to me!
>>>>> >> But the fact that I need to control for the starting levels of
>>>>> >> participants makes it obligatory for me to choose the second way, which
>>>>> >> is predicting posttest instead of change, to have pretest scores
>>>>> >> controlled for.
>>>>> >>
>>>>> >> You also chose (1+group | word), which is new to me. Does it
>>>>> >> mean the effect of group is assumed to vary across words, which is
>>>>> >> something applicable to my data, right?
>>>>> >> I will discuss all this with my supervisor, and may reply here again
>>>>> >> in a few days if you do not mind.
>>>>> >> Thank you very much
>>>>> >> Souheyla
>>>>> >> University of York
>>>>> >>
>>>>> >>
>>>>> >> On Fri, 22 Mar 2019 at 13:42, René <bimonosom at gmail.com> wrote:
>>>>> >>
>>>>> >> Hi Souheyla,
>>>>> >>
>>>>> >> it seems to me that you will run into problems with your coding of
>>>>> >> change (gain, no gain, and decline), because the 'change' is by
>>>>> >> definition/calculation dependent on the predictor pretest.
>>>>> >> See, according to your coding scheme:
>>>>> >> Change = decline can only occur if pretest = 1 (not if pretest = 0).
>>>>> >> Change = gain can only occur if pretest = 0 (not if pretest = 1).
>>>>> >> Change = no gain can occur if pretest = 1 or 0.
>>>>> >> In other words:
>>>>> >> If pretest = 1, the possible outcomes are decline or no gain.
>>>>> >> If pretest = 0, the possible outcomes are gain or no gain.
>>>>> >>
>>>>> >> And if the model result then shows you that the pre-test is
>>>>> >> significantly related to p(change-outcome), I guess there is no
>>>>> >> surprise in it, is there?
>>>>> >>
>>>>> >> So the first solution to this would be simply kicking the pre-test
>>>>> >> predictor out of the model completely, and predict:
>>>>> >> mod1 <- brm(Change ~ Group + (1|Subject) + (1+Group|Word),...)
>>>>> >> (Btw.: actually the first hierarchical Bayes model question I see on
>>>>> >> the mixed-effects mailing list :))
>>>>> >>
>>>>> >> An attempt at further clarification on which random slopes would
>>>>> >> reflect the model's design:
>>>>> >> If you have a within-subjects design, by-subject random slopes are
>>>>> >> possible for the within-subject variable (e.g. if there are two sets of
>>>>> >> words/lists [e.g. abstract vs. concrete words] for each participant,
>>>>> >> and you test whether there is a performance difference between these
>>>>> >> word-lists, then you can implement by-subject random slopes for words,
>>>>> >> because each participant has seen both sets). If each participant has
>>>>> >> seen only one list (i.e. a between-subjects design), by-subject random
>>>>> >> slopes for words are not appropriate, because there is no 'slope' by
>>>>> >> participant (i.e. by definition, having a slope requires at least two
>>>>> >> observations...). This is always a good rule of thumb without thinking
>>>>> >> about it too heavily :)
>>>>> >> And as you see: you can define a random slope for words,
>>>>> >> (1+Group|Word), because each word has been presented in each group
>>>>> >> (i.e. there can be a slope for each word). And intuitively speaking,
>>>>> >> the treatment effect can vary depending on the stimuli you use, so the
>>>>> >> slope makes sense. (You also see in this example that the treatment
>>>>> >> effect can also vary by subjects, but in fact this subject effect
>>>>> >> variation IS EQUAL to the effect you want to test, and having
>>>>> >> by-subject group random slopes would eliminate the fixed effect...)
>>>>> >>
>>>>> >> Anyway, there is a second possibility to define your model, depending
>>>>> >> on how you want to interpret it. In the previous model you can say
>>>>> >> something about the type-of-change likelihoods depending on the
>>>>> >> treatment group. But you could implement the model as binomial as well
>>>>> >> (i.e. logistic regression):
>>>>> >>
>>>>> >> mod2 <- brm(posttest ~ pretest*Group + (1|Subject) + (1+Group|Word), ...)
>>>>> >>
>>>>> >> And what you would expect here is an interaction between pre-test
>>>>> >> and Group. For instance: if pretest=0 & treatment 1, then posttest is
>>>>> >> larger than with pretest=0 & treatment 2; but not when pretest=1
>>>>> >> (because this is a plausible no-gain situation). And so on...
>>>>> >> (And in this model there are also no further random slopes hidden in
>>>>> >> your design :))
>>>>> >> Hope this helps.
>>>>> >>
>>>>> >> Best, René
>>>>> >>
>>>>> >>
>>>>> >> On Thu., 21 Mar 2019 at 14:01, Souheyla GHEBGHOUB <
>>>>> >> souheyla.ghebghoub at gmail.com> wrote:
>>>>> >>
>>>>> >> Dear Philip,
>>>>> >>
>>>>> >> I understand; here is the structure of my data in case it helps.
>>>>> >>
>>>>> >> I have 3 groups of participants (control, treatment1, treatment2).
>>>>> >> Each group was tested twice, once before treatment (pretest) and once
>>>>> >> after treatment (posttest).
>>>>> >> In each test, they were tested on knowledge of 28 words; scores are
>>>>> >> dichotomous (0 = unknown, 1 = known). The tests are the same.
>>>>> >>
>>>>> >> I calculated change from pretest to posttest:
>>>>> >> if pretest 0 and posttest 0 = no gain
>>>>> >> if pretest 1 and posttest 1 = no gain
>>>>> >> if pretest 0 and posttest 1 = gain
>>>>> >> if pretest 1 and posttest 0 = decline
>>>>> >> So I ended up with a dependent variable called Change with 3 levels
>>>>> >> (no_gain, gain, decline), and I tried to predict it using Group and
>>>>> >> Pretest as covariates in a multinomial logit model:
>>>>> >> mod0 <- brm(Change ~ Pretest + Group)
>>>>> >> I would like to add random effects for subjects but don't know
>>>>> >> the best form when the Time factor is absent.
>>>>> >>
>>>>> >> I hope other statisticians who read this can help
>>>>> >> Thank you
>>>>> >> Souheyla
>>>>> >>
>>>>> >> _______________________________________________
>>>>> >> R-sig-mixed-models at r-project.org mailing list
>>>>> >> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models