[R-sig-ME] https://stats.stackexchange.com/questions/301763/using-random-effect-predictions-in-other-models?noredirect=1#comment573574_301763

Houslay, Tom T.Houslay at exeter.ac.uk
Wed Oct 4 19:45:20 CEST 2017


Hi Josh,


It sounds like you would be better off using a bivariate model (I'm assuming this was where you were headed with 'combining' the models?), where your response variables are something like 'ESM' and 'Outcome'. You could have these grouped by individual ID and, having controlled for any fixed effects on either response, estimate the among-individual variation in each response and the covariance between them (which can be scaled to a correlation). Since you are effectively modelling the relationship between two response variables, I think this makes sense, and it avoids discarding the error associated with predictions from a previous model.
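As a rough sketch of what that bivariate model might look like in MCMCglmm (variable and data-frame names here — esm, outcome, id, dat — are placeholders, not from the original question, and the prior is just a generic weakly informative starting point):

```r
library(MCMCglmm)

## weakly informative inverse-Wishart-style priors for the 2x2
## among-individual (G) and residual (R) covariance matrices
prior <- list(
  R = list(V = diag(2), nu = 1.002),
  G = list(G1 = list(V = diag(2), nu = 2))
)

m_biv <- MCMCglmm(
  cbind(esm, outcome) ~ trait - 1,   # separate intercept for each trait
  random = ~ us(trait):id,           # 2x2 among-individual (co)variance
  rcov   = ~ us(trait):units,        # 2x2 residual (co)variance
  family = c("gaussian", "gaussian"),
  prior  = prior,
  data   = dat
)

## among-individual correlation from the posterior of the G matrix
cor_id <- m_biv$VCV[, "traitesm:traitoutcome.id"] /
  sqrt(m_biv$VCV[, "traitesm:traitesm.id"] *
       m_biv$VCV[, "traitoutcome:traitoutcome.id"])
posterior.mode(cor_id)
HPDinterval(cor_id)
```

Note that if the two traits are never measured at the same time (as below), you would swap us(trait):units for idh(trait):units in rcov, since the residual covariance is then not estimable.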


If you have repeated measures for both ESM and Outcome, but these were not measured at the same time, you might have data something like the following:


ID   Repeat   ESM   Outcome
A    1        12    NA
A    2        19    NA
A    3        14    NA
A    1        NA    15
A    2        NA     9
B    ...

etc.


In which case you could calculate both among-individual and residual variation for both traits, but only the among-individual covariance (as you don't have observations of both responses at the same time to estimate the residual/'within-individual' covariance).


Note that if you only had a single observation for 'Outcome', you would simply constrain the residual variation for this trait to be 0 (such that all variation is modelled as 'among-individual').
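In MCMCglmm, one common way to do that (the exact setup here is a sketch; trait order — ESM first, Outcome second — is an assumption) is to fix the residual variance for the single-observation trait at a small constant through the prior, since it cannot be set to exactly 0:

```r
## Residual variance for trait 2 ('Outcome') is not identifiable with one
## observation per individual, so it is fixed at a small constant via
## fix = 2 rather than estimated
prior <- list(
  R = list(V = diag(c(1, 1e-4)), nu = 1.002, fix = 2),
  G = list(G1 = list(V = diag(2), nu = 2))
)
```

This would be used together with rcov = ~ idh(trait):units, which also drops the (inestimable) residual covariance between the two traits.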


In case they're useful, we have a brief paper related to this topic in Behavioral Ecology here:

https://doi.org/10.1093/beheco/arx023


...and some tutorials for these kinds of models in MCMCglmm / ASreml-R here:

https://tomhouslay.com/tutorials/


This paper on comparing behaviours measured repeatedly in both short- and long-term sampling regimes might also be of interest:

https://link.springer.com/article/10.1007/s00265-014-1692-0


Good luck!


Tom



Date: Wed, 4 Oct 2017 11:37:44 -0400
From: Joshua Rosenberg <jmichaelrosenberg at gmail.com>
To: r-sig-mixed-models <r-sig-mixed-models at r-project.org>
Subject: [R-sig-ME]
        https://stats.stackexchange.com/questions/301763/using-random-effect-predictions-in-other-models?noredirect=1#comment573574_301763


Hi R-Sig-mixed-models,

My question is about the use of predictions of effects for specific units
(such as an individual in the case of repeated measures data) in other
models. I'm especially interested in whether the group thinks this is a
good / useful approach for using repeated measures data to predict a
longer-term outcome. I am also interested in whether the group has any
suggestions for better ways to do this (or to combine what now requires two
models).

For example, if individual-level predicted effects were obtained from a
mixed-effects model (a null model, i.e., a random intercept for individuals
and no fixed effects), could they be used to predict an individual-level
outcome?

I am thinking about this specifically in the context of repeated measures
data (collected using Experience Sampling Method, or ESM, whereby students
are asked every so often to respond to questions about their interest and
engagement) and pre- and post-survey measures, representing a longer-term
outcome, students' self-reported interest in a STEM career.

Here is how I am thinking about this, using lmer() and lm() to specify the
models:

m1 <- lmer(repeated_measures_outcome ~ 1 + (1 | participant), data)


Process the data to obtain the predicted intercept for each participant.
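One way that processing step could look (the names individual_data and predicted_intercept_for_participant are illustrative, not fixed by anything above):

```r
library(lme4)

m1 <- lmer(repeated_measures_outcome ~ 1 + (1 | participant), data = data)

## BLUPs (conditional modes) of the participant-level intercepts
blups <- ranef(m1)$participant          # one row per participant
blups$participant <- rownames(blups)
names(blups)[1] <- "predicted_intercept_for_participant"

## merge back onto the individual-level data before fitting m2
data2 <- merge(individual_data, blups, by = "participant")
```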

m2 <- lm(longer_term_outcome ~ prior_level_of_longer_term_outcome +
predicted_intercept_for_participant)


When I have shared this idea with others (i.e., in the Cross Validated
question linked in the subject line) I have received feedback that a) yes, you can do this, and b) you
could / should combine the two models (m1 and m2 in this example) into one
model. This would (obviously?) require using a different approach - but I
do not have a clear idea of what this would require (MCMCglmm? brms?).

Any general or specific advice is welcomed. Thank you for your
consideration of this. If I can or need to provide more detail or
background, then please do not hesitate to tell me so!

Josh

--
Joshua Rosenberg, Ph.D. Candidate
Educational Psychology
&
 Educational Technology
Michigan State University
http://jmichaelrosenberg.com







