[R-sig-ME] Mixed models and mediation
datkins at u.washington.edu
Tue Dec 1 22:22:09 CET 2009
Not 100% sure what you're looking for, but a couple resources:
-- Krull and MacKinnon 2001 in Multivariate Behavioral Research describe
mediation in multilevel designs
-- Shrout and Bolger 2002 in Psychological Methods discuss use of
bootstrap to estimate SE of indirect effect
-- Kenny et al. 2004 in Evaluation Review present methods for mediation
in longitudinal treatment designs, in which they extend the classic
Baron and Kenny criteria, focusing on the c to c' change.
Hope that helps.
Dave Atkins, PhD
Research Associate Professor
Center for the Study of Health and Risk Behaviors
Department of Psychiatry and Behavioral Science
University of Washington
1100 NE 45th Street, Suite 300
Seattle, WA 98105
datkins at u.washington.edu
The following article might help:
MacKinnon, D. P., Lockwood, C. M., Hoffman, J. M., West, S. G., &
Sheets, V. (2002). A comparison of methods to test mediation and
other intervening variable effects. Psychological Methods, 7, 83-104.
It argues that you can test mediation very simply by taking the
maximum of two p-values. One is the regression of the mediator M on
the independent variable X. The other is the coefficient for M when
the dependent variable Y is regressed on M and X. This seems slightly
better than the Sobel test, and it may avoid the need for
bootstrapping. (I have also run some simulations, and, indeed, this
method is very good.)
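For concreteness, here is a minimal sketch of that "max of two p-values" (joint significance) test using ordinary regressions on simulated toy data; all variable names here are invented for illustration, not taken from the thread:

```r
## Joint significance test for mediation: take the larger of the
## p-value for the a path (M on X) and the b path (M in Y ~ M + X).
set.seed(1)
n <- 200
x <- rnorm(n)                        # independent variable X
m <- 0.5 * x + rnorm(n)              # mediator M, affected by X
y <- 0.4 * m + 0.2 * x + rnorm(n)    # outcome Y

fit.a <- lm(m ~ x)                   # a path: regress M on X
fit.b <- lm(y ~ m + x)               # b path: M's coefficient, controlling for X

p.a <- summary(fit.a)$coefficients["x", "Pr(>|t|)"]
p.b <- summary(fit.b)$coefficients["m", "Pr(>|t|)"]

## Declare the indirect effect significant if the larger p-value
## is below alpha:
p.joint <- max(p.a, p.b)
p.joint
```

Both paths must be individually significant, so no product-of-coefficients standard error (and hence no Sobel formula or bootstrap) is needed for the basic test.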
Another article by Lois Gelfand (2008?, maybe 2009) in "Journal of
General Psychology" reviews the more recent literature but does not (I
think) change the main conclusion.
So far as I can tell, the use of mixed models should not change these
conclusions. You still have to get p-values, though.
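The same idea carries over to mixed models, with the wrinkle that lme4's lmer does not report p-values while glmer reports Wald z p-values. A rough sketch on simulated data (grouping structure and variable names are invented; the normal approximation to the t statistic for the mediator model is a simplification, not the thread's recommendation):

```r
## Joint significance test with mixed models: fit the mediator and
## outcome models with random intercepts, then take the larger p-value.
library(lme4)
set.seed(2)
n.subj <- 40; n.obs <- 10
subj <- factor(rep(1:n.subj, each = n.obs))
u    <- rnorm(n.subj)[subj]                       # subject random intercepts
x    <- rnorm(n.subj * n.obs)
m    <- 0.5 * x + u + rnorm(n.subj * n.obs)       # continuous mediator
y    <- rbinom(n.subj * n.obs, 1,
               plogis(0.8 * m + 0.2 * x + u))     # binary outcome

fit.a <- lmer(m ~ x + (1 | subj))                         # a path
fit.b <- glmer(y ~ m + x + (1 | subj), family = binomial) # b path

## lmer gives only t values, so approximate the a-path p-value with a
## normal reference distribution; glmer reports Wald z p-values directly.
t.a <- coef(summary(fit.a))["x", "t value"]
p.a <- 2 * pnorm(-abs(t.a))
p.b <- coef(summary(fit.b))["m", "Pr(>|z|)"]
max(p.a, p.b)    # joint-significance p-value for the indirect effect
```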
On 11/30/09 22:30, Adam D. I. Kramer wrote:
> Could anyone recommend a document or resource for doing a mediation
> analysis for some glmer models? I've seen a few hints of "mediation in
> mixed models" in general online (something akin to "do a sobel test on
> the estimates and standard errors, but bootstrap significance"), but no
> mention of anybody doing this in R.
> My research question is basically summarized like this: Does whether
> a person (subjID) chooses an option offered to them (chosen) depend on
> the value of that option (value) as well as how many options they've
> seen already (option)? Specifically, does adding "value" to the model
> mediate the role that option plays? There is also another nesting: a
> between-subjects condition (thisDist), in which values are nested.
> g <- glmer(chosen ~ option + value + (1|subjID) + (value|thisDist), ...)
> ...my intuition would be to use boot() to randomly vary the levels of
> "value" within each subject and re-run glmer() a few thousand times to
> estimate a standard error for the fixed effect of "option" with something
> like "value" in the model, but I wanted to see whether anybody had done
> an analysis like this before I think too hard about reinventing the wheel.
> Many thanks,
> Adam D. I. Kramer
> Ph.D. Candidate, Social Psychology
> University of Oregon
> adik at uoregon.edu
Jonathan Baron, Professor of Psychology, University of Pennsylvania
Home page: http://www.sas.upenn.edu/~baron
Editor: Judgment and Decision Making (http://journal.sjdm.org)