[R-meta] Announcing the "psychmeta" R package for psychometric meta-analysis
Jeffrey Dahlke
dahlk068 at umn.edu
Wed Oct 4 15:30:37 CEST 2017
Greetings, Meta-Analysts!
We are pleased to announce a new package for computing psychometric meta-analyses in R: psychmeta (available from CRAN; incremental package builds between CRAN releases are also available from https://github.com/jadahlke/psychmeta). psychmeta provides R users with a robust set of tools for conducting Hunter-Schmidt psychometric meta-analyses.
The psychmeta package includes tools for bare-bones, artifact-distribution, and individual-correction meta-analyses of correlations and Cohen’s d values. All of psychmeta’s psychometric meta-analysis functions support all currently known configurations of measurement-error corrections and range-restriction corrections (e.g., it supports direct and indirect range restriction in one or both variables). The package also supports both interactive and Taylor series approximation (TSA) artifact-distribution meta-analysis methods for all available psychometric corrections. Our individual-correction programs do truly individual corrections, such that each effect size can be corrected using whichever artifacts are appropriate (e.g., some coefficients can be corrected for direct range restriction, while others are corrected for indirect range restriction – a feature we’ve found is rather rare in meta-analysis programs).
We provide a number of different weighting methods, and psychmeta interfaces with metafor to import the weights supported by the “rma” function (e.g., REML weights, DerSimonian-Laird weights). We've also implemented some refinements not typically available in psychometric meta-analysis programs, such as using the t distribution to compute confidence and credibility intervals and computing unbiased estimates of weighted variances. All of these refinements are optional: you can easily opt out of them and do a by-the-book Hunter-Schmidt meta-analysis by setting the “hs_override” argument to TRUE (you can, of course, also manipulate the settings individually).
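To make the override concrete, here is a minimal sketch. It assumes a data frame “dat” with correlation and sample-size columns (the column names rxyi and n are illustrative); “ma_r” and “hs_override” are named above, and all other arguments are left at their defaults:

```r
library(psychmeta)

# With psychmeta's refinements left on (the package defaults):
ma_refined <- ma_r(rxyi = rxyi, n = n, data = dat)

# By-the-book Hunter-Schmidt meta-analysis (disables the refinements at once):
ma_hs <- ma_r(rxyi = rxyi, n = n, data = dat, hs_override = TRUE)
```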
All of psychmeta’s meta-analysis functions support any number of moderators and it’s very easy to perform hierarchical moderator analyses. Our “master” meta-analysis functions for correlations and d values (called “ma_r” and “ma_d”, respectively) also make it possible to meta-analyze all possible construct pairs contained within a database using your choice of meta-analysis method (i.e., bare-bones, artifact-distribution, or individual-correction) and obtain omnibus tables of results from all analyses (with moderator analyses, if requested). The master functions can also automate the consolidation of dependent samples - just specify a sample ID variable along with construct names and the meta-analysis functions will create composite (or average) values for effect sizes and artifacts.
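As a sketch of what a call to the master function might look like (the column names in “dat” are assumptions for illustration; “ma_r”, the sample-ID consolidation, and the moderator support are described above):

```r
library(psychmeta)

# One row per effect size; construct labels identify each construct pair.
ma_obj <- ma_r(rxyi = rxyi, n = n,
               construct_x = x_name,    # construct measured as X (assumed column name)
               construct_y = y_name,    # construct measured as Y (assumed column name)
               sample_id = sample_id,   # consolidates dependent samples via composites
               moderators = moderator,  # optional; any number of moderators supported
               ma_method = "bb",        # bare-bones; individual-correction and
                                        # artifact-distribution methods also available
               data = dat)

summary(ma_obj)
```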
We have also prepared some functions to perform follow-up analyses such as meta-regressions (our “metareg” function is a wrapper for metafor’s “rma” function that pulls in moderator information from a psychmeta-class meta-analysis object), heterogeneity analyses, and sensitivity analyses. Our collection of sensitivity analyses is slated to grow over time, but currently we support cumulative meta-analysis, leave-one-out analyses, and bootstrapped meta-analyses.
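A brief sketch of how these follow-ups might be run on a psychmeta meta-analysis object (say, “ma_obj” from an earlier “ma_r” call); “metareg” is named above, while the “sensitivity” wrapper and its arguments are assumptions for illustration:

```r
# Meta-regression: wraps metafor's rma() and pulls moderator information
# from the psychmeta-class object.
metareg(ma_obj)

# Assumed sensitivity-analysis wrapper covering the three analyses named above:
sensitivity(ma_obj, leave1out = TRUE, cumulative = TRUE, bootstrap = TRUE)
```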
The psychmeta package does more than just meta-analysis: We also have a suite of simulation functions that allow users to generate mock meta-analytic databases of correlations and d values (with psychometric and range-restriction artifacts) and we have an array of functions to compute composite values for effect sizes and artifacts. Our simulation functions allow users to specify parameter distributions in a variety of formats, so it’s easy to generate data meeting any specifications for use in meta-analytic simulation studies. These simulation functions provide both the statistics and the analytically determined parameter values for the effect sizes and artifacts from each simulated sample, which allows users to compare the accuracy of meta-analytic estimates against the actual parameters that define databases of simulated samples.
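For instance, generating a mock database of correlations might look like the following sketch (the function name “simulate_r_database” belongs to psychmeta, but the argument names and values shown are assumptions for illustration only):

```r
library(psychmeta)

# Simulate k = 10 samples with normally distributed sample sizes and a
# discrete distribution of true-score correlations (assumed argument names).
sim_data <- simulate_r_database(k = 10,
                                n_params = c(mean = 100, sd = 20),
                                rho_params = list(c(.2, .4, .6)))
```

Because the simulation output pairs each sample's observed statistics with its analytically determined parameter values, meta-analytic estimates can be checked directly against the parameters that generated the database.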
We’re excited to share psychmeta with the R community and we hope you’ll check out what it can do! We’re continuing to build new functionality for the package and we welcome feedback from users about how we can make psychmeta even better.
Happy synthesizing!
Jeff Dahlke & Brenton Wiernik
More information about the R-sig-meta-analysis mailing list