[R-meta] Distinguishing between the design of longitudinal studies
Reza Norouzian
rnorouzian at gmail.com
Tue Aug 31 20:45:50 CEST 2021
Dear Stefanou,
In meta-analysis, we often look for features that are already present
in the published studies. One consequence of this is that we can't,
for example, code for a potential internal-validity threat purely
based on a study's design while ignoring the rest of the study that is
already in front of us.
For example, imagine I tell you that your third design has the highest
potential for introducing a "fatigue" factor simply by the way it is
set up. As you may know, one of the direct consequences of such a
factor is attrition. So, you would rank studies with this type of
design low on your fatigue moderator.
But how valid would that rank-based moderator be if a careful
examination of those studies revealed that they actually have the
smallest amount of attrition among all the design types? (Of course,
this could mean many other things, but we would have to dig even
deeper to learn about them, e.g., were participants given any kind of
incentives?)
Things like the carryover effect that you mentioned also require
considering the nature of the dependent variable under study (e.g.,
some cognitive variables are difficult to remember, others are easy to
remember), how it is measured (e.g., a short test vs. an elaborate
test), and perhaps several other context- and phenomenon-specific
factors. So, even when looking into the details of a study, it is NOT
enough to say that study X had a higher potential for carryover just
because the time interval between its testing occasions was shorter
than in other studies; it is not, and should not be, that simple.
If you don't intend to get into the details of each study to learn
more about these potential threats, then I would suggest that you
stick with study-level moderators that could perhaps act as some sort
of control variable (e.g., the number of treatments, or simply
distinguishing between the design types).
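For instance, a minimal sketch with metafor could look like the
following (assuming a hypothetical data frame 'dat' with effect sizes
'yi', sampling variances 'vi', identifiers 'study' and 'es_id', and
moderators 'design' and 'n_treat'; these names are placeholders, not
anything from your data set):

library(metafor)

# effect sizes nested within studies; design type and number of
# treatments entered as study-level control moderators
res <- rma.mv(yi, vi,
              mods   = ~ factor(design) + n_treat,
              random = ~ 1 | study/es_id,
              data   = dat)
summary(res)

Whether such a model is sensible for your data is, of course, for you
to judge; it only illustrates where those control moderators would go.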
In summary, IMHO, your way of thinking is very useful when planning a
future study, to guard against threats to internal validity like the
ones I mentioned. But in meta-analysis, we are not planning any future
studies. We must delve deep into our studies if we intend to argue for
the presence or absence of such threats (much like when you review a
paper submitted for publication).
Best,
Reza
On Tue, Aug 31, 2021 at 12:24 PM Stefanou Revesz
<stefanourevesz at gmail.com> wrote:
>
> Hi James,
>
> Thank you very much. I fully understand that the details of how each
> design was implemented could lead to the formation of a bunch of
> different moderators.
>
> But we are wondering, *purely by the way the designs are set up*,
> what features (e.g., *in terms of threats to internal validity*,
> *ranking of a design's face quality*, etc.) could potentially
> distinguish between these three designs?
>
> As I'm writing this response, for example, can we perhaps rank these
> designs based on how much they each lend themselves to, say, the
> carry-over/practice effect or fatigue? Are there any other threats to
> internal validity or aspects of a design's face quality that could be
> coded for?
>
> Thank you,
> Stefanou
>
> R o x o o
> R o o o <-- control group
>
>
> R o x o x o o
> R o o o o <-- control group
>
>
> R o x x x o o
> R o o o <-- control group
>
> On Tue, Aug 31, 2021 at 11:51 AM James Pustejovsky <jepusto at gmail.com> wrote:
> >
> > Hi Stefanou,
> >
> > This is certainly an interesting question but I, for one, am at a loss as to what advice to give. What moderators to include in your model depends first and foremost on the research questions that you are investigating through your meta-analysis and, second, on the substantive and design-related features of the included studies. We on the listserv are not in a very good position to offer guidance here, since we don't have the context of or experience in your research area.
> >
> > All that said, if you have thoughts or ideas for how to proceed with your meta-analysis, you are of course certainly welcome to solicit feedback through the listserv.
> >
> > Kind Regards,
> > James
> >
> > On Mon, Aug 23, 2021 at 12:57 AM Stefanou Revesz <stefanourevesz at gmail.com> wrote:
> >>
> >> Dear List Members,
> >>
> >> We are meta-analyzing a number of longitudinal studies. But our
> >> studies have three general research designs (below).
> >>
> >> We are wondering, other than creating study-level moderators to
> >> distinguish between the designs or how many treatments each study uses
> >> etc., what *time-level* or *effect-size-level* moderators we should
> >> control for in our meta-analysis?
> >>
> >> First, we have studies that make an observation (o) prior to a
> >> treatment (x), and then, make follow-up observation(s):
> >>
> >> o x o o
> >> o o o <-- control group
> >>
> >> Second, we have studies that make an observation (o) prior to a
> >> treatment (x), then, make follow-up observation on that treatment, but
> >> then again introduce the treatment and make follow-up observation(s):
> >>
> >> o x o x o o
> >> o o o o <-- control group
> >>
> >> Third, we have studies that make an observation (o) prior to
> >> successive treatments (x), and then, make follow-up observation(s) on
> >> those treatments:
> >>
> >> o x x x o o
> >> o o o <-- control group
> >>
> >> Thank you!
> >> Stefanou
> >>