[R-meta] Dealing with effect size dependence with a small number of studies

Viechtbauer, Wolfgang (SP) wolfgang.viechtbauer at maastrichtuniversity.nl
Tue Feb 9 13:02:18 CET 2021


Indeed, the specific values used for the coding do not matter; what matters is that each row gets its own unique value.

As a check, one can also examine the log likelihoods of the two models:

full    <- rma.mv(ES_g, V, random = ~ 1 | IDpaper / IDstudy / IDsubsample / IDeffect, data=MA_dat)
reduced <- rma.mv(ES_g, V, random = ~ 1 | IDpaper / IDeffect, data=MA_dat) # without the zero-variance levels
fitstats(full, reduced) # log likelihood, deviance, AIC, BIC, AICc side by side

You should find that they are the same (the AIC, BIC, and AICc will differ, since the full model has more parameters).
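
For example, one could also pull out just the (restricted) log likelihoods of the two fits, or run a likelihood ratio test of the extra variance components; since those components are estimated to be zero, the test statistic should come out at essentially zero:

logLik(full)
logLik(reduced)
anova(full, reduced)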

Best,
Wolfgang

>-----Original Message-----
>From: Danka Puric [mailto:djaguard using gmail.com]
>Sent: Tuesday, 09 February, 2021 12:57
>To: Viechtbauer, Wolfgang (SP)
>Cc: R meta
>Subject: Re: [R-meta] Dealing with effect size dependence with a small number of
>studies
>
>Dear Wolfgang,
>
>thanks a lot!
>
>We used a slightly different scheme for coding:
>IDstudy IDeffect
>1       11
>1       12
>2       21
>3       31
>3       32
>3       33
>4       41
>4       42
>but it's still explicit coding, so it's good to know that the two models are
>identical. Nevertheless, we will report the full model.
>
>All the best,
>Danka
>
>On Tue, Feb 9, 2021 at 12:41 PM Viechtbauer, Wolfgang (SP)
><wolfgang.viechtbauer using maastrichtuniversity.nl> wrote:
>Dear Danka,
>
>Indeed, when a variance component in such a model is estimated to be zero, then
>this is the same as dropping this particular random effect from the model. Whether
>your two models below are really identical, though, depends on how you coded the ID
>variables. There is what could be called implicit and explicit coding of the
>levels. Implicit coding would for example be:
>
>IDstudy IDeffect
>1       1
>1       3
>2       1
>3       1
>3       3
>3       4
>4       1
>4       2
>
>and then using 'random = ~ 1 | IDstudy / IDeffect'.
>
>Explicit coding would be:
>
>IDstudy IDeffect
>1       1
>1       2
>2       3
>3       4
>3       5
>3       6
>4       7
>4       8
>
>Then one can still use 'random = ~ 1 | IDstudy / IDeffect' or, equivalently,
>'random = list(~ 1 | IDstudy, ~ 1 | IDeffect)'.
>
>If the IDstudy variance component is estimated to be 0, then this is identical to
>'random = ~ 1 | IDeffect' **only under explicit coding**. If implicit coding was
>used, then one would have to use, for example,
>'random = ~ 1 | interaction(IDstudy, IDeffect)'.
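>
>For example (a small sketch, with 'dat' and 'IDunique' as placeholder names), one
>could construct the combined ID first and then use it as the clustering variable:
>
>dat$IDunique <- interaction(dat$IDstudy, dat$IDeffect, drop=TRUE)
>res <- rma.mv(ES_g, V, random = ~ 1 | IDunique, data=dat)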
>
>So, in your case, if you used implicit coding (so that IDeffect jumps back to 1
>when IDsubsample changes), then the two would not be the same.
>
>As for what to report: I would also report the results from the full model.
>
>Best,
>Wolfgang
>
>>-----Original Message-----
>>From: R-sig-meta-analysis [mailto:r-sig-meta-analysis-bounces using r-project.org] On
>>Behalf Of Danka Puric
>>Sent: Tuesday, 09 February, 2021 11:15
>>To: R meta
>>Subject: Re: [R-meta] Dealing with effect size dependence with a small number of
>>studies
>>
>>Hi everyone,
>>
>>I have a (hopefully short) additional question. I just recently
>>remembered that we have another level of potential effect size
>>dependence in our data - the level of the journal article / paper.
>>Therefore, the theoretically most complete model would be:
>>es <- rma.mv(ES_g, V, random = ~ 1 | IDpaper / IDstudy / IDsubsample /
>>IDeffect, data=MA_dat)
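>>
>>(The estimated variance components end up in es$sigma2, one per level of the
>>nesting, so e.g. round(es$sigma2, 4) lists them to four decimal places.)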
>>
>>For this model I'm getting zero variance (to four decimal places) for
>>IDstudy and IDsubsample random effects, which makes it (from what I
>>can tell) numerically identical to this simplified model:
>>es <- rma.mv(ES_g, V, random = ~ 1 | IDpaper / IDeffect, data=MA_dat)
>>
>>I was planning on reporting the full model in the manuscript, noting
>>that the variances at certain levels are zero. When testing for the
>>effects of moderators I would also include all levels. Is this the
>>right way to go about this?
>>
>>Thanks in advance,
>>Danka

