# [R-meta] Question regarding Generalized Linear Mixed-effects Model for Meta-analysis

Akifumi Yanagisawa ayanagis at uwo.ca
Mon Jan 8 00:38:51 CET 2018

```
Thank you so much for explaining the calculation for me, Wolfgang. This is not something I could have come up with by myself. I will study statistics further so that I can understand the formula more clearly and deeply.

I cannot thank you enough for all the support. I have learned so much by asking questions and reading responses on this mailing list.

Thank you again and I hope you have a great day,
Best regards,
Aki

> On Jan 7, 2018, at 6:03 PM, Viechtbauer Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
>
> 1/(np(1 − p)) applies when you have a single proportion based on a binomial distribution, but this isn't what you have.
>
> In your case, you have p ~ N(P, sigma^2 / n) (asymptotically) and then I just use the delta method (https://en.wikipedia.org/wiki/Delta_method) to get ln(p/(1-p)) ~ N(ln(P/(1-P)), 1/(P(1-P))^2 * sigma^2 / n). Then substitute p for P and s^2 for sigma^2.
>
> 1/(np(1 − p)) is derived in the same way. For a 'binomial proportion', p ~ N(P, P(1-P)/n) asymptotically. Then ln(p/(1-p)) ~ N(ln(P/(1-P)), 1/(P(1-P))^2 * P(1-P)/n), which simplifies to 1/(nP(1-P)), and then again substitute p for P.
>
> Best,
> Wolfgang
>
> -----Original Message-----
> From: Akifumi Yanagisawa [mailto:ayanagis at uwo.ca]
> Sent: Sunday, 07 January, 2018 23:41
> To: Viechtbauer Wolfgang (SP)
> Cc: r-sig-meta-analysis at r-project.org
> Subject: Re: [R-meta] Question regarding Generalized Linear Mixed-effects Model for Meta-analysis
>
> Thank you very much for clarifying my understanding, Wolfgang.
>
> If you would not mind me asking one more question, could you let me know if there is any publication I can cite for how the sampling variance is calculated in this case: 1/(p*(1-p))^2 * s^2 / n? I was able to look up the ‘logit transformation’ and found the usual sampling variance formula for logit-transformed proportions in meta-analysis, 1/(np(1 − p)); however, I could not derive '1/(p*(1-p))^2 * s^2 / n' by myself.
>
> Thank you so much,
> Aki
>
>> On Jan 7, 2018, at 5:09 PM, Viechtbauer Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
>>
>> To be precise, ln(p/(1-p)) doesn't limit the range of the response variable; it actually maps p (which is restricted to 0 to 1) to -Inf to +Inf. It is then via the back-transformation that the final estimate or predicted values become restricted to the 0 to 1 range.
>>
>> As for articles/books: Just search for 'logit transformation'.
>>
>> Best,
>> Wolfgang
>>
>> -----Original Message-----
>> From: Akifumi Yanagisawa [mailto:ayanagis at uwo.ca]
>> Sent: Friday, 05 January, 2018 15:54
>> To: Viechtbauer Wolfgang (SP)
>> Cc: James Pustejovsky; Michael Dewey; r-sig-meta-analysis at r-project.org
>> Subject: Re: [R-meta] Question regarding Generalized Linear Mixed-effects Model for Meta-analysis
>>
>> Thank you for your comments, Wolfgang, Michael, and James.
>>
>> Thank you very much for suggesting ln(p/(1-p)) for the response variable, Wolfgang. It is really nice to hear that I can limit the range of the response variable to between 0 and 1 by using this function. I will try this approach with my data!
>>
>> I would like to learn more about this approach, so if you know of any, could you point me to research articles or statistics textbooks that explain how to use it?
>>
>> Thank you very much.
>> Best regards,
>> Aki
>>
>> On Jan 3, 2018, at 9:59 AM, Viechtbauer Wolfgang (SP) <wolfgang.viechtbauer at maastrichtuniversity.nl> wrote:
>> Hi James,
>>
>> I tried to be clever and derived it myself. But now that I have had a bit more time to think about this, I don't think it is applicable for these purposes. The equation gives an estimate of the sampling variance of p if we were to repeatedly observe the performance of the same n individuals; that is, under repeated observations, their p_i values would differ, but it assumes that the underlying true probabilities stay the same across repeated observations. The more appropriate sampling variance, however, is for repeated observations of n new individuals, whose true probabilities would change across repeated observations. The latter type of sampling variance is indeed estimated simply by s^2 / n.
>>
>> So, Aki, please ignore my previous mail. Well, except that you can still analyze ln(p/(1-p)). And the sampling variance of ln(p/(1-p)) would then be estimated with v = 1/(p*(1-p))^2 * s^2 / n.
>>
>> Best,
>> Wolfgang

```
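
The delta-method variance discussed in the thread, v = (1/(p*(1-p)))^2 * s^2 / n, can be checked numerically in R. The sketch below is illustrative only: the sample size, number of replicates, and the Beta distribution used to generate the individual proportions are made-up assumptions, not from the thread.

```r
# Check: the empirical variance of logit(p) across simulated replicates
# should be close to the average delta-method estimate
#   v = (1/(p*(1-p)))^2 * s^2 / n,
# where p is the mean and s^2 the variance of n individual proportions.
set.seed(42)
n    <- 200     # individuals per replicate (made up)
reps <- 10000   # number of simulated replicates (made up)

logit <- function(x) log(x / (1 - x))

est <- replicate(reps, {
  p_i <- rbeta(n, 8, 4)   # individual proportions (Beta assumed for illustration)
  p   <- mean(p_i)
  c(logit(p), (1 / (p * (1 - p)))^2 * var(p_i) / n)
})

empirical <- var(est[1, ])   # observed variance of logit(p) across replicates
delta     <- mean(est[2, ])  # average delta-method variance estimate
round(c(empirical = empirical, delta = delta), 5)
```

The two numbers should agree closely, which is what the delta method promises asymptotically.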
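
The 'binomial proportion' case mentioned by Wolfgang can be checked the same way; the simulation below (with made-up n and P) verifies that Var(logit(p)) is approximately 1/(nP(1-P)).

```r
# For p = x/n with x ~ Binomial(n, P), the delta method gives
# Var(logit(p)) ≈ 1/(n*P*(1-P)).
set.seed(1)
n <- 500       # sample size (made up)
P <- 0.3       # true proportion (made up)

phat      <- rbinom(10000, n, P) / n       # simulated sample proportions
empirical <- var(log(phat / (1 - phat)))   # observed variance of logit(phat)
delta     <- 1 / (n * P * (1 - P))         # delta-method approximation
round(c(empirical = empirical, delta = delta), 5)
```

For actual meta-analyses of binomial proportions, note that metafor's escalc(measure = "PLO") computes this logit transformation with vi = 1/xi + 1/(ni - xi), which is algebraically identical to 1/(n*p*(1-p)) with p = xi/ni.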