# [R-sig-ME] much different results for random effect vs simple lm.

Brent Pedersen bpederse at gmail.com
Mon Jun 20 23:48:31 CEST 2011

```
Hi, I have a model like this:

# for many y values/probes
formula = y ~ concordant + age.proband + age.sibling + sex.proband + sex.sibling

I fit this model and extract p-values like this:

model = lm(formula, data=df2)
s = summary(model)
p.concordant = s$coefficients["concordantT", "Pr(>|t|)"]

But a proband can have multiple siblings, so I want to account for the
family structure. To do that, I use:

library(lme4a)
# for many y values.
model = lmer(y ~ concordant + age.proband + age.other +
                 sex.proband + sex.other + (1 | family_id.proband),
             data=df)

degrees.of.freedom = length(unique(df$family_id.proband)) - 1

p.from.t = function(t) {
    2 * pt(-abs(t), df=degrees.of.freedom)
}
s = summary(model)
concordant.t.score = s$coefficients["concordantT", "t value"]
p.concordant = p.from.t(concordant.t.score)

Everything else between the two runs is the same. For the simple case, I
have 80 unique pairs (since I use each proband only once), and for the
latter I have 98 pairs. I'm running this test for millions of probes and
looking for regions where the concordant parameter is significant, and I
find very different regions between the two models--very little overlap.

Is this to be expected? Intuitively, I'd have figured that using
the random effect via lme4a would just give more power. Are my p-value
calculations correct?

Thanks for any feedback,
-Brent

```
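
The manual t-to-p conversion in the email can be checked in base R on simulated data. Everything below (the data frame, its column names, the simulated values) is illustrative, not the real dataset:

```r
# 1) Sanity check: for an ordinary lm fit, 2 * pt(-abs(t), df) reproduces
#    summary()'s "Pr(>|t|)" exactly when df is the fit's residual degrees
#    of freedom.
set.seed(1)
n <- 80
sim <- data.frame(                        # simulated stand-in for df2
    y           = rnorm(n),
    concordant  = factor(sample(c("F", "T"), n, replace = TRUE)),
    age.proband = rnorm(n, mean = 40, sd = 5)
)
fit <- lm(y ~ concordant + age.proband, data = sim)
s   <- summary(fit)

p.from.t <- function(t, dof) 2 * pt(-abs(t), df = dof)

t.val <- s$coefficients["concordantT", "t value"]
p.lm  <- s$coefficients["concordantT", "Pr(>|t|)"]
stopifnot(isTRUE(all.equal(p.lm, p.from.t(t.val, dof = fit$df.residual))))

# 2) The dof choice matters: for the same t statistic, a larger dof gives
#    a smaller p-value, so n.families - 1 versus a residual-style df count
#    shifts which probes cross a significance cutoff.
t.stat <- 2.0                             # arbitrary illustrative t value
stopifnot(p.from.t(t.stat, dof = 97) < p.from.t(t.stat, dof = 74))
```

For the mixed model itself the residual degrees of freedom are not well defined, which is exactly why lme4 does not print p-values for `lmer` fits; any single dof fed to `p.from.t` there is an approximation.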