[R-sig-ME] [R] understanding I() in lmer formula

Don Cohen don-r-help at isis.cs3-inc.com
Thu Jun 15 02:14:52 CEST 2017


Emmanuel Curis writes:
 > A tentative explanation for the correlated versus uncorrelated...
 > 
 > model Reaction = a + b * Reaction.
 => model Reaction = a + b * Day

 > you may suspect that this linear relation is different.  In other
 > words, each Subject has its own slope (b) and intercept (a).
 > 
 > Because Subject is sampled from a bigger population, taking a new
 > subject will give new values of a and b, so you will consider the a
 > and b values for the subjects you have as a (random) sample taken
 > from a population, that is, as random effects. You may encounter other
 > explanations for this depending on the books you have, but the result
 > is the same: a and b for each subject are realisations of a random
 > pair (A, B).

ok, I understood this part (even before)

 > Since you have two random variables, A and B, the question arises
 > immediately: are A and B independent (that is, knowing the value
 > taken by A gives no information about the value B will take, and
 > reciprocally) or not (knowing the value A took gives some insight
 > about B; for instance, knowing that A is small means B is more
 > likely to be negative, or whatever you can imagine).  In other
 > words, are the slope and the intercept completely unrelated, or
 > can you somehow predict the intercept when knowing the slope? For a
 > given Subject, if you know its intercept (A), can you expect it to
 > have "preferential" values of the slope (B) that you couldn't guess
 > otherwise?

I understand that the A,B values may be correlated (and did before)

 > In the usual mixed effects framework, (A, B) is assumed to be
 > Gaussian (normal).  So, dependence and correlation are equivalent;
 > this is a special property of Gaussian distributions.  So you can read

I didn't know that the term (in)"dependent" had any standard technical
meaning other than (lack of) correlation.

 > "uncorrelated" as "independant" (this is the (Day||Subject) model, that
 > can also be wroten as (1|Subject) + (0+Day|Subject) ) and
 > "correlated" as "dependant" (this the (Day|Subject) model, that can
 > also be wroten as (1+Day|Subject).
 > 
 > If you know covariance matrices, "uncorrelated" means that the
 > covariance matrix of the (A, B) vector is diagonal; "correlated", that
 > it has a non-zero off-diagonal term.
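
For concreteness, here is a minimal R sketch of the two specifications
(assuming lme4 is loaded and a data frame d with columns Reaction, Day
and Subject; those names are just the ones used in this thread, not from
any particular data set):

  library(lme4)

  ## correlated random intercept and slope: full 2x2 covariance matrix for (A, B)
  m_cor   <- lmer(Reaction ~ Day + (Day | Subject), data = d)    # same as (1 + Day | Subject)

  ## uncorrelated random intercept and slope: diagonal covariance matrix
  m_uncor <- lmer(Reaction ~ Day + (Day || Subject), data = d)   # same as (1 | Subject) + (0 + Day | Subject)

  VarCorr(m_cor)     # reports an estimated correlation between intercept and slope
  VarCorr(m_uncor)   # no correlation term: it is constrained to zero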

My question is: what does lmer do in each case?

If it estimates A,B for each subject based on the data for that subject,
then it can directly measure the correlation between the A's and B's.
Not only does it not need us to tell it, but I don't see why it should
do anything different if we claim (probably falsely) that they are 
independent.  Perhaps you're saying that for each subject it can estimate
A without estimating B and then separately estimate B without estimating A?
How would it do that?

I imagined that (1|Subject) is asking lmer to estimate intercept[i]
and (0+Day|Subject) is asking it to estimate slope[i].
That means trying to find values for intercept[i] and slope[i] that minimize
the sum, over all points for subject i, of the squared residuals
 (predicted - measured)
where predicted = intercept[i] + input * slope[i]
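
A literal version of that per-subject least-squares idea, as a sketch
(again assuming a data frame d with columns Reaction, Day and Subject,
which are just the names used in this thread):

  ## fit each subject separately by ordinary least squares:
  ## intercept[i] and slope[i] minimize the sum of squared residuals
  ## (predicted - measured) over subject i's points alone
  per_subject <- t(sapply(split(d, d$Subject),
                          function(di) coef(lm(Reaction ~ Day, data = di))))
  per_subject   # one row per subject: (Intercept) and Day columns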

In my experiments I create a few data points for each of a few subjects,
and with | I see the estimates I expect for each subject, the ones that
minimize the sum of squared residuals.
When I use || I see results that I so far cannot explain at all.
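
To compare the two cases, the subject-level estimates that lmer reports
can be pulled out like this (a sketch, using the hypothetical fits m_cor
and m_uncor from above; note that coef() combines the fixed effects with
each subject's predicted deviation):

  fixef(m_cor)            # population-level intercept and slope
  ranef(m_cor)$Subject    # each subject's deviation from those values
  coef(m_cor)$Subject     # their sum: per-subject intercept and slope
  coef(m_uncor)$Subject   # same, from the uncorrelated (||) fit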

If it will help I'll show you such an example (or many such).


