[R-sig-ME] Weird results after fitting a model with lme4
Michał Folwarczny
Mon Mar 21 17:53:39 CET 2022
Dear colleague(s),
Prof. Bolker suggested that I contact you regarding my question about the results of a model fitted with lme4. I recently ran a study in which each participant evaluated 18 different foods in terms of their calorie content. There were two conditions (variable levels: city, nature), and I wanted to test whether participants' calorie ratings differed across conditions. Using the lme4 package, I fit a mixed model (see lmer.png) with random intercepts for participants and foods. However, this produced exactly the same results (for the effect of condition) as fitting a linear model without random effects (see lm.png)! What is even stranger is that when I remove the random intercept for foods, the output is again the same (for the effect of condition; the standard errors for the intercepts do change). This does not make much sense to me, as I expected the standard error for the effect of the binary variable, condition, to change.
These are the two models that produce the same results:
lm(caloriesIndex ~ condition, data = df)
lmer(calories ~ condition + (1 | id) + (1 | food), data = dfl)
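For reference, here is a minimal sketch of how the two fits and the condition row of each coefficient table could be compared side by side (this assumes df already contains the caloriesIndex variable constructed below, and that dfl is the long-format data):

library(lme4)

fit_lm   <- lm(caloriesIndex ~ condition, data = df)
fit_lmer <- lmer(calories ~ condition + (1 | id) + (1 | food), data = dfl)

# coefficient tables; the condition row shows the (near-)identical
# estimate and standard error described above
coef(summary(fit_lm))
coef(summary(fit_lmer))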
As the dependent variable in the wide dataset used to run the simple linear model, I use the average of the 18 ratings, i.e.,
library(dplyr)

# per-participant mean of the 18 calorie ratings
df$caloriesIndex <- df %>%
  select(calories1:calories18) %>%
  rowMeans()
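In case the relationship between the two files is unclear: something along these lines would reshape the wide data into the long format used for the mixed models. This is only a sketch of the idea, not the exact code I used, and the column names (id, condition, calories1:calories18) are taken from the description above:

library(dplyr)
library(tidyr)

# one row per participant x food pairing (sketch only)
dfl <- df %>%
  pivot_longer(calories1:calories18,
               names_to = "food",
               values_to = "calories")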
Do you have any idea why this could have happened? In case you would like to have a look at the data, I have attached them in wide format (df.csv) and long format (dfl.csv).
Thanks in advance for suggestions!
Best regards,
Michał Folwarczny,
Reykjavik University,
PS: When fitting an alternative model, that is, lmer(calories ~ condition + (1 | food), data = dfl), I get the results I expected, with a lower standard error for the effect of condition. To my understanding, this is what one hopes for and expects from a mixed model compared with a simple linear regression. Is it possible that the random intercepts for participants and for foods (i.e., the items in the study) cancel each other out? If that is the case, would the alternative model with random intercepts for foods only be a more appropriate approach to these data?
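A quick sketch of the comparison I mean, continuing from the fits above (fit_food is just a name I use here):

fit_food <- lmer(calories ~ condition + (1 | food), data = dfl)

# per the results described above, the standard error for condition
# in this model is lower than in the two fits compared earlier
coef(summary(fit_food))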
[Attachment: lm.png (image/png, 79493 bytes) <https://stat.ethz.ch/pipermail/r-sig-mixed-models/attachments/20220321/f3617ca6/attachment-0002.png>]
[Attachment: lmer.png (image/png, 120892 bytes) <https://stat.ethz.ch/pipermail/r-sig-mixed-models/attachments/20220321/f3617ca6/attachment-0003.png>]