[R-sig-ME] Nested model variance/parameter value

John Maindonald john.maindonald at anu.edu.au
Sat Dec 11 02:03:31 CET 2021


My guess is that you should not be treating answers from different
questions as independent.  They are nested within individuals, and
a main effect is not sufficient to account for systematic differences.
There are shades of the story I heard of an experimenter whose blocks
were made up of plots that moved successively away from the river.
What do you get if you analyse a summary measure for the questionnaire
or individual questions?
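
For concreteness, a minimal sketch of that "summary measure" idea in R, assuming the answers sit in a data frame dat with one row per answer (the object and column names here are guesses, not your actual data):

library(lme4)

## Collapse to one summary value (here the mean of hon) per interviewee:
summ <- aggregate(hon ~ ID + intv + sex + age + schooling, data = dat, FUN = mean)

## Model the per-person summary; only the interviewer remains as a grouping factor:
fit_summ <- lmer(hon ~ sex * age + schooling + (1 | intv), data = summ)
summary(fit_summ)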


John Maindonald             email: john.maindonald at anu.edu.au

On 11/12/2021, at 00:29, N o s t a l g i a <kenjiro at shoin.ac.jp> wrote:

I am a novice in mixed models, and I am trying to fit a model to survey data with an interval-scale dependent variable (hon), four fixed-effect variables (sex, age, schooling, and question), and two random effects. The random effects are interviewer (intv) and interviewee (ID), which are nested. Sex, age, and question were found to interact.

A major question I am asking here is whether the interviewer effect is significant or not, so I tried the following random-intercept-only models, with model 1 using the nested random effects, model 2 only the interviewer effect, and model 3 only the interviewee effect:

library(lme4)

model1 <- lmer(hon ~ sex * age * Question + schooling + (1|intv/ID))
model2 <- lmer(hon ~ sex * age * Question + schooling + (1|intv))
model3 <- lmer(hon ~ sex * age * Question + schooling + (1|ID))

The output from each model says the following:

model 1:
Random effects:
Groups   Name        Variance Std.Dev.
ID:intv  (Intercept) 0.03988  0.1997
intv     (Intercept) 0.00000  0.0000
Residual             0.16847  0.4105
Number of obs: 3283, groups:  ID:intv, 305; intv, 28

model 2:
Random effects:
Groups   Name        Variance Std.Dev.
intv     (Intercept) 0.002348 0.04846
Residual             0.205998 0.45387
Number of obs: 3283, groups:  intv, 28

model 3:
Random effects:
Groups   Name        Variance Std.Dev.
ID       (Intercept) 0.04107  0.2027
Residual             0.16894  0.4110
Number of obs: 3294, groups:  ID, 306
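
As an aside on the model 1 output above: a variance component estimated as exactly zero means the fit is singular (the estimate sits on the boundary of the parameter space), which lme4 can flag directly. A minimal sketch, assuming the fitted objects above:

isSingular(model1)   # TRUE when any variance component is estimated at the boundary (zero)
VarCorr(model1)      # the estimated variance components, including the zero for intv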

The respective log-likelihood and AIC values are:

model1 AIC = 4249.232  LL = -2076.616 (df=48)
model2 AIC = 4539.69   LL = -2222.845 (df=47)
model3 AIC = 4274.99   LL = -2090.495 (df=47)

Since anova() gave the error "models were not all fitted to the same size of dataset", I compared the AICs instead and concluded that model 1 is the best of the three.
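
One way around that anova() error, as a sketch (the data-frame name dat is an assumption), is to refit all three models to the same complete cases, with ML rather than REML, so the log-likelihoods and AICs are directly comparable:

## Keep only rows complete on every variable used, so all models see the same observations:
vars   <- c("hon", "sex", "age", "Question", "schooling", "intv", "ID")
dat_cc <- dat[complete.cases(dat[, vars]), ]

## Refit with REML = FALSE so likelihood-based comparisons are valid:
m1 <- lmer(hon ~ sex * age * Question + schooling + (1 | intv/ID), data = dat_cc, REML = FALSE)
m2 <- lmer(hon ~ sex * age * Question + schooling + (1 | intv),    data = dat_cc, REML = FALSE)
m3 <- lmer(hon ~ sex * age * Question + schooling + (1 | ID),      data = dat_cc, REML = FALSE)

anova(m1, m2, m3)   # prints AIC, BIC, logLik and likelihood-ratio tests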

Here I have three questions:

1. Why is the variance for the interviewer effect (intv) zero? Is it necessarily so because of the nesting, or is it simply because there is no interviewer effect? (One way to test this formally is sketched after these questions.)

2. If the intv variance is really zero, why does model 3 not give a better AIC?

3. Am I allowed to compare the three models with AIC as I did above, or should I use the log-likelihood?
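
Regarding question 1, a hedged sketch of a formal test of the interviewer variance, assuming the complete-case data frame dat_cc from the sketch above and the lmerTest package:

library(lmerTest)   # ranova() needs a model fitted with lmerTest::lmer
m1_t <- lmer(hon ~ sex * age * Question + schooling + (1 | intv/ID), data = dat_cc)
ranova(m1_t)        # likelihood-ratio test for dropping each random term, including (1 | intv)

Note that the likelihood-ratio p-value is conservative here, because the null value (variance = 0) lies on the boundary of the parameter space.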

Thanks in advance,

Kenjiro Matsuda

_______________________________________________
R-sig-mixed-models at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models




