[R-sig-ME] Nested model variance/parameter value
N o s t a l g i a
kenj|ro @end|ng |rom @ho|n@@c@jp
Mon Dec 13 07:38:22 CET 2021
Thanks for pointing out my mistakes. Yes, I should have chosen model 1,
the one with the lowest AIC of the three, and I should not have compared
the three models fitted to different datasets in the first place.
I went back to the original dataset and manually deleted all the cases
that include NAs (somehow "na.action = na.exclude, data = third2" did
not work). Now anova() works fine, and the best model turned out to be
(anova-wise as well as AIC-wise) the one with only ID as a random
effect. Everything seems fine -- except that the variance for intv
remained zero in the model that includes both intv and ID as random
effects. This I probably need to accept as it is: there is absolutely
no interviewer effect.
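For the archive: a variance component estimated at exactly zero is a boundary estimate, not proof of "absolutely no" effect -- the estimator is truncated at zero whenever the between-interviewer variation is no larger than expected by chance. A minimal base-R sketch of the analogous method-of-moments estimator on toy data (none of the thread's actual data is used; the balanced design and numbers are invented for illustration):

```r
set.seed(42)

## Toy data: 3 "interviewers", 10 responses each, with NO true
## interviewer effect built in.
g <- rep(1:3, each = 10)
y <- rnorm(30)

## One-way random-effects ANOVA estimator of the interviewer variance:
## sigma2_g = max(0, (MSB - MSW) / n_per_group).
msb <- sum(10 * (tapply(y, g, mean) - mean(y))^2) / (3 - 1)
msw <- sum((y - ave(y, g))^2) / (30 - 3)
sigma2_g <- max(0, (msb - msw) / 10)

## When MSB <= MSW, the raw estimate is negative and gets truncated
## to the zero boundary -- the same phenomenon as lmer reporting a
## zero variance for intv (lme4 users can check with isSingular()).
sigma2_g
```

The takeaway is that a zero estimate says the data carry no detectable interviewer variation, which is weaker than the claim that none exists.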
On 2021/12/11 19:55, Karl Ove Hufthammer wrote:
> N o s t a l g i a wrote on 10.12.2021 12:29:
>> Since I got an error message saying "models were not all fitted to
>> the same size of dataset" while running anova(), I compared the AICs
>> and concluded that model2 is the best model of the three.
> No, model 2 has the *highest* AIC, and based on AIC, it would be the
> *worst* model. The best model would be the one with the lowest AIC.
> (Also, it doesn’t seem realistic to assume no random effect for the
> interviewees, so I would also dismiss model 2 on *theoretical*
> grounds.)
> But in this case, comparing the AICs (or log-likelihoods) is actually
> *not* valid, as the models were not fitted to the same dataset
> (something which anova() warns you about). In model 3, you have 3294
> observations, but in models 1 and 2, you only have 3283 observations.
> The only difference between the models is that model 3 doesn’t include
> the ‘intv’ variable. In other words, for 11 responses, you don’t know
> who the interviewer was.
> So you have to refit the models to the *same* dataset, e.g., by
> removing the observations where ‘is.na(intv)’ before fitting the models.
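Karl's suggested fix can be sketched in base R as follows. The `third2` data frame here is a small invented stand-in (the thread's real data and response variable are not shown); the point is that per-model `na.action` cannot equalise the sample sizes, because a model whose formula omits `intv` never drops rows that are NA only in `intv`:

```r
## Toy stand-in for the thread's 'third2' data frame; two rows have an
## unknown interviewer, mirroring the 11 NA rows in the real data.
set.seed(1)
third2 <- data.frame(
  y    = rnorm(8),
  intv = c("a", "a", "b", NA, "b", "c", NA, "c"),
  ID   = rep(1:4, 2)
)

## Subset ONCE, before fitting, so every model sees the same rows.
third2c <- subset(third2, !is.na(intv))
nrow(third2c)  # 6: the two NA-interviewer rows are gone

## With lme4 installed, all three models can then be refitted on
## 'third2c' and compared validly; the formulas below are hypothetical:
## model1 <- lme4::lmer(y ~ 1 + (1 | intv) + (1 | ID), data = third2c)
## model3 <- lme4::lmer(y ~ 1 + (1 | ID), data = third2c)
## anova(model1, model3)
```

Subsetting before fitting (rather than relying on `na.action`) is what guarantees that `anova()` compares likelihoods computed on identical observations.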