[R-sig-ME] Unacceptably high autocorrelation in MCMCglmm

Stuart Luppescu slu at ccsr.uchicago.edu
Mon Mar 19 18:25:59 CET 2012


On Sat, 2012-03-17 at 10:29 +0000, Jarrod Hadfield wrote:
> Hi,
> 
> It looks like the probit has underflowed/overflowed - you can check  
> this by saving the latent variables and looking to see whether the  
> range of the absolute values exceeds 7 (See Section 8.08 of  
> CourseNotes).
> 
> This can happen with weak priors and (near) complete separation and/or  
> with weak priors for effects that are heavily confounded.
> 
> I'm not sure how to proceed with underflow/overflow problems  
> generally.  I could terminate the procedure, or I could truncate the  
> latent variables at their overflow/underflow points. The latter is  
> used by some WinBUGS users, but then WinBUGS handles the fact that the  
> response is from a truncated normal not a normal - something which  
> would be hard to program in MCMCglmm. Any thoughts would be useful.
> 
> Cheers,
> 
> Jarrod
> 
> 
> 
> Quoting Stuart Luppescu <slu at ccsr.uchicago.edu> on Fri, 16 Mar 2012  
> 17:38:15 -0500:
> 
> > Hello, I'm running this ordered category outcome model:
> >
> > glme5.very.len <- MCMCglmm(very.len.summative.o ~ 1,
> >                    prior = list(R = list(V = 1, fix = 1),
> >                                 G = list(G1 = list(V = 1, nu = 0),
> >                                          G2 = list(V = 1, nu = 0),
> >                                          G3 = list(V = 1, nu = 0),
> >                                          G4 = list(V = 1, nu = 0))),
> >                    random = ~ emplid + deptid + grade.f + subject.f,
> >                    family = "ordinal",
> >                    nitt = 300000,
> >                    data = summative.ratings.prin.yr1.full)
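
For reference, the overflow check described in the quoted message might look
roughly like this (a sketch only: it assumes that adding pl=TRUE to the call
stores the latent variables in the Liab element of the fitted object):

library(MCMCglmm)

glme5.very.len <- MCMCglmm(very.len.summative.o ~ 1,
                   prior = list(R = list(V = 1, fix = 1),
                                G = list(G1 = list(V = 1, nu = 0),
                                         G2 = list(V = 1, nu = 0),
                                         G3 = list(V = 1, nu = 0),
                                         G4 = list(V = 1, nu = 0))),
                   random = ~ emplid + deptid + grade.f + subject.f,
                   family = "ordinal",
                   nitt = 300000,
                   pl = TRUE,   # save the latent variables in $Liab
                   data = summative.ratings.prin.yr1.full)

range(abs(glme5.very.len$Liab))  # absolute values at or beyond about 7
                                 # suggest probit underflow/overflow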

Hi Jarrod, I think I've figured out why this is not working. I hope you
or someone can suggest a fix.

I am analyzing ratings data from observations of teacher performance.
Teachers are rated on more than one occasion on a 1-4 scale on 10
components. The objective is to calculate the ICC as a measure of
inter-rater reliability (the percentage of total variance attributable to
differences in teacher performance, i.e., the emplid variance divided by
the total variance). This analysis worked perfectly fine using MCMCglmm
with the 10 components as fixed effects.
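
With multiple ratings per teacher, that ICC can be computed directly from
the posterior samples of the variance components. A minimal sketch, assuming
the fitted object is called glme5 (a hypothetical name) and the teacher
variance column in the VCV samples is named "emplid":

icc.samples <- as.mcmc(glme5$VCV[, "emplid"] / rowSums(glme5$VCV))
# rowSums(glme5$VCV) includes the residual ("units") column in the total
posterior.mode(icc.samples)   # point estimate of the ICC
HPDinterval(icc.samples)      # 95% credible interval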

What I'm doing now (which is NOT working) is calculating a single
summative rating per teacher by combining all the component ratings a
teacher received in a year. That leaves only one datum per teacher per
year: no separate components and no repeated observations. So, including
the teacher ID (emplid) as a random effect screws things up, because with
only one datum per teacher there is no within-teacher variance.
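
A quick way to confirm that data structure (just a sketch, using the
data-frame name from the quoted call):

table(table(summative.ratings.prin.yr1.full$emplid))
# if every emplid appears exactly once, there is no within-teacher
# replication, so the emplid variance is confounded with the residual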

Do you have any idea how to get around this problem?

Thank you very much for your help.

-- 
Stuart Luppescu -=- slu .at. ccsr.uchicago.edu        
University of Chicago -=- CCSR 
Father of 才文 and 智奈美 -=- Kernel 3.2.1-gentoo-r2
Please do think hard before you tell other people
what they 'should' do for you.
   -- Brian D. Ripley, R-devel (January 2006)