[R-sig-ME] lme4

Ben Bolker  bbolker at gmail.com
Wed Jul 3 01:55:05 CEST 2019


  Please keep r-sig-mixed-models in the Cc: list ...
  Brief comments below.

On 2019-07-02 6:33 p.m., S.D. Silver wrote:
> On 2019-07-02 04:30, Ben Bolker wrote:
>> I'm not sure I understand all the details of your modeling framework,
>> but in general it's dangerous to compare REML for models with differing
>> fixed effects (which would probably? also include models with different
>> types of differencing).  It might help if you provided some more
>> background (what is REMLP,  is 'lmermod' a function or a package, what
>> is LDV, ... ?)
>>
>>  cheers
>>    Ben Bolker
>>
>> On 2019-07-01 6:34 p.m., S.D. Silver wrote:
>>> I am working with an R code procedure for an ARFIMA multilevel model
>>> that estimates a linear mixed model fit by REMLP['lmermod']. I have
>>> now been asked to
>>> compare the model's results with alternatives that include ARFIMA-LDV.
>>> The only output diagnostics that the code provides in addition to
>>> parameter estimates is shown below :
>>>
>>>     " REML criterion at convergence: 1694929
>>>
>>>     Scaled residuals:
>>>         Min      1Q  Median      3Q     Max
>>>     -3.4407 -0.7361  0.0482  0.7791  2.9853
>>>
>>>     Random effects:
>>>      Groups   Name        Variance Std.Dev.
>>>      time     (Intercept)   42.5    6.519
>>>      Residual             1229.0   35.057
>>>     Number of obs: 170217, groups:  time, 363 "
>>>
>>> I understand that REML is most directly about estimating variance
>>> components, but is it meaningful to consider it as a measure of fit
>>> in comparing nested models? Here the alternatives are LDV and an MLM
>>> that is not fractionally differenced.
>>>
>>> Given the difference in estimation methodology, I do not think it is
>>> feasible to compare 'lmermod' with alternatives in OLS. Do any
>>> comparable model variants for comparison in the estimation procedure
>>> of lme4 come to mind?
>>>
>>> Would be grateful for any observations that you could provide.
>>>
>>> Steven
>>>
>>> _______________________________________________
>>> R-sig-mixed-models using r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
>>
> 
> Ben,
> ARFIMA-MLM is a multi-level Arfima model. You can review the package at
> https://pwkraft.github.io/resources/articles/ArfimaMLM-documentation.pdf.
> In the message that I posted, REMLP should not have had the P; that is
> a typographical error. lmermod is basically the fitted-model class for
> a linear mixed model in R. LDV is a model with a lagged dependent
> variable. Our
> challenge here is to comply with a reviewer request to compare the
> results of our empirical estimation in ARFIMA-MLM to alternatives. I am
> more familiar with models in which we could use mean square error or its
> square root as a measure of fit and have post-estimation diagnostics.
> The output that ARFIMA-MLM provides only includes REML as summary
> statistics. Among alternative estimation procedures, MLM and MLM-LDV do
> not use fractional differencing. The ARFIMA results used lme4 and should
> generate whatever options are available in the R estimation procedure
> for lmer. I understand that REML is essentially a summary statistic that
> is most suitable for estimating the variance component in a random
> effects model.
> 
> My inquiry is really in two parts. First, is there a meaningful
> comparison across estimation models with an implementation in lmer?
> Second, what would be a suitable exercise with REML in variance
> decomposition in lmer?
> 

   The topic of model comparison/goodness-of-fit metrics for multilevel
models is a bit of a quagmire. The (googlable) GLMM FAQ gives some
suggestions for computing R^2 values for multilevel models, but there's
no simple one-size-fits-all answer -- and it will probably get even more
delicate if you try to extend it to a wider range of model structures.
My advice would be that if there is some concrete goal you
are trying to achieve (e.g. one-step-ahead forecasting) and you can come
up with a simple way to quantify your success, that will be the way to
compare the different approaches ...
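  For what it's worth, a minimal sketch of that kind of comparison in
lme4: refit the competing fixed-effect specifications with ML
(REML = FALSE) so that likelihood-based criteria are comparable, and
optionally compute the Nakagawa-Schielzeth R^2 values mentioned in the
GLMM FAQ. This uses the built-in sleepstudy data as a stand-in for your
data, and assumes the MuMIn package is installed for r.squaredGLMM():

```r
library(lme4)

## Competing fixed-effect specifications, fitted by ML (REML = FALSE):
## REML criteria are NOT comparable across different fixed effects,
## but ML-based AIC/BIC/likelihood-ratio comparisons are.
m0 <- lmer(Reaction ~ 1    + (1 | Subject), sleepstudy, REML = FALSE)
m1 <- lmer(Reaction ~ Days + (1 | Subject), sleepstudy, REML = FALSE)

anova(m0, m1)   # likelihood-ratio test plus AIC/BIC for the ML fits
AIC(m0, m1)

## Marginal/conditional R^2 (Nakagawa & Schielzeth), per the GLMM FAQ;
## only run if MuMIn is available.
if (requireNamespace("MuMIn", quietly = TRUE)) {
  print(MuMIn::r.squaredGLMM(m1))
}

## A crude predictive summary: in-sample RMSE from conditional residuals.
sqrt(mean(residuals(m1)^2))
```

An out-of-sample version of that last line (e.g. one-step-ahead
forecast RMSE on held-out time points) would be the more honest way to
compare models as different as ARFIMA-MLM and MLM-LDV.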

  cheers
    Ben Bolker


