[R-sig-ME] spaMM::fitme() - a glmm for longitudinal data that accounts for spatial autocorrelation

Sarah Chisholm @ch|@023 @end|ng |rom uott@w@@c@
Tue Jul 14 16:55:12 CEST 2020


Hi Mollie, thank you for your suggestion. glmmTMB seems like a good option
for my needs as well. In your sample code above, can you explain what the
term 'group' does in matern(pos+0|group)? Does this allow the spatial
correlation structure to be applied to specific groupings in the data (in
my case, for example, by 'continent')?
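For concreteness, here is how I currently understand the syntax, based on the glmmTMB covariance-structure vignette (column names here are placeholders, not my actual data; glmmTMB's Matern structure is spelled mat()):

```r
library(glmmTMB)

## Coordinates must be encoded as a numFactor for spatial structures:
dat$pos <- numFactor(dat$x, dat$y)

## A single spatial field shared by all observations:
## 'group' is a dummy factor with one level.
dat$group <- factor(rep(1, nrow(dat)))
m1 <- glmmTMB(y ~ elevation + mat(pos + 0 | group), data = dat)

## My question: does replacing the dummy with a real factor fit an
## independent realisation of the same Matern field per level, e.g.
m2 <- glmmTMB(y ~ elevation + mat(pos + 0 | continent), data = dat)
```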

Francois, thank you for this very clear answer. This is a very convenient
feature of the function! May I ask you a couple of other questions about
some issues that I've had with spaMM::fitme()?

In particular, when I try fitting this model to a large data set (~14 000
rows x 7 columns, ~2 MB), the model runs for so long that I've had to
terminate the computation. I've tried applying the suggestions in the user
guide, i.e. setting init=list(lambda=0.1) and init=list(lambda=NaN).
Setting init=list(lambda=0.1) returned an error suggesting a lack of
memory, while the fit with init=list(lambda=NaN) again ran for an
extended period without completing. Is there anything else I can do to
speed up the fit of these models?

I've had a similar problem with an even larger data set (~185 000 rows x 8
columns, ~21 MB), where the model returns this error immediately:

Error in ZA %*% xmatrix : Cholmod error 'problem too large' at file ../Core/
cholmod_dense.c, line 105

I've tried running this model on two devices, both running 64-bit
Windows 10, one with 32 GB of RAM and the other with 64 GB, and I get the
same error from both. Is there a way that fitme() can accommodate data
sets of this size?
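For what it's worth, a back-of-envelope calculation suggests that if a dense n x n correlation matrix were formed at this size (which the CHOLMOD 'problem too large' error hints at), it could not fit in RAM on either machine:

```r
## Memory needed for a dense n x n matrix of 8-byte doubles
## at n = 185 000 observations:
n <- 185000
n^2 * 8 / 2^30   # ~255 GiB, far beyond 32-64 GB of RAM
```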

Thank you,

Sarah

