[R-sig-ME] How to use all the cores while running glmer on a piecewise exponential survival with
Ben Bolker
bbolker at gmail.com
Thu Aug 23 21:34:50 CEST 2018
I'd love to see what anyone else here has to say, but here are some thoughts.
1. There's no easy, pre-packaged way that I know of to scale things in
this way. What you can do will depend enormously on how much hacking
you're willing & able to do.
2. What Harold Doran said: The deepest level at which one *might*
multi-thread/core/parallelize the fitting process would be at the
level of the linear algebra. lme4 uses some pretty fancy linear
algebra, so I don't know if it will help, but it would definitely be
worth experimenting a little bit with Microsoft "Open R" (or whatever
it's called) and with the various optimized BLAS options (Dirk
Eddelbuettel had an article about this a while back). Might not help,
but if it does it's low-hanging fruit.
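A quick way to act on point 2 is to check which BLAS/LAPACK your R is linked against and time a dense kernel of the kind an optimized BLAS accelerates. This is a generic diagnostic sketch, not lme4-specific (lme4's sparse Cholesky work may or may not benefit):

```r
# Which BLAS/LAPACK libraries is this R build using?
extSoftVersion()["BLAS"]   # BLAS version string
La_library()               # path to the LAPACK library (R >= 3.4)

# Rough benchmark: a dense crossproduct is the kind of kernel
# where an optimized, multi-threaded BLAS shows large speedups.
set.seed(1)
m <- matrix(rnorm(1e6), 1000, 1000)
system.time(crossprod(m))
```

Comparing that timing under reference BLAS versus OpenBLAS/MKL tells you quickly whether the low-hanging fruit is worth picking.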
3. Depending on your random-effects structure (i.e. if your problem
decomposes into a moderate number of *conditionally* independent
chunks of data - that is, not a fully or strongly crossed design), it
wouldn't be too hard to write a top-level map-reduce-like operation
that, for a given set of parameters (random-effect var/cov and
fixed-effect parameters), called separate workers to compute the
deviance for each chunk of data, summed the results to get the total
deviance for that set of parameters,
then took another optimization step. I would love to see someone
implement something like this!
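The map-reduce idea in point 3 can be sketched as below. Everything here is hypothetical scaffolding: `chunks` stands for a list of conditionally independent data subsets, and `chunk_deviance()` stands in for whatever evaluates one chunk's deviance at a parameter vector `theta` (e.g. something built from lme4's deviance-function machinery; lme4 does not expose this workflow directly):

```r
library(parallel)

# "Map": evaluate each chunk's deviance on its own core.
# "Reduce": conditional independence means the chunk deviances just add.
# Note: mclapply() forks, so set cores = 1L on Windows.
total_deviance <- function(theta, chunks, chunk_deviance,
                           cores = max(1L, detectCores() - 1L)) {
  sum(unlist(mclapply(chunks, chunk_deviance, theta = theta,
                      mc.cores = cores)))
}

## A generic optimizer then minimizes the summed deviance, e.g.:
## fit <- optim(theta0, total_deviance,
##              chunks = chunks, chunk_deviance = chunk_deviance)
```

The optimizer itself stays serial; only the expensive per-chunk deviance evaluations are farmed out, which is why this only pays off when the random-effects structure really does decompose.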
4. It might be worth experimenting with Doug Bates's MixedModels.jl
framework from Julia.
On Thu, Aug 23, 2018 at 3:18 PM Adam Mills-Campisi
<adammillscampisi at gmail.com> wrote:
>
> I am estimating a piecewise exponential, mixed-effects, survival model with
> recurrent events. Each individual in the dataset gets an individual
> intercept (we're using a PWP approach). Our full dataset has 10 million
> individuals, with 180 million events. I am not sure that there is any
> framework which can accommodate data at that size, so we are going to
> sample. Our final sample size largely depends on how quickly we can
> estimate the model, which brings me to my question: Is there a way to
> multi-thread/core the model? I tried to find some kind of instruction on
> the web, and the best lead I could find was a reference to this mailing list.
> Any help would be greatly appreciated.
>
>
> _______________________________________________
> R-sig-mixed-models at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models