[R-sig-ME] Large mixed & crossed-effect model looking at educational spending on crime rates with error messages

Ades, James j@de@ @end|ng |rom uc@d@edu
Tue Oct 1 08:25:21 CEST 2019


Re default optimizer: haha...yes, that makes sense:)

Re multicollinearity with race: it’s not crucial that I include all races in the same model…I could run each race separately in different models and report those effects.

Re year-spending: in the strict sense, they're roughly .35 correlated. In the non-strict sense, it's actually the other way around: including year in the model changes the effect of spending (which is really what I'm trying to measure). I see what you're saying about the actual source of variation, but couldn't it be the case that neither variable is merely a vague proxy for the other, and that both are genuine sources of variation? In that case, aren't there ways to parse the shared covariance so that you gain a better understanding of each variable's contribution?

Also, just want to make sure: if you're missing the dependent observation for a given condition, you have to remove that entire row, correct? The mixed model wouldn't be able to work around that? This is what I learned in stats class, but if I'm doing it wrong, I think this might also be affecting the correlation.

Thanks, Phillip!

James



On Sep 29, 2019, at 3:06 AM, Phillip Alday <phillip.alday using mpi.nl> wrote:

The default optimizer in lme4 is the default for a reason. :) While
there's no free lunch or single best optimizer for every situation, the
default was chosen based on our experience of which optimizer performs
well across a wide range of models and datasets.

Multicollinearity in mixed-effects models works pretty much exactly the
same way as it does in fixed-effects (i.e. regular/not mixed) regression,
and so does the way it's addressed (converting to a PC basis,
residualization, etc.). In your case, you could omit one race and then
the remaining races will be linearly independent, albeit still
correlated with one another. This correlation isn't great and will
inflate your standard errors, but at least your design matrix won't be
rank deficient.
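A quick base-R sketch of the rank-deficiency point, with invented "percent race" columns (the names and numbers are purely illustrative, not your data):

```r
# Four percentage columns that sum to 100 are linearly dependent:
# together with the intercept, the full set makes the design matrix
# rank deficient; dropping one column restores full rank.
set.seed(1)
n  <- 100
p1 <- runif(n, 10, 40)
p2 <- runif(n, 10, 40)
p3 <- runif(n, 5, 20)
p4 <- 100 - p1 - p2 - p3            # the fourth is determined by the others
y  <- 0.5 * p1 - 0.2 * p2 + rnorm(n)

X_full <- cbind(1, p1, p2, p3, p4)
qr(X_full)$rank                      # 4, not 5: rank deficient

coef(lm(y ~ p1 + p2 + p3 + p4))      # lm() silently drops p4 (NA coefficient)
coef(lm(y ~ p1 + p2 + p3))           # omit one column: everything estimable
```

lmer() builds its fixed-effects design matrix the same way, so the same fix applies there.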

Regarding year-spending: Are you using 'correlated' in a strict sense,
e.g. that spending tends to go up year-by-year? Or do you just mean that
including spending in the model changes the effect of year? (I think the
latter weakly implies the former, but it's a different perspective.)
Either way, the changing coefficient isn't terribly surprising. In
'human' terms: if you don't have the option of attributing something to
the actual source of variation, but you do have something that is
vaguely related to it, then you will attribute it to that. However, if
you're ever given the chance to attribute it to the actual source, you
will do that and your attribution to the vaguely-related thing will change.

Best,
Phillip

On 29/09/2019 03:20, Ades, James wrote:
Thanks, Ben and Phillip!

So I think I was conflating a continuous dependent variable that could
then be broken up into different categories with dummy variables (for
instance, if I wanted to look at how wealth affects the distribution of
race in an area, I could create a model like lmer(total people ~ race +
per capita income + …)) with creating something similar from a fixed
factor (which I guess can’t be done).

I did try running the variables independently, which worked; I just
thought there was a way to combine races, and then, per that logic,
thought that since race variables repeated within place (city/town), I
could nest them within PLACE_ID. But I realized that percent race as a
fixed effect (as an output) didn’t really make sense…hence my confusion.
So I guess somewhere in there my logic was afoul.

Regarding Nelder-Mead: that’s odd...I recall reading somewhere that it
was actually quicker and more likely to converge. Good to know. I read
through the lme4 package details here:
https://cran.r-project.org/web/packages/lme4/lme4.pdf Would you
recommend optimx, then? Or nloptr/bobyqa (which I think is the default)?

Regarding multicollinearity: is there an article you could send me on
dealing with multicollinearity in mixed-effect models? I’ve perused the
internet, but haven’t been able to find a good how-to on dealing with
it in a way that lets you better parse the effects of different
variables (I know that one can use PCA, but that fundamentally alters
the process, and isn’t there a way of averaging variables such that you
minimize collinearity?).
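For what it's worth, two of the standard options can be sketched in a few lines of base R (x1/x2 are invented stand-ins for two collinear predictors):

```r
# Two nearly collinear predictors, one of which actually drives y.
set.seed(7)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.2)          # nearly collinear with x1
y  <- x1 + rnorm(n)

# Option 1: refit on principal components, which are orthogonal
# by construction (at the cost of less interpretable coefficients).
pcs <- prcomp(cbind(x1, x2), scale. = TRUE)$x
cor(pcs[, 1], pcs[, 2])                 # essentially zero

# Option 2: residualize x2 on x1, so x1 keeps its original
# interpretation and x2_resid carries only x2's unique variance.
x2_resid <- resid(lm(x2 ~ x1))
coef(lm(y ~ x1 + x2_resid))             # x1's coefficient = its marginal effect
```

Because x2_resid is orthogonal to x1, the coefficient on x1 in the residualized model equals the one from lm(y ~ x1) alone.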

One thing I’m currently dealing with in my model is that year as a fixed
effect is correlated with a district’s spending, such that if I remove
year, district spending has a negative effect on crime, but including
year as a fixed effect alters the spending coefficient to be positive
(just north of zero). Though here, specifically, I’m not sure if this is
technically collinearity, or if year as a fixed factor is merely
controlling for change in crime over time, such that a model without
year as a fixed factor would be looking at the effect of district
spending on crime (similar to a model where years are averaged
together). Does that make sense? Is that interpretation accurate?

Thanks much!

James


On Sep 28, 2019, at 8:09 AM, Phillip Alday <phillip.alday using mpi.nl> wrote:

I think the answer to your proximal question about per_race is that
you would need five *different* numerical variables




