[R-sig-ME] AIC and other IT indexes criteria for backward, forward and stepwise regression

Daniel Lüdecke d.luedecke at uke.de
Thu Dec 19 08:02:08 CET 2019


I would second what Ben Bolker said, that model selection based only on some
fit indices should be done with care; most often, theoretical
contemplation about your model / design is a more useful guide in selecting
predictors and models.

That said, you also might want to look at the "performance" package
(https://easystats.github.io/performance). There is a function,
"compare_performance()", which "ranks" your models when you use the argument
"rank = TRUE":
https://easystats.github.io/performance/reference/model_performance.html
The function looks at different fit indices (including AIC, but also R2 or
Bayes factors) that are shared by all models, and normalizes and averages
those values to create a "ranking". As you may already guess from my
remarks, this is a rather exploratory or experimental method, which is
probably not better than just looking at the AIC, but maybe it's worth a
try.
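
To make this concrete, here is a minimal sketch (not from the original post),
using two hypothetical lme4 models fitted to the sleepstudy data; the names
m1 and m2 are just placeholders for your own candidate models:

library(lme4)
library(performance)

# two hypothetical candidate models (placeholders)
m1 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
m2 <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# rank = TRUE normalizes the indices shared by all models and combines
# them into a single "performance score" used for the ranking
compare_performance(m1, m2, rank = TRUE)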

The "see" package is our visualization tool, and I personally like plots
when exploring data or models. So, you can also create a "spiderweb" plot of
model comparisons:
https://easystats.github.io/see/articles/performance.html#compare-model-performances
(I think this function is currently only available in the GitHub version of
"see").

Maybe that gives you some inspiration...

Best
Daniel

-----Original Message-----
From: R-sig-mixed-models <r-sig-mixed-models-bounces using r-project.org> On
behalf of Mario Garrido
Sent: Wednesday, 18 December 2019 12:07
To: r-sig-mixed-models using r-project.org
Subject: [R-sig-ME] AIC and other IT indexes criteria for backward,
forward and stepwise regression

Dear users,
I'm currently exploring the use of AIC and other I-T index criteria for
backward, forward and stepwise regression.
Usually, when applying I-T indices for multimodel inference, we choose a set
of 'good models' according to different criteria, but mainly all models
with delta AIC < 2, and then we either average the estimates across that set
of models or draw conclusions from the set as a whole, without averaging
(see the sketch below).
However, if I'm not wrong, the goal of backward selection etc. is to arrive
at one 'best' final model. I understand the use of AIC in this framework,
but is there any criterion for selecting the best model in this case? Do I
simply have to choose the model with the lowest AIC, no matter whether there
is another model whose delta is less than 2? Does it depend on personal
criteria? For example, if my 'maximal' or saturated model has the lowest AIC
and the model dropping one variable has a delta of 0.5, which model should I
choose?
I have looked on the web and found no answer to this, so any literature
recommendation or advice will be welcome.
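
A minimal sketch of that delta-AIC / averaging workflow, using the MuMIn
package; the lm() model and the mtcars data here are only placeholders for
illustration:

library(MuMIn)

# dredge() requires na.action = na.fail on the global model
m_full <- lm(mpg ~ wt + hp + qsec + drat, data = mtcars, na.action = na.fail)

# fit and rank all submodels by AICc
ms <- dredge(m_full)

# the 'good' set: all models within 2 AICc units of the best one
subset(ms, delta < 2)

# average the estimates across that set (assuming it holds more than one model)
summary(model.avg(ms, subset = delta < 2))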
Thanks

-- 
Mario Garrido Escudero, PhD
Dr. Hadas Hawlena Lab
Mitrani Department of Desert Ecology
Jacob Blaustein Institutes for Desert Research
Ben-Gurion University of the Negev
Midreshet Ben-Gurion 84990 ISRAEL

gaiarrido using gmail.com; gaadio using post.bgu.ac.il
phone: (+972) 08-659-6854

_______________________________________________
R-sig-mixed-models using r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-sig-mixed-models
