---
title: "Using tidybayes with the posterior package"
author: "Matthew Kay"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_vignette:
    toc: true
params:
  EVAL: !r identical(Sys.getenv("NOT_CRAN"), "true")
vignette: >
  %\VignetteIndexEntry{Using tidybayes with the posterior package}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r chunk_options, include=FALSE}
if (requireNamespace("pkgdown", quietly = TRUE) && pkgdown::in_pkgdown()) {
  tiny_width = small_width = med_width = 6.75
  tiny_height = small_height = med_height = 4.5
  large_width = 8
  large_height = 5.25
} else {
  tiny_width = 5.5
  tiny_height = 3 + 2/3
  small_width = med_width = 6.75
  small_height = med_height = 4.5
  large_width = 8
  large_height = 5.25
}
knitr::opts_chunk$set(
  fig.width = small_width,
  fig.height = small_height,
  eval = if (isTRUE(exists("params"))) params$EVAL else FALSE
)
if (capabilities("cairo") && Sys.info()[['sysname']] != "Darwin") {
  knitr::opts_chunk$set(
    dev.args = list(png = list(type = "cairo"))
  )
}
dir.create("models", showWarnings = FALSE)
```
## Introduction
This vignette describes how to use the `tidybayes` and `ggdist` packages along with the `posterior` package
(and particularly the `posterior::rvar()` datatype) to extract and visualize [tidy](https://dx.doi.org/10.18637/jss.v059.i10)
data frames of `rvar`s from posterior distributions of model variables, fits, and predictions.
This workflow is a "long-data-frame-of-`rvar`s" workflow, which is a bit different from the "long-data-frame-of-draws" workflow
described in `vignette("tidybayes")` or `vignette("tidy-brms")`. The `rvar`-based approach may be particularly useful for
larger models, as it is more memory efficient.
## Setup
The following libraries are required to run this vignette:
```{r setup, message = FALSE, warning = FALSE}
library(dplyr)
library(purrr)
library(modelr)
library(ggdist)
library(tidybayes)
library(ggplot2)
library(cowplot)
library(rstan)
library(brms)
library(ggrepel)
library(RColorBrewer)
library(posterior)
library(distributional)
theme_set(theme_tidybayes() + panel_border())
```
These options help Stan run faster:
```{r, eval=FALSE}
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
```
```{r hidden_options, include=FALSE}
# While the previous code chunk is the actual recommended approach,
# CRAN vignette building policy limits us to 2 cores, so we use at most
# 2 to build this vignette (but show the previous chunk to
# the reader as a best practice example)
rstan_options(auto_write = TRUE)
options(mc.cores = 1) #min(2, parallel::detectCores()))
options(width = 100)
```
## Example dataset
To demonstrate `tidybayes`, we will use a simple dataset with 10 observations from 5 conditions each:
```{r}
set.seed(5)
n = 10
n_condition = 5
ABC =
  tibble(
    condition = rep(c("A","B","C","D","E"), n),
    response = rnorm(n * 5, c(0,1,2,1,-1), 0.5)
  )
```
A snapshot of the data looks like this:
```{r}
head(ABC, 10)
```
This is a typical tidy format data frame: one observation per row. Graphically:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
  ggplot(aes(y = condition, x = response)) +
  geom_point()
```
## Model
Let's fit a hierarchical model with shrinkage towards a global mean:
```{r m_brm, cache = TRUE}
m = brm(
  response ~ (1|condition),
  data = ABC,
  prior = c(
    prior(normal(0, 1), class = Intercept),
    prior(student_t(3, 0, 1), class = sd),
    prior(student_t(3, 0, 1), class = sigma)
  ),
  control = list(adapt_delta = .99),
  file = "models/tidy-brms_m.rds"  # cache model (can be removed)
)
```
The format returned by `tidybayes::tidy_draws()` is compatible with the `posterior::draws_df()` format, so
`posterior::summarise_draws()` supports it. Thus, we can use `posterior::summarise_draws()` to get a quick
look at draws from the model:
```{r}
summarise_draws(tidy_draws(m))
```
## Extracting draws from a fit in tidy-format using `spread_rvars`
Now that we have our results, the fun begins: getting the variables out in a tidy format! First, we'll use the `get_variables()` function to get a list of raw model variable names so that we know what variables we can extract from the model:
```{r}
get_variables(m)
```
Here, `b_Intercept` is the global mean, and the `r_condition[]` variables are offsets from that mean for each condition. Given these variables:
- `r_condition[A,Intercept]`
- `r_condition[B,Intercept]`
- `r_condition[C,Intercept]`
- `r_condition[D,Intercept]`
- `r_condition[E,Intercept]`
We might want a data frame where each row is a random variable (`rvar`) representing all draws from `r_condition[A,Intercept]`, `r_condition[B,Intercept]`, `...[C,...]`, `...[D,...]`, or `...[E,...]`. That would allow us to easily compute quantities grouped by condition, or generate plots by condition using ggplot, or even merge draws with the original data to plot data and posteriors simultaneously.
We can do this using the `spread_rvars()` function. It includes a simple specification format that we can use to extract variables and their indices into tidy-format data frames. This function is analogous to `spread_draws()` from the tidy-data-frames-of-draws workflow described in `vignette("tidy-brms")`.
### Gathering variable indices into a separate column in a tidy format data frame
Given a variable in the model like this:
`r_condition[D,Intercept]`
We can provide `spread_rvars()` with a column specification like this:
`r_condition[condition,term]`
Where `condition` corresponds to `D` and `term` corresponds to `Intercept`. There is nothing too magical about what `spread_rvars()` does with this specification: under the hood, it splits the variable indices by commas and spaces (you can split by other characters by changing the `sep` argument). It lets you assign columns to the resulting indices in order. So `r_condition[D,Intercept]` has indices `D` and `Intercept`, and `spread_rvars()` lets us extract these indices as columns in the resulting tidy data frame of draws from `r_condition`:
```{r}
m %>%
  spread_rvars(r_condition[condition,term])
```
The `r_condition` column above is a `posterior::rvar()` datatype, which is an array-like datatype representing draws from a random variable:
```{r}
m %>%
  spread_rvars(r_condition[condition,term]) %>%
  pull(r_condition)
```
In this case, for each of the 5 elements of this `rvar` vector, we have 1000 draws from each of 4 chains in the model. For more on the `rvar` datatype, see `vignette("rvar", package = "posterior")`.
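To build intuition for the `rvar` datatype itself, here is a minimal sketch constructing one from scratch (the values here are hypothetical, not from the model):
```{r}
# an rvar wrapping 4 chains x 1000 draws of a single quantity
x = rvar(rnorm(4000, mean = 1), nchains = 4)
x
dim(draws_of(x))  # the underlying draws array: 4000 draws x 1 element
```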
We can choose whatever names we want for the index columns; e.g.:
```{r}
m %>%
  spread_rvars(r_condition[c,t])
```
But the more descriptive and less cryptic names from the previous example are probably preferable.
If we leave off the name for an index, it is left "nested" in the column. For example, we
could nest the `term` since it only has one value `"Intercept"` anyway:
```{r}
m %>%
  spread_rvars(r_condition[condition,])
```
Or we could nest the `condition`, though this is probably not that useful practically:
```{r}
m %>%
  spread_rvars(r_condition[,term])
```
## Point summaries and intervals
### With simple model variables
`tidybayes` provides a family of functions for generating point summaries and intervals from draws in a tidy format. These functions follow the naming scheme `[median|mean|mode]_[qi|hdi]`, for example, `median_qi()`, `mean_qi()`, `mode_hdi()`, and so on. The first name (before the `_`) indicates the type of point summary, and the second name indicates the type of interval. `qi` yields a quantile interval (a.k.a. equi-tailed interval, central interval, or percentile interval) and `hdi` yields a highest (posterior) density interval. Custom point summary or interval functions can also be applied using the `point_interval()` function.
For example, we might extract the draws corresponding to posterior distributions of the overall mean and standard deviation of observations:
```{r}
m %>%
  spread_rvars(b_Intercept, sigma)
```
Like with `r_condition[condition,term]`, this gives us a tidy data frame. If we want the median and 95% quantile interval of the variables, we can apply `median_qi()`:
```{r}
m %>%
  spread_rvars(b_Intercept, sigma) %>%
  median_qi(b_Intercept, sigma)
```
We can specify the columns we want to get medians and intervals from, as above, or if we omit the list of columns, `median_qi()` will use every column that is not a grouping column. Thus in the above example, `b_Intercept` and `sigma` are redundant arguments to `median_qi()` because they are also the only columns we gathered from the model. So we can simplify this to:
```{r}
m %>%
  spread_rvars(b_Intercept, sigma) %>%
  median_qi()
```
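The `point_interval()` function mentioned above can also be used directly to build custom point summary and interval combinations; a minimal sketch (here `mean` with `hdi`, equivalent to `mean_hdi()`):
```{r}
m %>%
  spread_rvars(b_Intercept, sigma) %>%
  point_interval(.point = mean, .interval = hdi)
```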
If you would rather have a long-format list, use `gather_rvars()` instead:
```{r}
m %>%
  gather_rvars(b_Intercept, sigma)
```
We could also use `median_qi()` here:
```{r}
m %>%
  gather_rvars(b_Intercept, sigma) %>%
  median_qi(.value)
```
### With indexed model variables
When we have a model variable with one or more indices, such as `r_condition`, we can apply `median_qi()` (or other functions in the `point_interval()` family) as we did before:
```{r}
m %>%
  spread_rvars(r_condition[condition,]) %>%
  median_qi(r_condition)
```
**Note for existing users of `spread_draws()`**: you may notice that `spread_rvars()` requires us to be a bit more
explicit in passing column names to `median_qi()`. This is because `spread_rvars()` does not return pre-grouped
data frames, unlike `spread_draws()` --- since every row would always be its own group in the output from `spread_rvars()`,
returning a pre-grouped data frame would be redundant.
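If you prefer grouped summaries, you can always group explicitly yourself; a small sketch equivalent to the call above:
```{r}
m %>%
  spread_rvars(r_condition[condition,]) %>%
  group_by(condition) %>%
  median_qi()
```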
You can also use `posterior::summarise_draws()` on an `rvar` column to generate summaries
with convergence diagnostics. That function returns a data frame, which can be passed
directly into the `dplyr::mutate()` function:
```{r}
m %>%
  spread_rvars(r_condition[condition,]) %>%
  mutate(summarise_draws(r_condition))
```
## Combining variables with different indices in a single tidy format data frame
`spread_rvars()` and `gather_rvars()` support extracting variables that have different indices into the same data frame. Indices with the same name are automatically matched up, and values are duplicated as necessary to produce one row for every combination of levels of all indices. For example, we might want to calculate the mean within each condition (call this `condition_mean`). In this model, that mean is the intercept (`b_Intercept`) plus the effect for a given condition (`r_condition`).
We can gather draws from `b_Intercept` and `r_condition` together in a single data frame:
```{r}
m %>%
  spread_rvars(b_Intercept, r_condition[condition,])
```
Within each draw, `b_Intercept` is repeated as necessary to correspond to every index of `r_condition`. Thus, the `mutate` function from dplyr can be used to find their sum, `condition_mean` (which is the mean for each condition):
```{r}
m %>%
  spread_rvars(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition)
```
## Plotting point summaries and intervals
Plotting point summaries and intervals is straightforward using `ggdist::stat_pointinterval()`, which will produce visualizations with 66% and 95% intervals by default (this can be changed using the `.width` parameter; the default is `.width = c(.66, .95)`):
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
  spread_rvars(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  ggplot(aes(y = condition, xdist = condition_mean)) +
  stat_pointinterval()
```
`median_qi()` and its sister functions can also produce an arbitrary number of probability intervals by setting the `.width =` argument:
```{r}
m %>%
  spread_rvars(b_Intercept, r_condition[condition,]) %>%
  median_qi(condition_mean = b_Intercept + r_condition, .width = c(.95, .8, .5))
```
The results are in a tidy format: one row per group and uncertainty interval width (`.width`). This facilitates plotting, and is essentially what `ggdist::stat_pointinterval()` is doing for you under the hood above. For example, assigning `-.width` to the `linewidth` aesthetic will show all intervals, making thicker lines correspond to smaller intervals.
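For example, here is a sketch of building a similar plot manually from the `median_qi()` output, mirroring the `geom_pointinterval()` usage in `vignette("tidy-brms")` (thicker lines correspond to smaller intervals):
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
  spread_rvars(b_Intercept, r_condition[condition,]) %>%
  median_qi(condition_mean = b_Intercept + r_condition, .width = c(.95, .8, .5)) %>%
  ggplot(aes(y = condition, x = condition_mean, xmin = .lower, xmax = .upper)) +
  geom_pointinterval()
```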
## Intervals with densities
To see the density along with the intervals, we can use `ggdist::stat_eye()` ("eye plots", which combine intervals with violin plots), or `ggdist::stat_halfeye()` (interval + density plots):
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
  spread_rvars(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  ggplot(aes(y = condition, xdist = condition_mean)) +
  stat_halfeye()
```
Or say you want to annotate portions of the densities in color; the `fill` aesthetic can vary within a slab in all geoms and stats in the `ggdist::geom_slabinterval()` family, including `ggdist::stat_halfeye()`. For example, if you want to annotate a domain-specific region of practical equivalence (ROPE), you could do something like this:
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
  spread_rvars(b_Intercept, r_condition[condition,]) %>%
  mutate(condition_mean = b_Intercept + r_condition) %>%
  ggplot(aes(y = condition, xdist = condition_mean, fill = after_stat(abs(x) < .8))) +
  stat_halfeye() +
  geom_vline(xintercept = c(-.8, .8), linetype = "dashed") +
  scale_fill_manual(values = c("gray80", "skyblue"))
```
## Other visualizations of distributions: `stat_slabinterval`
There are a variety of additional stats for visualizing distributions in the `ggdist::stat_slabinterval()` family of stats and geoms;
see `vignette("slabinterval", package = "ggdist")` for an overview. All of the stats in this family (those starting with `stat_`) support the use of `rvar` columns in the `xdist` and `ydist` aesthetics.
## Posterior means
Rather than calculating conditional means manually as in the previous example, we could use `add_epred_rvars()`, which is analogous to `brms::posterior_epred()`, giving draws from the posterior distribution of the mean of the response (i.e. the distribution of the expected value of the posterior predictive). We can combine it with `modelr::data_grid()` to first generate a grid describing the fits we want, then populate that grid with `rvar`s representing draws from the posterior:
```{r}
ABC %>%
  data_grid(condition) %>%
  add_epred_rvars(m)
```
## Quantile dotplots
Intervals are nice if the alpha level happens to line up with whatever decision you are trying to make, but getting a shape of the posterior is better (hence eye plots, above). On the other hand, making inferences from density plots is imprecise (estimating the area of one shape as a proportion of another is a hard perceptual task). Reasoning about probability in frequency formats is easier, motivating [quantile dotplots](https://github.com/mjskay/when-ish-is-my-bus/blob/master/quantile-dotplots.md) ([Kay et al. 2016](https://doi.org/10.1145/2858036.2858558), [Fernandes et al. 2018](https://doi.org/10.1145/3173574.3173718)), which also allow precise estimation of arbitrary intervals (down to the dot resolution of the plot, 100 in the example below).
Within the slabinterval family of geoms in `ggdist` is the `dots` and `dotsinterval` family, which automatically determines appropriate bin sizes for dotplots and can calculate quantiles from samples to construct quantile dotplots. `ggdist::stat_dots()` and `ggdist::stat_dotsinterval()` are the variants designed for use on `rvar`s:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
  data_grid(condition) %>%
  add_epred_rvars(m) %>%
  ggplot(aes(xdist = .epred, y = condition)) +
  stat_dotsinterval(quantiles = 100)
```
The idea is to get away from thinking about the posterior as indicating one canonical point or interval, but instead to represent it as (say) 100 approximately equally likely points.
## Posterior predictions
Where `add_epred_rvars()` is analogous to `brms::posterior_epred()`, `add_predicted_rvars()` is analogous to `brms::posterior_predict()`, giving draws from the posterior predictive distribution.
We could use `ggdist::stat_interval()` to plot predictive bands alongside the data:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
  data_grid(condition) %>%
  add_predicted_rvars(m) %>%
  ggplot(aes(y = condition)) +
  stat_interval(aes(xdist = .prediction), .width = c(.50, .80, .95, .99)) +
  geom_point(aes(x = response), data = ABC) +
  scale_color_brewer()
```
The `add_XXX_rvars()` functions can be chained together to add posterior predictions
(`add_predicted_rvars()`) and the distribution of the mean of the posterior predictive (`add_epred_rvars()`) to the
same data frame. This makes it easy to plot both together alongside the data:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
  data_grid(condition) %>%
  add_epred_rvars(m) %>%
  add_predicted_rvars(m) %>%
  ggplot(aes(y = condition)) +
  stat_interval(aes(xdist = .prediction)) +
  stat_pointinterval(aes(xdist = .epred), position = position_nudge(y = -0.3)) +
  geom_point(aes(x = response), data = ABC) +
  scale_color_brewer()
```
## Posterior predictions, Kruschke-style
The above approach to posterior predictions integrates over the parameter uncertainty to give a single posterior predictive distribution. Another approach, often used by John Kruschke in his book [Doing Bayesian Data Analysis](https://sites.google.com/site/doingbayesiandataanalysis/), is to attempt to show both the predictive uncertainty and the parameter uncertainty simultaneously by showing several possible predictive distributions implied by the posterior.
We can do this pretty easily by asking for the distributional parameters for a given prediction implied by the posterior. We'll do it explicitly here by setting `dpar = c("mu", "sigma")` in `add_epred_rvars()`. Rather than specifying the parameters explicitly, you can also just set `dpar = TRUE` to get draws from all distributional parameters in a model, and this will work for any response distribution supported by `brms::brm()`:
```{r}
ABC %>%
  data_grid(condition) %>%
  add_epred_rvars(m, dpar = c("mu", "sigma"))
```
At this point, we will need to use the "long-data-frame-of-draws" format more typical of the standard
tidybayes workflow. What we want to do is select a small number of draws from the joint distribution of
`mu` and `sigma` to plot predictive densities from. We will use `unnest_rvars()` to "unnest" all the `rvar`s
in the above output into a long-format data frame, sample 30 of the draws using `sample_draws()`, and then use
`ggdist::stat_slab()` to visualize each predictive distribution implied by the values of `mu` and `sigma`:
```{r fig.width = tiny_width, fig.height = tiny_height}
ABC %>%
  data_grid(condition) %>%
  add_epred_rvars(m, dpar = c("mu", "sigma")) %>%
  unnest_rvars() %>%
  sample_draws(30) %>%
  ggplot(aes(y = condition)) +
  stat_slab(
    aes(xdist = dist_normal(mu, sigma)),
    color = "gray65", alpha = 1/10, fill = NA
  ) +
  geom_point(aes(x = response), data = ABC, shape = 21, fill = "#9ECAE1", size = 2)
```
The use of `unnest_rvars()` after `add_epred_rvars()` is essentially equivalent to
just using the `_draws()` instead of `_rvars()` form of the prediction functions
(e.g. `add_epred_draws()`), which may be faster and/or more convenient depending
on what other data manipulation you need to do.
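For instance, a sketch of the `_draws()` equivalent of the pipeline above (note that the `ndraws` argument replaces the separate `sample_draws()` step):
```{r}
ABC %>%
  data_grid(condition) %>%
  add_epred_draws(m, dpar = c("mu", "sigma"), ndraws = 30)
```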
## Fit/prediction curves
To demonstrate drawing fit curves with uncertainty, let's fit a slightly naive model to part of the `mtcars` dataset:
```{r m_mpg_brm, results = "hide", message = FALSE, warning = FALSE, cache = TRUE}
m_mpg = brm(
  mpg ~ hp * cyl,
  data = mtcars,
  file = "models/tidy-brms_m_mpg.rds"  # cache model (can be removed)
)
```
We can draw fit curves (i.e., curves showing the uncertainty in the conditional
expectation, aka the expectation of the posterior predictive) with probability bands:
```{r fig.width = tiny_width, fig.height = tiny_height}
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 51)) %>%
  add_epred_rvars(m_mpg) %>%
  ggplot(aes(x = hp, color = ordered(cyl))) +
  stat_lineribbon(aes(ydist = .epred)) +
  geom_point(aes(y = mpg), data = mtcars) +
  scale_fill_brewer(palette = "Greys") +
  scale_color_brewer(palette = "Set2")
```
Or we could plot posterior predictions (instead of means). For this example
we'll also use `alpha` to make it easier to see overlapping bands:
```{r fig.width = tiny_width, fig.height = tiny_height}
mtcars %>%
  group_by(cyl) %>%
  data_grid(hp = seq_range(hp, n = 101)) %>%
  add_predicted_rvars(m_mpg) %>%
  ggplot(aes(x = hp, color = ordered(cyl), fill = ordered(cyl))) +
  stat_lineribbon(aes(ydist = .prediction), .width = c(.95, .80, .50), alpha = 1/4) +
  geom_point(aes(y = mpg), data = mtcars) +
  scale_fill_brewer(palette = "Set2") +
  scale_color_brewer(palette = "Dark2")
```
See `vignette("tidy-brms")` for additional examples of fit lines, including
animated [hypothetical outcome plots (HOPs)](https://mucollective.northwestern.edu/project/hops-trends).
### Extracting distributional regression parameters
`brms::brm()` also allows us to set up submodels for parameters of the response distribution *other than* the location (e.g., mean). For example, we can allow a variance parameter, such as the standard deviation, to also be some function of the predictors.
This approach can be helpful in cases of non-constant variance (also called _heteroskedasticity_ by folks who like obfuscation via Greek). E.g., imagine two groups, each with different mean response _and variance_:
```{r fig.width = tiny_width, fig.height = tiny_height}
set.seed(1234)
AB = tibble(
  group = rep(c("a", "b"), each = 20),
  response = rnorm(40, mean = rep(c(1, 5), each = 20), sd = rep(c(1, 3), each = 20))
)

AB %>%
  ggplot(aes(x = response, y = group)) +
  geom_point()
```
Here is a model that lets the mean _and standard deviation_ of `response` be dependent on `group`:
```{r m_ab_brm, cache = TRUE}
m_ab = brm(
  bf(
    response ~ group,
    sigma ~ group
  ),
  data = AB,
  file = "models/tidy-brms_m_ab.rds"  # cache model (can be removed)
)
```
We can plot the posterior distribution of the mean `response` alongside posterior predictive intervals and the data:
```{r fig.width = tiny_width, fig.height = tiny_height}
AB %>%
  data_grid(group) %>%
  add_epred_rvars(m_ab) %>%
  add_predicted_rvars(m_ab) %>%
  ggplot(aes(y = group)) +
  stat_halfeye(aes(xdist = .epred), scale = 0.6, position = position_nudge(y = 0.175)) +
  stat_interval(aes(xdist = .prediction)) +
  geom_point(aes(x = response), data = AB) +
  scale_color_brewer()
```
This shows posteriors of the mean of each group (black intervals and the density plots) and posterior predictive intervals (blue).
The predictive intervals in group `b` are larger than in group `a` because the model fits a different standard deviation for each group. We can see how the corresponding distributional parameter, `sigma`, changes by extracting it using the `dpar` argument to `add_epred_rvars()`:
```{r fig.width = tiny_width, fig.height = tiny_height}
AB %>%
  data_grid(group) %>%
  add_epred_rvars(m_ab, dpar = TRUE) %>%
  ggplot(aes(xdist = sigma, y = group)) +
  stat_halfeye() +
  geom_vline(xintercept = 0, linetype = "dashed")
```
By setting `dpar = TRUE`, all distributional parameters are added as additional columns in the result of `add_epred_rvars()`; if you only want a specific parameter, you can specify it (or a vector of just the parameters you want). In the above model, `dpar = TRUE` is equivalent to `dpar = c("mu", "sigma")`.
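For example, a sketch requesting just `sigma`:
```{r}
AB %>%
  data_grid(group) %>%
  add_epred_rvars(m_ab, dpar = "sigma")
```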
## Comparing levels of a factor
If we wish to compare the means from each condition, `compare_levels()` facilitates comparisons of the value of some variable across levels of a factor. By default it computes all pairwise differences.
Let's demonstrate `compare_levels()` with `ggdist::stat_halfeye()`. We'll also
re-order by the mean of the difference:
```{r fig.width = tiny_width, fig.height = tiny_height}
m %>%
  spread_rvars(r_condition[condition,]) %>%
  compare_levels(r_condition, by = condition) %>%
  ungroup() %>%
  mutate(condition = reorder(condition, r_condition)) %>%
  ggplot(aes(y = condition, xdist = r_condition)) +
  stat_halfeye() +
  geom_vline(xintercept = 0, linetype = "dashed")
```
## Ordinal models
The `brms::posterior_epred()` function for ordinal and multinomial regression models in brms returns multidimensional variables for each draw, where an additional dimension of the result contains outcome categories. The philosophy of `tidybayes` is to tidy whatever format is output by a model, so in keeping with that philosophy, when applied to ordinal and multinomial `brms` models, `add_epred_rvars()` outputs a nested `.epred` variable that has a column for each level of the response variable.
### Ordinal model with continuous predictor
We'll fit a model using the `mtcars` dataset that predicts the number of cylinders in a car given the car's mileage (in miles per gallon). While this is a little backwards causality-wise (presumably the number of cylinders causes the mileage, if anything), that does not mean this is not a fine prediction task (I could probably tell someone who knows something about cars the MPG of a car and they could do reasonably well at guessing the number of cylinders in the engine).
Before we fit the model, let's clean the dataset by making the `cyl` column an ordered factor (by default it is just a number):
```{r}
mtcars_clean = mtcars %>%
  mutate(cyl = ordered(cyl))
head(mtcars_clean)
```
Then we'll fit an ordinal regression model:
```{r m_cyl_brm, cache = TRUE}
m_cyl = brm(
  cyl ~ mpg,
  data = mtcars_clean,
  family = cumulative,
  seed = 58393,
  file = "models/tidy-brms_m_cyl.rds"  # cache model (can be removed)
)
```
`add_epred_rvars()` will now return a matrix instead of a vector for the `.epred` column, where the nested columns of `.epred` are the probability that the response is in that category. For example, here is the fit for two values of `mpg` in the dataset:
```{r}
tibble(mpg = c(21,22)) %>%
  add_epred_rvars(m_cyl)
```
This format can be useful in some cases, but for our immediate purposes it would be better to have the
prediction for each category be on a separate row. We can use the `columns_to` parameter of `add_epred_rvars()`
to move the nested column headers into values of a column (here `"cyl"`). This will also add a `.row` column
indexing which row of the input data frame each prediction came from:
```{r}
tibble(mpg = c(21,22)) %>%
  add_epred_rvars(m_cyl, columns_to = "cyl")
```
Note: for the `cyl` variable to retain its original factor level names you
must be using `brms` greater than or equal to version 2.15.9.
We could plot fit lines for fitted probabilities against the dataset:
```{r fig.width = med_width, fig.height = med_height}
data_plot = mtcars_clean %>%
  ggplot(aes(x = mpg, y = cyl, color = cyl)) +
  geom_point() +
  scale_color_brewer(palette = "Dark2", name = "cyl")

fit_plot = mtcars_clean %>%
  data_grid(mpg = seq_range(mpg, n = 101)) %>%
  add_epred_rvars(m_cyl, value = "P(cyl | mpg)", columns_to = "cyl") %>%
  ggplot(aes(x = mpg, color = cyl)) +
  stat_lineribbon(aes(ydist = `P(cyl | mpg)`, fill = cyl), alpha = 1/5) +
  scale_color_brewer(palette = "Dark2") +
  scale_fill_brewer(palette = "Dark2") +
  labs(y = "P(cyl | mpg)")

plot_grid(ncol = 1, align = "v",
  data_plot,
  fit_plot
)
```
While talking about the mean for an ordinal distribution often does not make sense, in this particular case one could argue that the expected number of cylinders for a car given its miles per gallon is a meaningful quantity. We could plot the posterior distribution for the average number of cylinders for a car given a particular miles per gallon as follows:
$$
\textrm{E}[\textrm{cyl}|\textrm{mpg}=m] = \sum_{c \in \{4,6,8\}} c\cdot \textrm{P}(\textrm{cyl}=c|\textrm{mpg}=m)
$$
Given the matrix form of the output of `add_epred_rvars()` (i.e. when we do not use `columns_to`), this
quantity is just the dot product of `P(cyl|mpg)` with `c(4,6,8)`. Since the `rvar` format supports
math operations, including matrix multiplication (as the `%**%` operator), we can transform the prediction
column into an expectation easily. Here is an example on two rows:
```{r}
tibble(mpg = c(21,22)) %>%
  # note we are *not* using `columns_to` anymore
  add_epred_rvars(m_cyl, value = "P(cyl | mpg)") %>%
  mutate(cyl = `P(cyl | mpg)` %**% c(4,6,8))
```
Altogether, followed by `unnest_rvars()` so we can create spaghetti plots:
```{r fig.width = med_width, fig.height = med_height}
label_data_function = . %>%
  ungroup() %>%
  filter(mpg == quantile(mpg, .47)) %>%
  summarise_if(is.numeric, mean)

data_plot_with_mean = mtcars_clean %>%
  data_grid(mpg = seq_range(mpg, n = 101)) %>%
  # NOTE: use of ndraws = 100 here subsets draws for the creation of spaghetti plots;
  # DO NOT do this if you are making other chart types like intervals or densities
  add_epred_rvars(m_cyl, value = "P(cyl | mpg)", ndraws = 100) %>%
  # calculate expected cylinder value
  mutate(cyl = drop(`P(cyl | mpg)` %**% c(4,6,8))) %>%
  # transform into long-data-frame-of-draws format for making spaghetti plots
  unnest_rvars() %>%
  ggplot(aes(x = mpg, y = cyl)) +
  geom_line(aes(group = .draw), alpha = 5/100) +
  geom_point(aes(y = as.numeric(as.character(cyl)), fill = cyl), data = mtcars_clean, shape = 21, size = 2) +
  geom_text(aes(x = mpg + 4), label = "E[cyl | mpg]", data = label_data_function, hjust = 0) +
  geom_segment(aes(yend = cyl, xend = mpg + 3.9), data = label_data_function) +
  scale_fill_brewer(palette = "Set2", name = "cyl")

plot_grid(ncol = 1, align = "v",
  data_plot_with_mean,
  fit_plot
)
```
Now let's add on a plot of the latent linear predictor against the thresholds used
to determine the probabilities for each category. We can use `posterior::as_draws_rvars()`
to get parameters from the model as `rvar` objects:
```{r}
draws_cyl = m_cyl %>%
  tidy_draws() %>%
  as_draws_rvars()
draws_cyl
```
We're really interested in the `b_Intercept` parameter, which represents thresholds
on the latent linear predictor:
```{r}
beta = draws_cyl$b_Intercept
beta
```
We're also going to want the positions where the linear predictor intersects
those thresholds, which we can calculate using the thresholds and the slope (`b_mpg`):
```{r}
x_intercept = with(draws_cyl, b_Intercept / b_mpg)
x_intercept
```
We can use `add_linpred_rvars()` analogously to `add_epred_rvars()` to
get the latent linear predictor. We'll combine this with the thresholds in `beta`,
subtracting `beta[1]` from the linear predictor and from the other threshold, `beta[2]`,
as these values are all highly correlated (thus are hard to visualize with
uncertainty in a meaningful way without looking at their differences). We'll
also demonstrate the use of `.width = ppoints(XXX)` with `stat_lineribbon()` where `XXX` is a number
like `30` or `50`, which combined with a low `alpha` value produces gradient-like
lineribbons:
```{r, fig.width = med_width, fig.height = med_width}
beta_2_color = brewer.pal(n = 3, name = "Dark2")[[3]]
beta_1_color = brewer.pal(n = 3, name = "Dark2")[[1]]

# vertical lines we will use to show the relationship between the linear
# predictor and P(cyl | mpg)
x_intercept_lines = geom_vline(
  # this works because `rvar`s define median() to take the median of the
  # distribution of each element, see vignette("rvar", package = "posterior")
  xintercept = median(x_intercept),
  color = "gray50",
  alpha = 0.2,
  linewidth = 1
)

thresholds_plot = mtcars_clean %>%
  data_grid(mpg = seq_range(mpg, n = 101)) %>%
  add_linpred_rvars(m_cyl) %>%
  ggplot(aes(x = mpg)) +
  stat_lineribbon(
    aes(ydist = beta[2] - beta[1]),
    color = beta_2_color, fill = beta_2_color,
    alpha = 1/30, .width = ppoints(30),
    linewidth = 1, linetype = "21"
  ) +
  geom_line(aes(y = 0), linewidth = 1, color = beta_1_color, linetype = "21") +
  stat_lineribbon(
    aes(ydist = .linpred - beta[1]),
    fill = "black", color = "black",
    alpha = 1/30, .width = ppoints(30)
  ) +
  labs(y = expression("linear predictor" - beta[1])) +
  annotate("label",
    label = "beta[1]", parse = TRUE,
    x = max(mtcars_clean$mpg), y = 0, hjust = 0.8,
    color = beta_1_color
  ) +
  annotate("label",
    label = "beta[2] - beta[1]", parse = TRUE,
    x = max(mtcars_clean$mpg), y = median(beta[2] - beta[1]), hjust = 0.9,
    color = beta_2_color
  ) +
  coord_cartesian(ylim = c(-10, 10))

plot_grid(ncol = 1, align = "v", axis = "lr",
  data_plot_with_mean + x_intercept_lines,
  fit_plot + x_intercept_lines,
  thresholds_plot + x_intercept_lines
)
```
Note how when the linear predictor intersects the line for `beta[1]` categories
1 and 2 are equally likely, and when it intersects the line for `beta[2]`
categories 2 and 3 are equally likely.
For more examples with this model using the long-data-frame-of-draws workflow
(which can be easier for certain tasks), see the corresponding section of
`vignette("tidy-brms")`.