Field Guide to the R Mixed Model Wilderness

3  Mixed Model Background

3.1 Mixed model terms

Mixed-effects models are called “mixed” because they simultaneously model fixed and random effects. They are also called “multilevel” or “hierarchical” models in reference to groups or clusters with a hierarchical structure, where we expect observations within the same group to be correlated.

This tutorial concerns (general) linear mixed models (sometimes abbreviated as “LMM”) where the dependent variable, when conditioned on the independent variable(s), follows a normal distribution. These are a special case of generalized linear mixed models (sometimes abbreviated “GLMM”), which allow the dependent variable to follow non-normal distributions. We will only be discussing the former in this tutorial.

Fixed effects represent population-level average effects that should persist across experiments. We often treat our treatments or experimental interventions as fixed effects. Fixed effects are parameters drawn from the entire population and/or associated with particular levels of a treatment (e.g. levels of nitrogen fertilizer). They are analogous to the parameters estimated in traditional regression techniques such as ordinary least squares.

Random effects are discrete units sampled from some population (e.g. plots, participants), and thus they are categorical. They are associated with experimental units drawn at random from a population with a probability distribution. Random effects are useful when we have (1) many levels of an experimental factor (e.g., many species or blocks), (2) a small number of observations for a given level, or (3) uneven sampling across levels. The general idea is that random effects represent a sample from a larger population that you want to make inference about.

Please read this section and refer back to it when you forget what these terms mean.

Table 3.1: Term definitions

Term | Definition
Random effect | An independent variable whose levels form a random sample from a population whose variance will be estimated
Fixed effect | An independent variable with specific, predefined levels to estimate
Experimental unit | The smallest unit used for analysis. This could be an animal, a field plot, a person, or a meat or muscle sample. The unit may be assessed multiple times or at multiple points in time. When the analysis is complete, predictions occur at this level.

3.2 Models

Recall simple linear regression with intercept (\(\beta_0\)) and slope (\(\beta_1\)):

\[ Y_{i} = \beta_0 + \beta_1 x_i + \epsilon_i \]

\(Y_i\) is the dependent variable, and \(x_i\) is the independent variable. The intercept \(\beta_0\) is the expected value of \(Y_i\) when \(x_i = 0\), and the slope \(\beta_1\) is the change in \(Y\) for a unit change in \(X\). For example, \(\beta_1\) could represent the effect of increasing quantities of nitrogen fertilizer on crop growth.
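As a minimal sketch of this model in R (the nitrogen rates and yields below are simulated, not taken from the guide), we can generate a hypothetical nitrogen-response trial and fit the regression with lm():

# Simulate a hypothetical nitrogen-response trial and fit
# Y_i = beta_0 + beta_1 * x_i + e_i by ordinary least squares.
set.seed(101)
nitrogen <- rep(c(0, 50, 100, 150), each = 5)          # kg N/ha (hypothetical)
yield <- 4 + 0.02 * nitrogen + rnorm(20, sd = 0.5)     # simulated crop response

fit_ols <- lm(yield ~ nitrogen)
coef(fit_ols)   # estimates of the intercept (beta_0) and slope (beta_1)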

The errors,1 \(\epsilon_i\), or residuals, are independently and identically distributed (sometimes called “iid”), following a normal distribution for general linear mixed models.

1 For the rest of this tutorial we will use the term “residual” rather than “error term” to reflect current practice. The residual is calculated as \(Y_i - \hat{Y_i}\) (in matrix form, \(Y - X\hat{\beta}\)), the gap between the observed value \(Y_i\) and the predicted value \(\hat{Y_i}\).

\[\boldsymbol{\epsilon} \sim N(0, \sigma^2 \mathbf{I}_n)\]

These are important model assumptions that we will revisit frequently in this tutorial (and later on we will explore relaxing them).

There is also an expected conditional distribution for \(Y_i\), but we will not be discussing this again.

\[ Y_i \mid x_i \sim N(\mathbf{X \beta}, \sigma^2/r_i) \]

In least squares estimation, the slope and intercept are chosen so that the residual sum of squares is minimized. If we consider this model in a mixed model framework, \(\beta_0\) and \(\beta_1\) would be fixed effects (also known as the population-averaged values).
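To make the least-squares idea concrete, the sketch below (again with simulated, hypothetical data) computes the estimates directly from the closed-form solution \((X^\top X)^{-1} X^\top y\) and confirms that lm() returns the same values:

# Closed-form least squares: beta_hat = (X'X)^{-1} X'y
set.seed(102)
x <- runif(30, 0, 10)
y <- 2 + 0.8 * x + rnorm(30)

X <- cbind(1, x)                            # design matrix with an intercept column
beta_hat <- solve(t(X) %*% X, t(X) %*% y)   # minimizes the residual sum of squares
drop(beta_hat)
coef(lm(y ~ x))                             # same intercept and slope from lm()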

Extending this example to a mixed model, we can add another term, \(r_{j}\), whose levels are a random sample from a larger population:

\[ Y_{ij} = \beta_0 + \beta_1 x_i + r_j + \epsilon_{ij} \]

In an agronomic field trial, \(r_{j}\) could be a random effect for block (i.e. the spatial positioning of a group of treatments). The random effect can be thought of as each block’s deviation from the fixed intercept parameter, \(\beta_0\). Like the residual term, \(r_j\) is independently and identically distributed (sometimes referred to as “iid”):

\[r_j \sim N(0, \sigma_b^2)\]

While random effects do not have to be normally distributed, the normal distribution is the most common choice and the easiest to estimate. These are “random intercept” models: all blocks share a common \(\beta_0\) and \(\beta_1\), and each block’s intercept is shifted by its \(r_j\).

Illustration of a mixed model with random intercepts and same slopes
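As a hedged sketch of how this random intercept model could be fit in R (simulated data, and the lme4 package as one common option, neither prescribed by this chapter), the block term enters the formula as (1 | block):

library(lme4)

# Simulate a trial with 6 blocks and fit
# Y_ij = beta_0 + beta_1 * x_i + r_j + e_ij, with r_j as a random block intercept.
set.seed(103)
dat <- expand.grid(block = 1:6, nitrogen = c(0, 50, 100, 150))
block_dev <- rnorm(6, sd = 1)                     # r_j ~ N(0, sigma_b^2)
dat$yield <- 4 + 0.02 * dat$nitrogen + block_dev[dat$block] +
  rnorm(nrow(dat), sd = 0.5)
dat$block <- factor(dat$block)

fit_ri <- lmer(yield ~ nitrogen + (1 | block), data = dat)
VarCorr(fit_ri)   # estimated block and residual variance components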

3.2.1 Random slopes models

Other alternatives to the random intercept model include modelling random slopes where the slope between X and Y changes according to the random effect:

\[ Y_{ij} = \beta_0 + (\beta_1 + r_j)X_{ij} + \epsilon_{ij}\]

In this model, \(\beta_1\) is the average effect of X on Y, \(r_j\) is a random effect for block \(j\) applied to the slope, and all observations share a common intercept, \(\beta_0\).

Illustration of a mixed model with a fixed intercept and random slopes.
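A corresponding sketch in lme4 (again with simulated, hypothetical data): writing the random term as (0 + nitrogen | block) drops the random intercept, so all blocks share \(\beta_0\) but get their own slope deviations:

library(lme4)

# Simulate block-specific slope deviations (r_j) around a common slope,
# then fit a random-slopes-only model.
set.seed(104)
dat_rs <- expand.grid(block = 1:6, nitrogen = c(0, 50, 100, 150))
slope_dev <- rnorm(6, sd = 0.005)                 # r_j ~ N(0, sigma_b^2)
dat_rs$yield <- 4 + (0.02 + slope_dev[dat_rs$block]) * dat_rs$nitrogen +
  rnorm(nrow(dat_rs), sd = 0.5)
dat_rs$block <- factor(dat_rs$block)

fit_rs <- lmer(yield ~ nitrogen + (0 + nitrogen | block), data = dat_rs)
coef(fit_rs)$block   # identical intercepts, block-specific slopes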

The other alternative is including both a random slope and a random intercept:

\[ Y_{ij} = (\beta_0 + r1_j) + (\beta_1 + r2_j)X_{ij} + \epsilon_{ij}\]

In this model, \(r1_j\) and \(r2_j\) are random effects for subject \(j\) applied to the intercept and slope, respectively. Predictions vary depending on each subject’s intercept and slope terms:

Illustration of a mixed model with random intercepts and random slopes
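As one possible illustration of the combined model (using sleepstudy, the example data set shipped with lme4, rather than an agronomic trial), both terms go inside the parentheses:

library(lme4)

# Random intercept and random slope for each Subject, allowed to be correlated:
# Reaction_ij = (beta_0 + r1_j) + (beta_1 + r2_j) * Days_ij + e_ij
fit_both <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
VarCorr(fit_both)   # intercept variance, slope variance, and their correlation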

3.3 R Formula Syntax for Random and Fixed Effects

The formula notation used for linear mixed models in R is as follows:

model_formula <- formula(Y ~ X + (1|R))

In this formula, \(Y\) is the response variable (or dependent variable); \(X\) represents the independent variable(s) such as treatment (fixed effect), and \(R\) denotes the random grouping effect (e.g. block).

Random effects are placed in parentheses, and 1| denotes a random intercept (rather than a random slope). The table below, adapted from Bates et al. (2015), provides several examples of random effect structures; random grouping factors are denoted r, r1, and r2, and covariates x.

Formula | Alternative | Meaning
(1|r) | 1 + (1|r) | Random intercept with a fixed mean
(1|r1/r2) | (1|r1) + (1|r1:r2) | Intercept varying among r1 and among r2 within r1
(1|r1) + (1|r2) | 1 + (1|r1) + (1|r2) | Intercept varying among r1 and among r2
x + (x|r) | 1 + x + (1 + x|r) | Correlated random intercept and slope
x + (x||r) | 1 + x + (1|r) + (0 + x|r) | Uncorrelated random intercept and slope
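As a quick check of the expansions in this table, the sketch below uses Pastes, a data set shipped with lme4 in which casks are nested within batches (not otherwise part of this guide), to show that the nested shorthand and its expanded form fit the same model:

library(lme4)

# (1 | batch/cask) expands to (1 | batch) + (1 | batch:cask)
fit_nested   <- lmer(strength ~ 1 + (1 | batch/cask), data = Pastes)
fit_expanded <- lmer(strength ~ 1 + (1 | batch) + (1 | batch:cask), data = Pastes)
VarCorr(fit_nested)
VarCorr(fit_expanded)   # identical variance components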

The first example, (1|r), suffices for most mixed models and is the only structure used in this guide.

The main advantage of LMMs is their flexibility in handling clustered data and repeated measures, including both nested and crossed random effects. At the same time, these models are easy to mis-specify for new LMM users.

Bates, Douglas, Martin Mächler, Ben Bolker, and Steve Walker. 2015. “Fitting Linear Mixed-Effects Models Using lme4.” Journal of Statistical Software 67 (1): 1–48. https://doi.org/10.18637/jss.v067.i01.