
greta.gam lets you use mgcv’s smoother functions and formula syntax to define smooth terms for use in a greta model. You can then define your own likelihood to complete the model, and fit it by MCMC.

The design and architecture of the package are by Nick Golding and David L Miller.

Example

Here’s a simple example adapted from the mgcv ?gam help file. In mgcv:

library(mgcv)
set.seed(2024 - 12 - 12)

# simulate some data...
dat <- gamSim(1, n = 400, dist = "normal", scale = 0.3)
head(dat)
# fit a model using gam()
mgcv_fit <- gam(y ~ s(x2), data = dat)
mgcv_fit
summary(mgcv_fit)
## show partial residuals
plot(mgcv_fit, scheme = 1, shift = coef(mgcv_fit)[1])

Now we fit the same model in greta. We start by setting up the linear predictor for the smooth, that is, the right-hand side of the formula:

library(greta.gam)
set.seed(2024 - 02 - 09)
# setup the linear predictor for the smooth
linear_predictor <- smooths(~ s(x2), data = dat)
linear_predictor

Now we specify the distribution of the response:

dist_sd <- cauchy(0, 1, truncation = c(0, Inf))
distribution(dat$y) <- normal(mean = linear_predictor, sd = dist_sd)

Now let’s make some prediction data:

pred_dat <- data.frame(
  x2 = seq(0, 1, length.out = 100)
)

head(pred_dat)

We run evaluate_smooths() on the linear predictor with the new prediction data:

linear_preds <- evaluate_smooths(linear_predictor, newdata = pred_dat)
linear_preds

Now we define the model object and fit it with MCMC, as we normally do with greta:

# build model
m <- model(linear_preds)
m
# draw from the posterior
draws <- mcmc(m, n_samples = 200, verbose = FALSE)
class(draws)
# 4 chains
length(draws)

# 200 draws, 100 prediction points
dim(draws[[1]])

# look at the top corner
draws[[1]][1:5, 1:5]

Now let’s compare the mgcv model fit to the greta.gam fit:

plot(mgcv_fit, scheme = 1, shift = coef(mgcv_fit)[1])

# add in a line for each posterior sample
apply(draws[[1]], 1, lines, x = pred_dat$x2, 
      col = adjustcolor("firebrick", alpha.f = 0.1))

# plot the data
points(dat$x2, dat$y, pch = 19, cex = 0.2)

The mgcv predictions are shown by the grey ribbon, and the greta.gam posterior samples in red. We can see that the greta.gam predictions fall within the range of the mgcv fit, which is good news!
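As a rough numerical check (this snippet is not part of the original example, and just reuses the objects created above), we can compare the posterior mean of the greta.gam predictions with mgcv’s point predictions at the same covariate values:

# posterior mean of the greta.gam linear predictor at the prediction points
greta_mean <- colMeans(as.matrix(draws[[1]]))
# mgcv point predictions at the same values of x2
mgcv_mean <- predict(mgcv_fit, newdata = pred_dat)
# differences should be small relative to the residual standard deviation
summary(as.numeric(greta_mean - mgcv_mean))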

Brief technical details

greta.gam uses a few tricks from the jagam (Wood, 2016) routine in mgcv to get things to work. Here are some brief details for those interested in the internal workings.

Bayesian interpretation of the GAM

GAMs are models with Bayesian interpretations (even when fitted using “frequentist” methods). One can think of the smoother penalty matrix as a prior precision matrix in a Bayesian random effects model. Design matrices are constructed exactly as in the frequentist case. See Miller (2021) for more background on this.
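As a minimal sketch of that correspondence (a toy, full-rank penalty matrix, nothing from greta.gam’s internals): the quadratic smoothing penalty is, up to an additive constant, minus twice the log density of a zero-mean Gaussian prior whose precision matrix is the scaled penalty matrix.

# toy illustration of "penalty matrix = prior precision matrix"
set.seed(1)
S <- crossprod(matrix(rnorm(9), 3, 3)) # a toy, full-rank penalty matrix
lambda <- 2                            # smoothing parameter
beta <- rnorm(3)                       # some coefficient values
penalty <- lambda * drop(t(beta) %*% S %*% beta)
# log density of a N(0, solve(lambda * S)) prior, written via the precision
# matrix lambda * S so no explicit inversion is needed
log_prior <- 0.5 * c(determinant(lambda * S)$modulus) -
  1.5 * log(2 * pi) - 0.5 * penalty
# penalty + 2 * log_prior is a constant that does not involve beta
penalty + 2 * log_prior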

Penalty matrices

There is a slight difficulty with the Bayesian interpretation of the GAM: in their naïve form the priors are improper, because the penalty has a nullspace (in the 1D case, usually the linear term). To get proper priors we can use one of the “tricks” employed in Marra & Wood (2011), that is, to additionally penalise the nullspace directions that lead to the improper prior. We take the option provided by jagam and create an additional penalty matrix for these terms, obtained from an eigendecomposition of the penalty matrix (see Marra & Wood, 2011).
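A minimal sketch of that construction (written from the description in Marra & Wood, 2011, not copied from greta.gam or jagam): eigendecompose the penalty matrix and build a second penalty from the eigenvectors whose eigenvalues are numerically zero, so that every direction of coefficient space is penalised and the joint prior becomes proper.

# extra penalty for the nullspace of a penalty matrix S
nullspace_penalty <- function(S, tol = 1e-8) {
  eig <- eigen(S, symmetric = TRUE)
  # eigenvectors with (numerically) zero eigenvalues span the nullspace,
  # i.e. the directions that S does not penalise
  null_vectors <- eig$vectors[, eig$values < tol * max(eig$values), drop = FALSE]
  tcrossprod(null_vectors)
}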

References

Marra, G and Wood, SN (2011) Practical variable selection for generalized additive models. Computational Statistics and Data Analysis, 55, 2372–2387.

Miller, DL (2021) Bayesian views of generalized additive modelling. arXiv.

Wood, SN (2016) Just Another Gibbs Additive Modeler: Interfacing JAGS and mgcv. Journal of Statistical Software, 75(7).