JAGS Reference
About JAGS Reference
The JAGS Reference is a searchable cheat sheet for JAGS (Just Another Gibbs Sampler), the widely used Bayesian statistical modeling tool built on the BUGS language. It covers the complete JAGS model syntax, including the model block structure, deterministic and stochastic node definitions, for-loop constructs, and link functions such as logit, log, probit, and cloglog for generalized linear models.
This reference provides detailed coverage of JAGS probability distributions organized by use case: dnorm with precision parameterization (tau = 1/sigma^2), dbern and dbin for binary and count outcomes, dpois for counts and dnegbin for overdispersed count data, dgamma and dbeta for positive and probability-bounded parameters, dmnorm with Wishart priors for multivariate models, and truncated distributions via T() plus dinterval for censored data in survival analysis.
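As a flavor of the syntax these entries document, a hypothetical model fragment (node and variable names are illustrative, not taken from the reference itself) might declare:

```jags
model {
  y ~ dnorm(mu, tau)          # normal with precision tau = 1/sigma^2
  n ~ dpois(lambda)           # Poisson count outcome
  z ~ dbin(p, N)              # binomial: success probability first, then size
  w ~ dnorm(mu, tau) T(0, )   # same normal, left-truncated at zero
}
```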
The reference also includes complete model templates for Bayesian linear regression, logistic regression, hierarchical/multilevel models, and finite mixture models with Dirichlet priors. R interface sections cover rjags for direct JAGS control, R2jags for simplified workflows, coda for MCMC diagnostics (Gelman-Rubin Rhat, effective sample size, autocorrelation), DIC for model comparison, and pyjags for Python users.
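The hierarchical template, for example, follows the familiar random-intercepts pattern; the sketch below uses illustrative names and vague priors, not the reference's exact listing:

```jags
model {
  for (i in 1:N) {
    y[i] ~ dnorm(alpha[group[i]] + beta * x[i], tau.y)
  }
  for (j in 1:J) {
    alpha[j] ~ dnorm(mu.alpha, tau.alpha)   # group intercepts share a population prior
  }
  mu.alpha ~ dnorm(0, 0.001)
  beta ~ dnorm(0, 0.001)
  tau.y ~ dgamma(0.001, 0.001)
  tau.alpha ~ dgamma(0.001, 0.001)
}
```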
Key Features
- Complete BUGS model language syntax with model block, for loops, and deterministic (<-) vs. stochastic (~) node definitions
- All major distribution functions: dnorm, dbern, dbin, dpois, dnegbin, dgamma, dbeta, dunif, dt, dmnorm, dmulti, dcat with parameterization details
- GLM link functions including logit, log, probit, and cloglog with inverse function references
- Ready-to-use model templates for linear regression, logistic regression, hierarchical models, and mixture models
- R interface commands for rjags (jags.model, coda.samples), R2jags (jags function), and coda diagnostics
- MCMC configuration guidance: chain initialization, burn-in periods, thinning, and convergence assessment via Rhat and ESS
- DIC (Deviance Information Criterion) calculation for Bayesian model comparison
- Python integration via pyjags with equivalent model specification and posterior sampling workflows
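A minimal linear regression template of the kind listed above might look like this (variable names illustrative), showing the deterministic/stochastic node distinction and the precision parameterization:

```jags
model {
  for (i in 1:N) {
    mu[i] <- beta0 + beta1 * x[i]   # deterministic node (<-)
    y[i] ~ dnorm(mu[i], tau)        # stochastic node (~), precision parameterization
  }
  beta0 ~ dnorm(0, 0.001)           # vague priors on coefficients
  beta1 ~ dnorm(0, 0.001)
  tau ~ dgamma(0.001, 0.001)        # vague prior on the precision
  sigma <- 1/sqrt(tau)              # report the sd on the familiar scale
}
```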
Frequently Asked Questions
What is JAGS and how does it differ from Stan or WinBUGS?
JAGS (Just Another Gibbs Sampler) is a Bayesian inference engine that uses the BUGS model language with Gibbs sampling and slice sampling algorithms. Unlike Stan, which uses Hamiltonian Monte Carlo and its own modeling language, JAGS uses declarative BUGS syntax that is nearly identical to WinBUGS/OpenBUGS. JAGS is cross-platform (Linux, Mac, Windows), open-source under the GPL, and, unlike the original WinBUGS, does not require Windows.
Why does JAGS use precision (tau) instead of standard deviation (sigma) for dnorm?
JAGS follows the BUGS convention where the normal distribution dnorm(mu, tau) uses precision tau = 1/sigma^2 rather than standard deviation. This is important when specifying priors: dnorm(0, 0.001) means N(0, variance=1000), a very wide prior. To recover sigma in your model, use the deterministic relationship sigma <- 1/sqrt(tau). Stan, by contrast, parameterizes with sigma directly.
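In model code the conversion is a one-line deterministic node; the comments below spell out the variance implied by a precision prior (a fragment with illustrative names, assuming tau is defined elsewhere in the model):

```jags
beta ~ dnorm(0, 0.001)   # precision 0.001 -> variance 1/0.001 = 1000, sd about 31.6
sigma <- 1/sqrt(tau)     # derived node: standard deviation from precision tau
```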
How do I set up a non-informative prior in JAGS?
Common vague ("non-informative") priors in JAGS include dnorm(0, 0.001) for regression coefficients (very wide normal: precision 0.001 = variance 1000), dgamma(0.001, 0.001) for precision parameters, dunif(0, 100) for standard deviations, and dbeta(1, 1) for probability parameters (equivalent to uniform on 0-1). Jeffreys priors such as dbeta(0.5, 0.5) for a binomial proportion are also available.
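Collected as a single fragment, these common prior choices look like this (parameter names illustrative):

```jags
beta ~ dnorm(0, 0.001)       # wide normal: variance 1000
tau ~ dgamma(0.001, 0.001)   # vague prior directly on a precision
sigma ~ dunif(0, 100)        # alternative: uniform prior on the sd ...
tau.alt <- pow(sigma, -2)    # ... converted to a precision via a derived node
p ~ dbeta(1, 1)              # uniform on (0, 1) for a probability
```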
What is the difference between rjags and R2jags in R?
rjags provides low-level control: you manually create the model with jags.model(), run burn-in with update(), and sample with coda.samples(). R2jags wraps this into a single jags() function call where you specify data, parameters to monitor, iterations, burn-in, and thinning all at once. R2jags is simpler for standard workflows, while rjags offers more flexibility for custom MCMC strategies.
How do I diagnose MCMC convergence in JAGS?
Use the coda package in R after sampling. Key diagnostics include: gelman.diag() for the Gelman-Rubin Rhat statistic (values near 1.0, conventionally below 1.1, indicate convergence across chains), effectiveSize() for ESS (should be at least 100-400 per parameter), autocorr.diag() to check autocorrelation decay, and visual traceplots via plot(). Run at least 3 chains with different initial values to enable the multi-chain diagnostics.
How do I handle censored or truncated data in JAGS?
JAGS supports truncated distributions with T(lower, upper) syntax, for example y ~ dnorm(mu, tau)T(0,) for left-truncation at zero. For right-censored survival data, use the dinterval distribution: declare is.censored[i] ~ dinterval(t[i], c[i]), where c[i] is the censoring time and is.censored[i] is supplied as data (1 if observation i is censored, 0 otherwise). For censored observations, set t[i] to NA in the data and provide initial values for t[i] above c[i].
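A right-censoring sketch using dinterval might look like this (Weibull event times are an illustrative choice; is.censored and c come in as data, and initial values for the censored t[i] must lie above their c[i]):

```jags
model {
  for (i in 1:N) {
    is.censored[i] ~ dinterval(t[i], c[i])   # data: 1 if t[i] > c[i], else 0
    t[i] ~ dweib(shape, lambda)              # event-time model; t[i] is NA when censored
  }
  shape ~ dgamma(0.001, 0.001)    # vague priors (illustrative)
  lambda ~ dgamma(0.001, 0.001)
}
```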
What is DIC and how should I interpret it for model comparison?
DIC (Deviance Information Criterion) is computed via dic.samples() in JAGS. It equals Dbar + pD, where Dbar is the posterior mean deviance and pD is the effective number of parameters. Lower DIC values indicate better model fit with complexity penalization. DIC differences of 5-10 are considered meaningful. Note that DIC can be unreliable for mixture models and is best used for comparing models with similar structures.
Can I run JAGS from Python instead of R?
Yes, the pyjags package provides a Python interface to JAGS. You define the model code as a string, pass data as a Python dictionary, and call pyjags.Model() with chains and adaptation parameters. Posterior samples are returned as numpy arrays. Install JAGS system-wide first, then pip install pyjags. The syntax mirrors the R workflow but uses Python data structures throughout.