In Bayesian inference we make a sort of deal with the devil: we commit to a strong model, and from this we get strong inferences. But, as the saying goes, with great power comes great responsibility. We need to vigilantly check the fit of our models, following this up with model improvement. As a result, Bayesian workflow does not involve fitting just one model to data. We typically fit multiple models, including some models that we know are too simple (to get a sense of what is lost by not including certain features in our analysis) and others that we suspect are too complex (to get a sense of the boundaries of what we can learn given the resolution of our available data).
Model checking consists of the following steps:
Simulate fake data. Specify the sample size and values for all predictors in the model, choose a set of values for all model parameters, and generate the corresponding data values y.
Fit the model. Express the model in Stan, pass the simulated data into the program, and estimate the parameters.
Evaluate the fit. Compare the estimated parameters (or, more fully, the posterior distribution of the parameters) to their true values, which in this simulated-data scenario are known.
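The three steps above can be sketched in code. The text fits models in Stan; as a lightweight stand-in, the sketch below uses a linear regression with hypothetical true parameter values and an ordinary least-squares fit (a proxy for the posterior mean under flat priors), then checks that the known true values are recovered.

```python
import numpy as np

rng = np.random.default_rng(42)

# Step 1: simulate fake data from y = a + b*x + error, with "true"
# parameter values chosen by us (illustrative values, not from the text).
a_true, b_true, sigma_true = 1.5, 2.0, 0.5
n = 1000
x = rng.uniform(0, 10, size=n)
y = a_true + b_true * x + rng.normal(0, sigma_true, size=n)

# Step 2: fit the model. Least squares stands in for Stan's posterior
# estimation; with flat priors the posterior mean is close to the OLS fit.
X = np.column_stack([np.ones(n), x])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ np.array([a_hat, b_hat])
sigma_hat = resid.std(ddof=2)

# Step 3: evaluate the fit by comparing estimates to the known true values.
print(f"a:     true={a_true},  est={a_hat:.2f}")
print(f"b:     true={b_true},  est={b_hat:.2f}")
print(f"sigma: true={sigma_true}, est={sigma_hat:.2f}")
```

In a full Bayesian version, step 3 would compare each true parameter value against the whole posterior distribution (for example, checking how often the true value falls inside its posterior interval across repeated simulations), not just against a point estimate.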