Key words for Week 2: stories (Odin, antidepressant); simulation for gamma posteriors and Dirichlet posteriors (and mixtures thereof); approximate normality; prior elicitation.

We have been through (more or less): Ch 7 exercises 1, 2, 3, 4, 5, 9, 12, 13, 15 (note the mixture prior), 16; then Stories #41 (Odin's children, Bayesian part) and #13 (antidepressant, Bayesian part).

I've placed more Nils-R-code on the site; check e.g. com351a, with mixture prior things for 7.15 (see also 7.23). 

For Week 3 onwards: Story #54 (Star Trek, Bayesian part), then exercises 17, 18, 19 (MCMC). Extra: Suppose people in a city have IQs following the \N(100,\sigma^2) distribution, with \sigma = 15. You visit a special school, where IQs follow the \N(\xi, 0.5\sigma^2) distribution, with the normal above as the prior for \xi. How do you change your view about the pupils of this school when the first pupil you test has IQ = 130?
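As a hint for the extra IQ question, the conjugate normal-normal update can be sketched as follows (a Python illustration for checking the arithmetic; the course code is in R):

```python
# Conjugate normal-normal update for the IQ example:
# prior xi ~ N(100, sigma^2) with sigma = 15, and one observation
# y ~ N(xi, 0.5 * sigma^2), here y = 130.
sigma2 = 15.0**2          # prior variance, 225
tau2 = 0.5 * sigma2       # observation variance, 112.5
mu0, y = 100.0, 130.0

# posterior precision = prior precision + data precision;
# posterior mean = precision-weighted average of mu0 and y
post_var = 1.0 / (1.0 / sigma2 + 1.0 / tau2)
post_mean = post_var * (mu0 / sigma2 + y / tau2)
print(round(post_mean, 6), round(post_var, 6))  # -> 120.0 75.0
```

So one pupil with IQ 130 moves the view about \xi from \N(100, 225) to \N(120, 75): the data variance is half the prior variance, so the single observation gets twice the weight of the prior mean.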

Next story (after this): Story #31 (deer in forest, Bayesian part).

Aug. 27, 2025 3:35 PM

Note: the *curriculum for the course* will essentially be the list of exercises and stories we work through (plus some bits from Chs 1 and 5), in the course of the course (!). 

We have been through (more or less): Ch7 exercises 1, 2, 3, 4, 5, 9, 12, 13, and Story #41 (Odin's children, Bayesian part).

I've placed more Nils-R-code on the site; check com351, with mixture prior things for 7.15.

For Wed Aug 27 and next week: first Story #13 (antidepressant, Bayesian part); then Ch 7 exercises 15, 16; then Story #54 (Star Trek, Bayesian part); then exercises 17, 18 (MCMC).

Aug. 25, 2025 3:56 PM

We'll do Exercise 7.15 during Wed Aug 27. There's a little mistake in the text (which will duly be repaired): I first write the outcomes (4, 6, 6, 7, 8, 9), and later in the same exercise (2, 5, 3, 7, 5, 8). This is of minor importance for this toy example; the point is to build and run Dirichlet-simulation based algorithms, then read off the answers. It should be very easy to re-run your code for any new dataset (and with the same priors). But let's stick to (4, 6, 6, 7, 8, 9).

Point (b), with the delta method for the ML estimate of \delta, requires a bit of machinery from Ch 5 (or Ch 2), and I will spend a couple of minutes on that as well -- but the main Bayesian task, at this stage, is to understand the posterior, run simulations, and read off answers.
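For reference, the delta method in its generic one-dimensional form (the particular \delta of Exercise 7.15(b) is defined in the exercise text; g below is a placeholder function) can be stated as:

```latex
% Generic one-dimensional delta method:
% if  \sqrt{n}\,(\hat\theta - \theta) \to_d \N(0, \tau^2),
% and g is differentiable at \theta with g'(\theta) \ne 0, then
\sqrt{n}\,\{g(\hat\theta) - g(\theta)\} \to_d \N(0, g'(\theta)^2 \tau^2)
```

So the approximate normality of the ML estimate carries over to smooth functions of it, with the variance scaled by the squared derivative.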

The recipe via independent gammas of point (a) is perfectly fine & insightful. You may however also do library("MCMCpack") and then their "rdirichlet", to save you from a couple...

Aug. 22, 2025 5:03 PM