Problem Set 2

Data analysis and machine learning

Due date: 2022-03-11 17:00 (Melbourne time) unless by prior arrangement

Your submission should be in the form of a PDF that includes relevant figures. The PDF can be compiled from \(\LaTeX\), exported from a Jupyter notebook, or similar. You must also submit code scripts that reproduce your work in full.

Marks will depend on the results/figures that you produce, and the clarity and depth of your accompanying interpretation. Don't just submit figures and code! You must demonstrate understanding and justification of what you have submitted. Please ensure figures have appropriately labelled axes, that you have adopted sensible mathematical nomenclature, et cetera.

There are 50 marks allocated to this problem set, and the problem set accounts for 12% of your final grade. For example, if you receive 45 marks (90%) on this problem set, then it will contribute 10.8% to your final grade.

\[ \newcommand{\transpose}{^{\scriptscriptstyle \top}} \newcommand{\vec}[1]{\mathbf{#1}} \]

There are five questions in this problem set, with a total of 50 marks available.

Question 1

(Total 10 marks available)

Rosenbrock's function is \[ f(x, y) = (1 - x)^2 + 100(y - x^2)^2 \quad . \]

Calculate \(\log{f(x,y)}\) on a grid of \(x\) and \(y\) values from \(x \in [-2, 2]\) and \(y \in [-2, 2]\) with 250 grid points in each direction. Plot the \(\log{f(x,y)}\) contours and include a colour bar to show the value of \(\log{f(x,y)}\).
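If you are unsure where to start, the grid evaluation and contour plot could look something like the following minimal sketch (assuming matplotlib; the variable names and number of contour levels are illustrative only):

    import numpy as np
    import matplotlib.pyplot as plt

    # Evaluate log f(x, y) on a 250 x 250 grid over [-2, 2] x [-2, 2].
    x = np.linspace(-2, 2, 250)
    y = np.linspace(-2, 2, 250)
    X, Y = np.meshgrid(x, y)
    log_f = np.log((1 - X)**2 + 100 * (Y - X**2)**2)

    fig, ax = plt.subplots()
    contours = ax.contourf(X, Y, log_f, levels=50)
    fig.colorbar(contours, ax=ax, label=r"$\log f(x, y)$")
    ax.set_xlabel(r"$x$")
    ax.set_ylabel(r"$y$")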

Starting from an initial value of \((x,y) = (-1, -1)\), find the minimum of Rosenbrock's function using

  1. the BFGS (or L-BFGS-B) algorithm,
  2. the Nelder-Mead algorithm (also known as the simplex algorithm), and
  3. an algorithm of your choice.

For each algorithm, record the \((x,y)\) points trialled by the algorithm and plot these on the figure you have made. This will show the path taken by each optimisation algorithm. Be sure to include a legend so it is clear which path corresponds to which algorithm.

Hot Tip: You can record the positions trialled by an optimisation algorithm using an example like this:

    import numpy as np
    import scipy.optimize as op

    # Let's define a list to store the positions in.
    positions = []

    # Define the objective function.
    def objective_function(theta):
        # Copy theta so the record cannot be affected by later in-place changes.
        positions.append(np.copy(theta))
        x, y = theta
        # Remember: you have to write your own `rosenbrocks` function
        return rosenbrocks(x, y)  # or just rosenbrocks(*theta)

    op_result = op.minimize(
        objective_function,
        [-1, -1],
        method="Nelder-Mead"
    )

    # Now let's use the recorded positions.
    positions = np.array(positions)

Question 2

(Total 5 marks available)

Use the following code to generate some fake data that are drawn from the model \[ y \sim \mathcal{N}\left(\theta_0 + \theta_1x + \theta_2x^2 + \theta_3x^3,\sigma_{y}\right) \]

    import numpy as np

    np.random.seed(0)

    N = 30
    x = np.random.uniform(0, 100, N)

    theta = np.random.uniform(-1e-3, 1e-3, size=(4, 1))

    # Define the design matrix.
    A = np.vstack([
        np.ones(N),
        x,
        x**2,
        x**3
    ]).T

    y_true = (A @ theta).flatten()

    y_err_intrinsic = 10  # MAGIC number!
    y_err = y_err_intrinsic * np.random.randn(N)
    y = y_true + np.random.randn(N) * y_err
    y_err = np.abs(y_err)

Even though this model includes \(x^2\) and \(x^3\) terms, this is still a linear problem that you can solve with linear algebra! Do you see why that is? (This is the generalised representation of ordinary least-squares!)
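One way to see this (restating what the data-generating code above already does): collect the parameters into a vector \(\vec{\theta} = (\theta_0, \theta_1, \theta_2, \theta_3)\transpose\) and write the noise-free model values as \[ \vec{y}_{\rm true} = \vec{A}\vec{\theta} \quad \textrm{where} \quad \vec{A} = \begin{bmatrix} 1 & x_1 & x_1^2 & x_1^3 \\ \vdots & \vdots & \vdots & \vdots \\ 1 & x_N & x_N^2 & x_N^3 \end{bmatrix} \quad . \] The model is non-linear in \(x\), but it is linear in the parameters \(\vec{\theta}\), which is all that matters for the linear algebra.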

Let us assume that the generated data have no uncertainties in the \(x\)-direction, and that the uncertainties in the \(y\)-direction are normally distributed. Cast this problem in matrix form and find a point estimate of the best-fitting model parameters. Make a plot showing the data, the true model, and the model using the best-fitting model parameters.

Question 3

(Total 10 marks available)

In this question we will use a Leapfrog integrator to integrate the motion of a particle in a potential. This will be used in Question 4 when you implement a Hamiltonian Monte Carlo sampler. Recall that at time \(t\) a particle in some system with a location \(\vec{x}(t)\) and a momentum \(\vec{p}(t)\) can be fully described by the Hamiltonian \[ \mathcal{H}\left(\vec{x},\vec{p}\right) = U\left(\vec{x}\right) + K\left(\vec{p}\right) \] where the sum of potential \(U\) and kinetic \(K\) energy remains constant with time. Let \[ K(\vec{p}) = \frac{\vec{p}\transpose\vec{p}}{2} \] and \[ U(\vec{x}) = -\log{p(\vec{x})} \] where \(\log{p(\vec{x})}\) is short-hand notation for the log posterior probability for the model parameters \(\vec{x}\) (which we normally denote as \(\vec{\theta}\)) \[ \log{p(\vec{x})} \equiv \log{p(\vec{\theta}|\vec{y})} \quad . \] The time evolution of this particle is given by \[ \frac{d\vec{x}}{dt} = \frac{\partial\mathcal{H}}{\partial\vec{p}} = \vec{p} \] and \[ \frac{d\vec{p}}{dt} = -\frac{\partial\mathcal{H}}{\partial\vec{x}} = \frac{\partial{}\log{p(\vec{x})}}{\partial{}\vec{x}} \quad . \] It is clear that to integrate the motion of a particle in this system we will need the derivative of \(\log{p(\vec{x})}\) with respect to \(\vec{x}\).

Let us assume a model where we have a single data point \(y\) that is drawn from the model \[ y \sim \mathcal{N}\left(y_t, \sigma_y\right) \] where our only model parameter is the true value \(y_t\). The single data point is \((y, \sigma_y) = (1.2, 1)\).
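For reference, if we assume a flat (constant) prior on \(y_t\) (the assumption behind the function below), then the negative log posterior is, up to an additive constant, \[ -\log{p(y_t|y)} = \frac{\left(y_t - y\right)^2}{2\sigma_y^2} + \textrm{constant} = \frac{1}{2}\left(y_t - 1.2\right)^2 + \textrm{constant} \quad . \]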

The negative log posterior probability function for this model can be written as:

    def neg_log_prob(theta):
        return 0.5 * (theta - 1.2)**2

Start with an initial guess of the model parameter somewhere between -1 and +1. Draw an initial momentum value \(p \sim \mathcal{N}(0, 1)\). With this initial position and momentum, integrate the orbit of the particle using a Leapfrog integration scheme. An example of the integration scheme is given below, but you will need to change this to record the particle position and momentum at every step. You will need the derivative of the log probability with respect to the parameters. Either derive this, or use Jax (the derivative here is easy to write down, but this is giving you the opportunity to learn about Jax, so that you are familiar with it when you move to more complex functions):

    from jax import grad

    grad_neg_log_prob = grad(neg_log_prob)

Make a plot showing the particle position (x-axis) and its momentum (y-axis) at every step of the integrated orbit. In another plot, show the total energy of the system with each step. Is the energy conserved? Why or why not?

    import numpy as np

    def leapfrog_integration(x, p, dU_dx, n_steps, step_size):
        """
        Integrate a particle along an orbit using the Leapfrog integration scheme.
        """
        x = np.copy(x)
        p = np.copy(p)

        # Take a half-step first.
        p -= 0.5 * step_size * dU_dx(x)

        for step in range(n_steps):
            x += step_size * p
            p -= step_size * dU_dx(x)

        x += step_size * p
        p -= 0.5 * step_size * dU_dx(x)

        return (x, -p)
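For example, the integrator above can be driven with the Jax gradient from before (a minimal usage sketch, not a required answer; the small wrapper converting to plain floats is only there to avoid mixing Jax and NumPy arrays in the in-place updates, and the step size and number of steps are illustrative values only):

    import numpy as np
    from jax import grad

    def neg_log_prob(theta):
        return 0.5 * (theta - 1.2)**2

    grad_neg_log_prob = grad(neg_log_prob)

    # Wrap the Jax gradient so it accepts and returns plain Python floats.
    def dU_dx(x):
        return float(grad_neg_log_prob(float(x)))

    x0 = 0.5                 # initial guess between -1 and +1
    p0 = np.random.randn()   # initial momentum p ~ N(0, 1)

    x_new, p_new = leapfrog_integration(x0, p0, dU_dx, n_steps=30, step_size=0.1)

Remember that for the plots you will still need to modify the integrator so that it records \(x\) and \(p\) at every step.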

Question 4

(Total 15 marks available)

Use the potential and momentum distribution defined in Question 3 and build a Hamiltonian Monte Carlo sampler for that model. Run your sampler for at least 1,000 steps, ensuring that you are taking an appropriate number of steps (and step size) in the Leapfrog integration. Start the sampler from an initial value of \(x = -10\). Make a plot of the MCMC chain, and a histogram of the posterior on \(y_t\). Has the chain converged?
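As a reminder (this is the standard accept/reject step of Hamiltonian Monte Carlo, not a prescription for your particular implementation): at each iteration you draw a fresh momentum \(p \sim \mathcal{N}(0, 1)\), integrate from \((x, p)\) to a proposal \((x', p')\) with the Leapfrog scheme, and accept the proposal with probability \[ \alpha = \min\left[1, \exp\left(\mathcal{H}(x, p) - \mathcal{H}(x', p')\right)\right] \quad , \] otherwise keeping the current value of \(x\).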

Question 5

(Total 10 marks available)

Implement the same model as in Question 4 using PyMC or Stan. Use the default sampling algorithm and default hyperparameters to sample the posteriors. Make a plot showing the chains produced by your sampler and those from PyMC or Stan. Comment on any differences.
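For example, a minimal PyMC sketch of this model could look like the following (this assumes PyMC version 4 or later and a flat prior on \(y_t\), consistent with Question 3; the variable names are illustrative, and you are free to use Stan instead):

    import pymc as pm

    with pm.Model() as model:
        # Flat prior on the true value y_t (assumption, matching the potential in Question 3).
        y_t = pm.Flat("y_t")

        # The single datum (y, sigma_y) = (1.2, 1).
        pm.Normal("y", mu=y_t, sigma=1.0, observed=1.2)

        # Default sampler with default hyperparameters.
        trace = pm.sample()

You will still need to plot these chains alongside those from your own sampler in Question 4 and comment on any differences, as asked above.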