Recurrent Neural Networks and Causality in Machine Learning

Data analysis and machine learning

As this is the last class and you will not be assessed on this topic, this lecture will be more self-directed than the others. I encourage you to read the linked articles and posts, and to watch the (shorter) videos on Panopto.

Recurrent Neural Networks

Recurrent Neural Networks are a neural network architecture in which neurons carry a hidden state forward in time, allowing the network to remember (or re-use) information from the last \(n\) inputs in a sequence, as sketched below. Recurrent Neural Networks have not seen much use in physics and astronomy, mostly because when we have time-series data in these fields we usually have plenty of it, and it tends to be strongly periodic rather than quasi-periodic and poorly sampled.
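
To make the memory idea concrete, here is a minimal sketch of a "vanilla" recurrent layer, whose hidden state is updated at each time step as \(h_t = \tanh(W_{xh} x_t + W_{hh} h_{t-1} + b)\). The dimensions and weights below are made up for illustration (in practice the weights would be learned by backpropagation through time); this is a sketch of the forward pass only, not a full implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the lecture).
n_inputs, n_hidden = 3, 8

# Randomly initialised weights; in practice these are learned.
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_inputs))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden
b_h = np.zeros(n_hidden)

def rnn_forward(xs):
    """Run the recurrent layer over a sequence of input vectors.

    The hidden state h is carried forward from one time step to the
    next, which is what lets the network 'remember' earlier inputs.
    """
    h = np.zeros(n_hidden)
    hidden_states = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        hidden_states.append(h)
    return np.array(hidden_states)

# Example: a sequence of 5 time steps, each a 3-dimensional input.
sequence = rng.normal(size=(5, n_inputs))
states = rnn_forward(sequence)
print(states.shape)  # (5, 8): one hidden state per time step
```

Because each hidden state depends on the previous one, the state at time \(t\) is an (implicit) function of every earlier input, which is the key structural difference from a feed-forward network.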

Causality in Machine Learning

Causality is a really important part of machine learning, and it is going to become much more prominent in the coming decade. I encourage you to read a few recent papers and blog posts on this topic, and to watch the corresponding video on Panopto.

Summary

Machine learning describes a collection of tools, and as a physicist you can use any number of them. But as physicists we are not simply interested in the prediction (the output) of a tool. We want to understand how the tool works, and whether there are (learned) components of it that can tell us something about the Universe. If we are going to use a tool to solve a complex problem in physics, then we ought to understand the tool better than we understand the problem itself.
