# Video Lectures: Information Theory, Pattern Recognition, and Neural Networks, by David J. C. MacKay

by reiver

David J. C. MacKay has a number of video lectures available on Information Theory, Pattern Recognition, and Neural Networks.

Here is the complete list of video lectures:

- Lecture 1: Introduction to Information Theory
- Lecture 2: Entropy and Data Compression (I): Introduction to Compression, Information Theory and Entropy
- Lecture 3: Entropy and Data Compression (II): Shannon's Source Coding Theorem and the Bent Coin Lottery
- Lecture 4: Entropy and Data Compression (III): Shannon's Source Coding Theorem, Symbol Codes
- Lecture 5: Entropy and Data Compression (IV): Shannon's Source Coding Theorem, Symbol Codes and Arithmetic Coding
- Lecture 6: Noisy Channel Coding (I): Inference and Information Measures for Noisy Channels
- Lecture 7: Noisy Channel Coding (II): The Capacity of a Noisy Channel
- Lecture 8: Noisy Channel Coding (III): The Noisy-Channel Coding Theorem
- Lecture 9: A Noisy Channel Coding Gem, And An Introduction To Bayesian Inference (I)
- Lecture 10: An Introduction To Bayesian Inference (II): Inference Of Parameters And Models
- Lecture 11: Approximating Probability Distributions (I): Clustering As An Example Inference Problem
- Lecture 12: Approximating Probability Distributions (II): Monte Carlo Methods (I): Importance Sampling, Rejection Sampling, Gibbs Sampling, Metropolis Method
- Lecture 13: Approximating Probability Distributions (III): Monte Carlo Methods (II): Slice Sampling, Hybrid Monte Carlo, Over-relaxation, Exact Sampling
- Lecture 14: Approximating Probability Distributions (IV): Variational Methods
- Lecture 15: Data Modelling With Neural Networks (I): Feedforward Networks: The Capacity Of A Single Neuron, Learning As Inference
- Lecture 16: Data Modelling With Neural Networks (II): Content-Addressable Memories And State-Of-The-Art Error-Correcting Codes
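The compression lectures (2–5) center on Shannon entropy, H(X) = −Σᵢ pᵢ log₂ pᵢ, which sets the limit on lossless compression. As a quick illustration (a minimal sketch, not code from the lectures), here is the entropy of the "bent coin" from Lecture 3 compared with a fair coin:

```python
import math

def entropy(ps):
    """Shannon entropy, in bits, of a discrete distribution.

    Terms with p = 0 contribute nothing (0 * log 0 is taken as 0).
    """
    return -sum(p * math.log2(p) for p in ps if p > 0)

# A fair coin carries 1 bit of information per toss:
print(entropy([0.5, 0.5]))   # 1.0

# A bent coin with p(heads) = 0.1 carries much less,
# which is why its outcome sequences are compressible:
print(entropy([0.1, 0.9]))   # ≈ 0.469 bits per toss
```

Shannon's source coding theorem, the subject of Lectures 3–5, says these entropy values are exactly the achievable compression rates in bits per symbol.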
