
Linear Algebra

An introduction to the power method, which can be used to estimate the dominant eigenvector of a matrix. It is the foundation of many of the linear algebra methods used in Machine Learning!
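
For a flavour of the idea, here is a minimal NumPy sketch (the function name, iteration count and test matrix are illustrative, not taken from the video):

    import numpy as np

    def power_method(A, num_iters=100):
        """Estimate the dominant eigenpair of A by repeated multiplication."""
        x = np.random.default_rng(0).standard_normal(A.shape[0])  # random initial estimate
        for _ in range(num_iters):
            x = A @ x
            x /= np.linalg.norm(x)  # renormalise so the iterates stay bounded
        return x @ A @ x, x         # Rayleigh quotient gives the paired eigenvalue

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    lam, v = power_method(A)        # lam approaches A's largest eigenvalue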

A second look at the power method. Here we:

1. Consider the case where A is not symmetric

2. Consider the impact of the initial estimate that is used in the power method

3. Look at the "shifted power method", which can be used to find different eigenvectors of A (sketched below)
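
As a hedged sketch of point 3, assuming the usual formulation (run the power method on A - shift*I, which keeps A's eigenvectors but shifts every eigenvalue, so a different one can dominate); the shift value and matrix are illustrative:

    import numpy as np

    def shifted_power_method(A, shift, num_iters=100):
        """Power method on (A - shift*I); its eigenvectors match A's,
        but the dominant one can differ once the eigenvalues are shifted."""
        B = A - shift * np.eye(A.shape[0])
        x = np.ones(A.shape[0])  # a fixed initial estimate; one orthogonal to
                                 # the target eigenvector would fail (point 2)
        for _ in range(num_iters):
            x = B @ x
            x /= np.linalg.norm(x)
        return x @ A @ x, x      # Rayleigh quotient with respect to the original A

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    # Shifting by (roughly) the dominant eigenvalue of A makes the power
    # method converge to the other eigenvector instead.
    lam, v = shifted_power_method(A, shift=3.618)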

Building on the previous videos in this series, we look at the Gram-Schmidt process, which forms the basis of many different approaches in linear algebra. We do some maths before thinking about a geometrical interpretation of the algorithm. Oh, and we define "span".
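
A minimal NumPy sketch of classical Gram-Schmidt as just described (names and test vectors are ours):

    import numpy as np

    def gram_schmidt(vectors):
        """Orthonormalise linearly independent vectors: subtract from each
        vector its projections onto the basis built so far, then normalise.
        The output spans the same space as the input."""
        basis = []
        for v in vectors:
            w = v - sum((v @ q) * q for q in basis)  # remove components along the basis
            basis.append(w / np.linalg.norm(w))
        return basis

    q1, q2 = gram_schmidt([np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])])
    print(q1 @ q2)  # ~0: the outputs are orthogonal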

Here we look at an application of the Gram-Schmidt process (see the previous video in this series), specifically showing that it can be used to construct a QR decomposition. We then show how the QR decomposition can be used to solve linear systems of equations.
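
A small sketch of that pipeline, assuming NumPy (we lean on np.linalg.qr rather than a hand-rolled Gram-Schmidt QR, and the back-substitution is our own illustrative code):

    import numpy as np

    def qr_solve(A, b):
        """Solve Ax = b via A = QR: since Q is orthonormal and R is upper
        triangular, Rx = Q^T b can be solved by back-substitution."""
        Q, R = np.linalg.qr(A)
        y = Q.T @ b
        n = len(b)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):               # back-substitution
            x[i] = (y[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(qr_solve(A, b), np.linalg.solve(A, b))     # should agree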

A first look at the Conjugate Gradients algorithm: a method for solving linear systems whose suitability for deployment on GPUs makes it a core part of many modern Machine Learning approaches (GPyTorch, for example).
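
For a flavour, here is a minimal textbook-style sketch in NumPy (names and the tolerance are ours; this is not GPyTorch's implementation):

    import numpy as np

    def conjugate_gradients(A, b, tol=1e-10):
        """CG for a symmetric positive-definite A. It only needs
        matrix-vector products with A, which is what makes it
        attractive on GPUs."""
        x = np.zeros(len(b))
        r = b - A @ x                  # residual
        d = r.copy()                   # first search direction
        rs = r @ r
        for _ in range(len(b)):
            Ad = A @ d
            alpha = rs / (d @ Ad)      # exact step length along d
            x += alpha * d
            r -= alpha * Ad
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            d = r + (rs_new / rs) * d  # next direction, A-conjugate to the rest
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradients(A, b), np.linalg.solve(A, b))  # should agree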


Funded by the University of Liverpool Centre for Doctoral Training in Distributed Algorithms (https://www.liverpool.ac.uk/distributed-algorithms-cdt/).

Here we look at the "conjugate directions" algorithm and show that it converges after n iterations, motivating what will eventually become the conjugate gradients algorithm.

We complete the conjugate directions algorithm and implement it in Python (a sketch in that spirit follows below), highlighting how we can use it to balance precision against computational cost, while also motivating the improvements that lead to conjugate gradients.
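
A minimal sketch in that spirit (the A-conjugate directions are built here by Gram-Schmidt in the A inner product; this construction is illustrative, not necessarily the one used in the video):

    import numpy as np

    def a_conjugate_basis(A):
        """Make directions with d_i^T A d_j = 0 for i != j by applying
        Gram-Schmidt to the standard basis under the A inner product."""
        dirs = []
        for e in np.eye(A.shape[0]):
            d = e - sum(((e @ A @ p) / (p @ A @ p)) * p for p in dirs)
            dirs.append(d)
        return dirs

    def conjugate_directions(A, b, directions):
        """One exact line search along each A-conjugate direction; for
        SPD A this reaches the solution after n steps, and stopping
        early trades accuracy for fewer matrix-vector products."""
        x = np.zeros(len(b))
        for d in directions:
            r = b - A @ x                     # current residual
            x += ((d @ r) / (d @ A @ d)) * d  # exact step length along d
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_directions(A, b, a_conjugate_basis(A)))  # = np.linalg.solve(A, b)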
