Linear Algebra#


Note

A recording of this can be found here.

The following is mostly taken from https://numpy.org/doc/stable/reference/routines.linalg.html. Start there for further details.

NumPy provides a powerful N-dimensional array object and tools for working with these arrays, which makes it specifically attractive for linear algebra tasks.

Linear algebra (numpy.linalg)#

The NumPy linear algebra functions rely on BLAS and LAPACK to provide efficient low level implementations of standard linear algebra algorithms. Those libraries may be provided by NumPy itself using C versions of a subset of their reference implementations but, when possible, highly optimized libraries that take advantage of specialized processor functionality are preferred.

Note

The SciPy library also contains a linalg submodule, and there is overlap in the functionality provided by the SciPy and NumPy submodules. SciPy contains functions not found in numpy.linalg, such as functions related to LU decomposition and the Schur decomposition, multiple ways of calculating the pseudoinverse, and matrix transcendentals such as the matrix logarithm. Some functions that exist in both have augmented functionality in scipy.linalg. For example, scipy.linalg.eig can take a second matrix argument for solving generalized eigenvalue problems. Some functions in NumPy, however, have more flexible broadcasting options. For example, numpy.linalg.solve can handle “stacked” arrays, while scipy.linalg.solve accepts only a single square array as its first argument.
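As a quick illustration of the "stacked" behaviour mentioned above, `numpy.linalg.solve` can solve many systems in a single call; a minimal sketch (the matrix values below are made up for the example):

```python
import numpy as np

# a stack of two independent 2x2 systems, solved in a single call
A = np.array([[[2.0, 0.0],
               [0.0, 2.0]],
              [[1.0, 0.0],
               [0.0, 4.0]]])        # shape (2, 2, 2): two 2x2 matrices
b = np.array([[[2.0], [4.0]],
              [[3.0], [8.0]]])      # shape (2, 2, 1): two right-hand sides

x = np.linalg.solve(A, b)           # solves both systems at once
print(x[:, :, 0])                   # [[1. 2.], [3. 2.]]
```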

Vectors, Matrices & Dot Products#

# import numpy
import numpy as np
# define vectors (1D arrays)
a = np.array([1,0,0])
b = np.array([0,1,0])
c = np.array([1,1,1])

print( a )
print( b )
print( c )
[1 0 0]
[0 1 0]
[1 1 1]
# get dot products via np.dot( a,b )
print( np.dot(a, b) )
print( np.dot(a, c) )
0
1
# NOTE: This is different from the element-wise product!
print( a * b )
print( a * c )
[0 0 0]
[1 0 0]
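The two products are related: the dot product is simply the sum over the element-wise product, which we can verify with the vectors from above:

```python
import numpy as np

a = np.array([1, 0, 0])
c = np.array([1, 1, 1])

# np.dot sums up the element-wise products
print(np.dot(a, c))   # 1
print(np.sum(a * c))  # 1
```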
# define a matrix by hand ...
matA = np.array([[8, 7, 6],
                 [5, 4, 3],
                 [2, 1, 0]])

print( matA )
[[8 7 6]
 [5 4 3]
 [2 1 0]]
# ... or via reshape()
matB = np.arange(0,9).reshape(3,3)
print( matB )
[[0 1 2]
 [3 4 5]
 [6 7 8]]
# get the matrix-vector product; note that the order matters
print( np.dot(matA, c), "!=", np.dot(c, matA) )
[21 12  3] != [15 12  9]
# finally, matrix-matrix products; again, the order matters
print( np.dot(matA, matB) )
print( "!=" )
print( np.dot(matB, matA) )
[[57 78 99]
 [30 42 54]
 [ 3  6  9]]
!=
[[ 9  6  3]
 [54 42 30]
 [99 78 57]]
matA @ matB
array([[57, 78, 99],
       [30, 42, 54],
       [ 3,  6,  9]])
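For 2D arrays the `@` operator and `np.dot` are interchangeable; a small check with the matrices from above also confirms that matrix multiplication does not commute:

```python
import numpy as np

matA = np.array([[8, 7, 6],
                 [5, 4, 3],
                 [2, 1, 0]])
matB = np.arange(0, 9).reshape(3, 3)

# @ and np.dot agree for 2D arrays ...
print(np.array_equal(matA @ matB, np.dot(matA, matB)))  # True
# ... but swapping the operands changes the result
print(np.array_equal(matA @ matB, matB @ matA))         # False
```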

Transpose, Trace, Determinant, Norm, Inverse#

# let's start with a matrix
matA = np.arange(0,9).reshape(3,3)
print( matA )
[[0 1 2]
 [3 4 5]
 [6 7 8]]
# do the standard transpose
print( np.transpose(matA) )
[[0 3 6]
 [1 4 7]
 [2 5 8]]
# ... or via MAT.transpose()
print( matA.transpose() )
[[0 3 6]
 [1 4 7]
 [2 5 8]]
# ... or via MAT.T
print( matA.T )
[[0 3 6]
 [1 4 7]
 [2 5 8]]
# and now get the trace
print( matA.trace() )
12
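As a sanity check, the trace is just the sum over the main diagonal:

```python
import numpy as np

matA = np.arange(0, 9).reshape(3, 3)

# the trace equals the sum of the diagonal elements: 0 + 4 + 8
print(matA.trace())           # 12
print(np.sum(np.diag(matA)))  # 12
```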
# get the determinant
# ... via the linalg package

print( np.linalg.det(matA) )
0.0
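A zero determinant tells us the matrix is singular, so trying to invert it fails; a small sketch of the error handling:

```python
import numpy as np

matA = np.arange(0, 9).reshape(3, 3)

# det(matA) == 0, so matA has no inverse and np.linalg.inv raises
try:
    np.linalg.inv(matA)
    invertible = True
except np.linalg.LinAlgError as err:
    invertible = False
    print("not invertible:", err)
```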
# get the norm
print( np.linalg.norm(matA) )
14.2828568570857
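By default, `np.linalg.norm` returns the Frobenius norm for matrices, i.e. the square root of the sum of the squared entries, which we can reproduce by hand:

```python
import numpy as np

matA = np.arange(0, 9).reshape(3, 3)

# Frobenius norm: sqrt(0^2 + 1^2 + ... + 8^2) = sqrt(204)
print(np.linalg.norm(matA))
print(np.sqrt(np.sum(matA.astype(float) ** 2)))
```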
# and now the inverse
# ... via the linalg package
# ... and for a non-singular matrix
matA = np.array([[2,3],
                 [1,2]])

matB = np.linalg.inv(matA)

print( matB )
print( np.dot(matA, matB) )
[[ 2. -3.]
 [-1.  2.]]
[[1. 0.]
 [0. 1.]]
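For the 2x2 case we can cross-check `np.linalg.inv` against the closed-form inverse `1/(ad - bc) * [[d, -b], [-c, a]]`:

```python
import numpy as np

matA = np.array([[2, 3],
                 [1, 2]])

# closed-form inverse of [[a, b], [c, d]]
a, b = matA[0]
c, d = matA[1]
inv_closed = np.array([[d, -b],
                       [-c, a]]) / (a * d - b * c)

print(inv_closed)
print(np.allclose(inv_closed, np.linalg.inv(matA)))  # True
```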

Solving Linear Equations#

Let’s assume we want to solve the following set of equations:

\[
\begin{align}
x + 3y + 5z &= 10 \\
2x + 5y + z &= 8 \\
2x + 3y + 8z &= 3
\end{align}
\]

which we can obviously cast into:

\[
\begin{pmatrix} x \\ y \\ z \end{pmatrix} =
\begin{pmatrix} 1 & 3 & 5 \\ 2 & 5 & 1 \\ 2 & 3 & 8 \end{pmatrix}^{-1}
\begin{pmatrix} 10 \\ 8 \\ 3 \end{pmatrix}
\]

# define the matrix and the right-hand-side vector
M = np.array([[1, 3, 5],
              [2, 5, 1],
              [2, 3, 8]])

v = np.array([10,
               8,
               3])

# and get the solution via the inverse of M and the dot product
x = np.dot( np.linalg.inv(M), v )

print( x )
[-9.28  5.16  0.76]
# check it

print( np.dot( M, x ) ) 
[10.  8.  3.]
# now, solve it via np.linalg.solve
x = np.linalg.solve(M, v)

print( x )
[-9.28  5.16  0.76]
# So, what's the difference?
# --> Performance / time!

# let's set up a large system of equations
M_big = np.arange(0,25,0.0001, float).reshape(500,500)
v_big = np.arange(50,0,-0.1).reshape(500,1)
# time the inverse version
%timeit x_big = np.dot( np.linalg.inv(M_big), v_big )
20.3 ms ± 119 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# time the solve version
%timeit x_big = np.linalg.solve(M_big, v_big)
8.24 ms ± 55.9 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
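Both routes of course give the same numbers; the point is that `solve()` never forms the explicit inverse, which makes it faster and numerically more robust:

```python
import numpy as np

M = np.array([[1, 3, 5],
              [2, 5, 1],
              [2, 3, 8]])
v = np.array([10, 8, 3])

# the inverse-based and the solver-based solution coincide
x_inv = np.dot(np.linalg.inv(M), v)
x_solve = np.linalg.solve(M, v)
print(np.allclose(x_inv, x_solve))  # True
```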

Eigenvalue Problems#

Let’s get all eigenvalues \(\lambda\) and eigenvectors \(\vec{v}\) of a square-matrix \(A\):

\[ A \vec{v}_i = \lambda_i \vec{v}_i \]
# define matrix A
A = np.array([[1, 4, 0],
              [0, 2, 7],
              [0, 5, 3]])

# use the linalg package to get lambda (l) and eigenvectors (v)
l, v = np.linalg.eig(A)

print( "eigenvalues:" )
print( l )
print( "eigenvectors:" )
print( v )
eigenvalues:
[ 1.         -3.43717104  8.43717104]
eigenvectors:
[[ 1.          0.57997208 -0.36809565]
 [ 0.         -0.64335883 -0.68439757]
 [ 0.          0.49972172 -0.62936918]]
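Note that the eigenvectors are the COLUMNS of v, not the rows; we can check each eigenpair individually:

```python
import numpy as np

A = np.array([[1, 4, 0],
              [0, 2, 7],
              [0, 5, 3]])
l, v = np.linalg.eig(A)

# the i-th column of v belongs to eigenvalue l[i]: A v_i = l_i v_i
for i in range(len(l)):
    print(np.allclose(np.dot(A, v[:, i]), l[i] * v[:, i]))  # True each time
```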

simple check via: \( A \vec{v}_i - \lambda_i \vec{v}_i = 0 \)

# let's check this:
print( np.dot(A, v) - l * v )
[[ 0.00000000e+00  2.22044605e-16  0.00000000e+00]
 [ 0.00000000e+00  0.00000000e+00 -8.88178420e-16]
 [ 0.00000000e+00  2.22044605e-16  0.00000000e+00]]

another check via: \( \vec{v}_i^{T} A \vec{v}_i = \lambda_i \) (this works because the eigenvectors returned by eig are normalized to unit length)

# let's check it again for a single eigenvalue
print( np.dot( np.dot(v[:,1].transpose(), A), v[:,1]) )
-3.437171043518959
# let's check it again ... for all eigenvalues
print( np.diag( np.dot( np.dot(v.transpose(), A), v) ) )
[ 1.         -3.43717104  8.43717104]
# now, we can also setup the characteristic polynomial
poly = np.poly(l)

print( poly )
[  1.  -6. -24.  29.]
# and get its roots
roots = np.roots( poly )

print( roots )
[ 8.43717104 -3.43717104  1.        ]
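Closing the loop: the roots of the characteristic polynomial are exactly the eigenvalues, up to ordering and round-off:

```python
import numpy as np

A = np.array([[1, 4, 0],
              [0, 2, 7],
              [0, 5, 3]])

# eigenvalues -> characteristic polynomial -> roots -> eigenvalues again
l = np.linalg.eigvals(A)
roots = np.roots(np.poly(l))

print(np.allclose(np.sort(roots), np.sort(l)))
```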