Linear Algebra - a Refresher


Mathematics is the basis of all engineering, and even more so of abstract disciplines like Machine Learning. Linear Algebra sits at the core, so a working understanding of it is essential before we proceed to Machine Learning itself.

Linear Algebra grew out of the need to work with linear equations. It gives us a compact way of representing systems of linear equations and formalizing their solution.

Consider the two equations below

2x + y = 10
2x - y = 2

One can trivially solve these two equations in two variables. That is high school algebra; we don't need any code for it. But what would we do if we were given 100,000 equations in 100,000 variables? Linear Algebra helps us here. Let us see how.

Of course, we do not have enough space here to write down the 100,000 equations. But we can use the above two equations to understand the concept. These two equations can be written in matrix form as

    [ 2   1 ] [ x ]   [ 10 ]
    [ 2  -1 ] [ y ] = [  2 ]

Essentially, we have represented the set of equations in the form

Ax = b       # Where A is a matrix; x and b are vectors.

This is the shorthand way of representing a set of linear equations - in the form of matrices.
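
To make this concrete, here is a minimal sketch using NumPy (the choice of library is our illustration, not part of the algebra) that builds A, x and b for the two equations above and verifies that Ax = b.

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [2.0, -1.0]])   # coefficient matrix
    x = np.array([3.0, 4.0])      # the known solution: x = 3, y = 4
    b = np.array([10.0, 2.0])     # right hand side

    print(np.allclose(A @ x, b))  # True: Ax reproduces b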

Notations


Reducing the representational size is not the only advantage of this. We will see below how it helps in computation. Before that, let us look into the notation.

  • A ∈ R^(m x n) => A is a matrix of m rows and n columns, and all its elements are real numbers.
  • x ∈ R^n => x is a vector with n entries, all of which are real numbers.
  • A_ij => the element of A at the ith row and jth column
  • x_i => the ith element of the vector x
  • A_i,: => the ith row of the matrix A
  • A_:,j => the jth column of the matrix A

Intuitively, we can think of a vector as a point in n-dimensional space, and a matrix A as an operation that maps a vector v1 in n-dimensional space to another vector v2 in m-dimensional space.

Important Concepts


Linear Algebra is a vast domain. In order to use it in Machine Learning, it is necessary to understand some basic concepts.

Matrix Multiplication


The concept of matrix multiplication is not so intuitive. Given two matrices A ∈ R^(m x n) and B ∈ R^(n x p), their product is the matrix C ∈ R^(m x p) where

    C_ij = ∑_k A_ik B_kj

Note the sizes of the three matrices. For the product A x B to exist, the number of columns in A must equal the number of rows in B. The product then has as many rows as A and as many columns as B.

There are four kinds of matrix products: matrix-matrix, matrix-vector, vector-matrix and vector-vector. Let us look at two special cases here:

Vector - Vector Product


Given two vectors x, y ∈ R^n, x^T y = ∑_i x_i y_i - the dot product of the two vectors - is a real number.

Matrix-Vector Product


Given a matrix A ∈ R^(m x n) and a vector x ∈ R^n, their product y = Ax is a vector in R^m. In other words, y is a linear combination of the columns of A, where the coefficients of the linear combination are given by the entries of x.
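
A small NumPy sketch (the matrix values are arbitrary, chosen only for illustration) showing that Ax is exactly the linear combination of the columns of A weighted by the entries of x:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])              # A is 3 x 2
    x = np.array([10.0, -1.0])              # x has 2 entries

    y = A @ x                                # matrix-vector product, y is in R^3
    combo = x[0] * A[:, 0] + x[1] * A[:, 1]  # the same thing, column by column
    print(np.allclose(y, combo))             # True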

It is interesting to note here that matrix multiplication

  • Is not commutative. A x B may not be equal to B x A. In fact B x A may not even exist.
  • But it is distributive. A x (B + C) = A x B + A x C
  • And it is also associative. A x (B x C) = (A x B) x C
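
These properties are easy to check numerically. Here is a small sketch with NumPy (the shapes and random values are arbitrary, chosen only for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((2, 3))
    B = rng.random((3, 4))
    C = rng.random((3, 4))

    # B @ A does not even exist here (the shapes do not align), so no commutativity.
    print(np.allclose(A @ (B + C), A @ B + A @ C))   # distributive: True
    D = rng.random((4, 5))
    print(np.allclose(A @ (B @ D), (A @ B) @ D))     # associative: True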

Identity Matrix


By definition, the multiplicative identity is a matrix I such that A x I = A. Here, I happens to be a matrix where all the diagonal elements are 1 and the others are 0. In more formal terms, for a matrix A ∈ R^(m x n), the identity matrix is the matrix I ∈ R^(n x n) such that I_ij = 1 if i = j and 0 otherwise.

In general, the size of an identity matrix is not important; it just has to be a square matrix (with an equal number of rows and columns) such that I_ij = 1 if i = j and 0 otherwise.
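
In NumPy, a quick illustrative check of A x I = A (the matrix here is arbitrary):

    import numpy as np

    A = np.arange(6.0).reshape(2, 3)   # a 2 x 3 matrix
    I = np.eye(3)                      # 3 x 3 identity
    print(np.allclose(A @ I, A))       # True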

Transpose


The transpose of a matrix is obtained by flipping it around its diagonal. In formal terms, for a matrix A ∈ R^(m x n), the transpose A^T is the matrix B ∈ R^(n x m) such that B_ij = A_ji.

Interesting properties of the transpose operation:

  • (A^T)^T = A
  • (A x B)^T = B^T x A^T
  • (A + B)^T = A^T + B^T
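
A short numerical check of these properties (random shapes, purely illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.random((2, 3))
    B = rng.random((3, 4))

    print(np.allclose((A @ B).T, B.T @ A.T))   # (AB)^T = B^T A^T
    print(np.allclose((A.T).T, A))             # (A^T)^T = A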

Symmetric Matrix


A symmetric matrix is one that is symmetric about its diagonal. In formal terms, a matrix A ∈ R^(n x n) is symmetric if A^T = A. And a matrix A ∈ R^(n x n) is anti-symmetric if A^T = -A.

Note that a matrix has to be square in order to be symmetric or anti-symmetric.

Observe that for any square matrix A, A + A^T is symmetric and A - A^T is anti-symmetric. Thus, any square matrix A = ((A + A^T) + (A - A^T))/2. That means any square matrix A can be expressed as the sum of a symmetric matrix and an anti-symmetric matrix.

This property is important for subsequent derivations. Symmetric matrices have very nice properties that are useful in real-life calculations. It is common to denote the set of symmetric n x n matrices by S^n. Thus, A ∈ S^n means that A is an n x n symmetric matrix.
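
The decomposition into symmetric and anti-symmetric parts is easy to verify. A minimal sketch with an arbitrary square matrix:

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.random((4, 4))

    S = (A + A.T) / 2                # symmetric part
    K = (A - A.T) / 2                # anti-symmetric part
    print(np.allclose(S, S.T))       # True
    print(np.allclose(K, -K.T))      # True
    print(np.allclose(A, S + K))     # True: A is their sum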

The Trace


The trace of a square matrix A ∈ R^(n x n), denoted by tr(A), is the sum of its diagonal elements.

    tr(A) = ∑_i A_ii

The matrix trace has some interesting properties:

  • tr(A) = tr(A^T)
  • tr(A + B) = tr(A) + tr(B)
  • tr(c x A) = c x tr(A) for any scalar c
  • If A x B is a square matrix (implying that B x A is also a square matrix), tr(AB) = tr(BA)
  • Extending this, for matrices A, B, C such that ABC is a square matrix, tr(ABC) = tr(BCA) = tr(CAB)
  • This is not limited to two or three matrices; it applies to cyclic permutations of a product of any number of matrices
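
A quick check of the cyclic property of the trace (random matrices, for illustration only):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.random((2, 3))
    B = rng.random((3, 2))

    # True, even though A @ B is 2 x 2 and B @ A is 3 x 3
    print(np.isclose(np.trace(A @ B), np.trace(B @ A)))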

Norm


We can think of the norm of a vector as the length of the segment from the origin to the point denoted by the vector - that is the Euclidean norm (or the L2 norm). It is denoted as

    ||x||_2 = sqrt(∑_i x_i^2)

Note that ||x||_2^2 - the square of the L2 norm - is the same as x^T x, the dot product of x with itself.

The L1 norm is defined as

    ||x||_1 = ∑_i |x_i|

Similarly, we can define the Lp norm as

    ||x||_p = (∑_i |x_i|^p)^(1/p)

If we let p go to infinity, the L-infinity norm simplifies to

    ||x||_∞ = max_i |x_i|

In fact, the norm is not limited to these Lp norms. We can define our own function. Any function f: R^n -> R can be used as a norm if and only if

  • f(x) >= 0 for all x [non-negativity]
  • f(x) = 0 if and only if x = 0 [definiteness]
  • For all x ∈ R^n and t ∈ R, f(tx) = |t|f(x) [homogeneity]
  • For all x ∈ R^n and y ∈ R^n, f(x + y) <= f(x) + f(y) [triangle inequality]

Norms are also defined for matrices. The Frobenius norm for matrices is defined as

    ||A||_F = (∑_i ∑_j A_ij^2)^(1/2)

It is interesting to note that this is also equal to (tr(A^T A))^(1/2).

Mathematicians have defined many other kinds of Norms. But these are the important ones we will need for machine learning.
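
All of these norms are available in NumPy through np.linalg.norm; a minimal sketch (the vectors and matrix are arbitrary examples):

    import numpy as np

    x = np.array([3.0, -4.0])
    print(np.linalg.norm(x, 1))        # L1 norm: 7.0
    print(np.linalg.norm(x, 2))        # L2 (Euclidean) norm: 5.0
    print(np.linalg.norm(x, np.inf))   # L-infinity norm: 4.0

    A = np.array([[1.0, 2.0], [3.0, 4.0]])
    print(np.linalg.norm(A, 'fro'))    # Frobenius norm
    print(np.sqrt(np.trace(A.T @ A)))  # same value via (tr(A^T A))^(1/2)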

Linear Independence & Rank


A set of n vectors {x_1, x_2, . . ., x_n} ⊂ R^n is called linearly independent if no vector in the set can be expressed as a linear combination of the others. Conversely, the set is called linearly dependent if at least one of them can be expressed as

    x_n = ∑_i α_i x_i

for some scalar values {α_1, α_2, . . ., α_(n-1)}, where the sum runs over i = 1 .. n-1.

The column rank of a matrix is the size of the largest subset of its columns that is linearly independent. Similarly, the row rank is the size of the largest subset of its rows that is linearly independent.

The rank can roughly be considered as the amount of information contained in the matrix. It can be proved that for any matrix A, the column rank is always equal to the row rank, so it is generally just referred to as the rank of the matrix. Some interesting properties of matrix ranks (see the sketch after this list):

  • For a matrix A ∈ R^(m x n), rank(A) <= min(m, n). If rank(A) = min(m, n), A is called a full rank matrix.
  • For any matrix A, rank(A) = rank(A^T)
  • For any matrices A ∈ R^(m x n) and B ∈ R^(n x p), rank(AB) <= min(rank(A), rank(B))
  • For any matrices A, B ∈ R^(m x n), rank(A + B) <= rank(A) + rank(B)
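
NumPy can compute the rank directly. A small sketch contrasting a full rank matrix with a rank-deficient one (the values are arbitrary illustrations):

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])           # 3 x 2, columns are independent
    B = np.array([[1.0, 2.0],
                  [2.0, 4.0]])           # second row is twice the first

    print(np.linalg.matrix_rank(A))      # 2 -> full rank (min(3, 2))
    print(np.linalg.matrix_rank(B))      # 1 -> rank deficient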

Inverse


The concept of the matrix inverse is very important for most linear algebra problems. Several mathematical libraries are dedicated to just calculating the inverse in an efficient way.

The inverse of a matrix A ∈ R^(n x n) is another matrix A^(-1) ∈ R^(n x n) such that

    A^(-1) A = A A^(-1) = I_n

The matrix inverse is unique; a matrix cannot have multiple inverses. Also, the inverse may not be defined for every matrix. It is certainly not defined for non-square matrices. Even among square matrices, the inverse may not be defined. Such a matrix is called a singular matrix.

A matrix is called non-singular or invertible if and only if A^(-1) exists. Otherwise it is singular or non-invertible. A non-singular or invertible matrix is always a full rank matrix.

For matrices A, B ∈ R^(n x n), we have

  • (A^(-1))^(-1) = A
  • (AB)^(-1) = B^(-1) A^(-1)
  • (A^(-1))^T = (A^T)^(-1). Hence it is also referred to as A^(-T)

For the linear equations we saw above,

       Ax = b
    => x = A^(-1) b

That makes the job pretty simple. Of course, for this we require that A is a full rank square matrix - that is, we have as many equations as variables and none of the equations is redundant. These are the necessary and sufficient conditions for the system of linear equations to have a unique solution.
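
Returning to the two equations from the beginning, a minimal NumPy sketch of x = A^(-1) b. In practice np.linalg.solve is preferred over forming the inverse explicitly, since it is faster and numerically more stable:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [2.0, -1.0]])
    b = np.array([10.0, 2.0])

    x = np.linalg.inv(A) @ b          # x = A^(-1) b  -> [3. 4.]
    x_better = np.linalg.solve(A, b)  # same answer, without forming A^(-1)
    print(x, x_better)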

Orthogonal & Orthonormal


For two vectors x, y ∈ R^n, x^T y can be considered the shadow of x on y (or vice-versa). The dot product is positive if x has a component in the direction of y (or y in the direction of x) and negative if x has a component in the direction opposite to y. In a sense, it is an indicator of the angle between them. The two vectors x, y ∈ R^n are orthogonal if this dot product is 0.

And a vector x ∈ R^n is called normalized if its L2 norm is 1. Two vectors are called orthonormal if both are normalized and they are orthogonal.

When we talk about matrices, the definition of orthogonality is slightly different - although intuitively, it means the same. A square matrix U ∈ R^(n x n) is orthogonal if all its columns are normalized and orthogonal to each other. From this definition, it follows that

    U U^T = I = U^T U

In other words, the transpose of an orthogonal matrix is also its inverse.

Note that we need a square matrix. If A ∈ R^(m x n) with m > n has orthonormal columns, A^T A will be I. But the rows of A are then m vectors living in an n-dimensional space with m > n, so they can never be orthonormal (e.g. three vectors in 2D space can never be mutually orthogonal). Thus A A^T is not I.

Thus, we need a square matrix for it to be orthogonal.

Orthogonal matrices have some interesting properties. For example, when we multiply a vector by an orthogonal matrix, its Euclidean (L2) norm remains unchanged. Thus, for an orthogonal matrix U ∈ R^(n x n),

    ||Ux||_2 = ||x||_2

for any x ∈ R^n.
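
A quick numerical illustration: rotation matrices are orthogonal, and multiplying by one leaves the L2 norm unchanged (the angle and vector below are arbitrary):

    import numpy as np

    theta = 0.7                                        # an arbitrary rotation angle
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])    # a 2D rotation matrix

    print(np.allclose(U @ U.T, np.eye(2)))             # True: U is orthogonal
    x = np.array([3.0, 4.0])
    print(np.linalg.norm(U @ x), np.linalg.norm(x))    # both 5.0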

Range & Null Space


The span of something refers to the extent of its reach. The span of a set of vectors {x_1, . . ., x_n} is the set of all vectors that can be expressed as a linear combination of them. For example, the vectors (0, 1) and (1, 0) span the entire two-dimensional space, but the vectors (1, 1) and (2, 2) span only one line in the two-dimensional space. Thus the span of the set {(1, 1), (2, 2)} is just a line, while the span of the set {(0, 1), (1, 0)} is the whole plane.

In formal terms,

    span({x_1, . . ., x_n}) = {v : v = ∑_i α_i x_i, α_i ∈ R}

If n vectors in R^n are linearly independent, their span is all of R^n.

The projection of a vector y ∈ R^m onto the span of n linearly independent vectors {x_1, . . ., x_n} ⊂ R^m (here, m > n) is the vector v ∈ span({x_1, . . ., x_n}) such that v is as close to y as possible.

Formally:

    proj(y; {x_1, . . ., x_n}) = argmin_(v ∈ span({x_1, . . ., x_n})) ||y - v||_2

The range R(A), also called the column space of a matrix, is essentially the span of its columns. For a matrix A ∈ R^(m x n), the range is defined as

    R(A) = {v ∈ R^m : v = Ax, x ∈ R^n}

This gives us some interesting properties.

If A is full rank and m > n, the projection of a vector y onto the range of A works out to be

    proj(y; A) = A (A^T A)^(-1) A^T y
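
A small sketch of this projection formula (the matrix and vector are arbitrary illustrations): project a vector in R^3 onto the plane spanned by two independent columns.

    import numpy as np

    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])                 # columns span the x-y plane in R^3
    y = np.array([2.0, 3.0, 5.0])

    proj = A @ np.linalg.inv(A.T @ A) @ A.T @ y
    print(proj)                                # [2. 3. 0.] - the z component is dropped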

Finally, the null space of a matrix A ∈ R^(m x n), denoted by N(A), is the set of all the vectors that give 0 when multiplied by A. That is:

    N(A) = {x ∈ R^n : Ax = 0}

Interestingly, R(A^T) and N(A) intersect only at the zero vector, and together they span the entire n-dimensional space: every vector in R^n can be written uniquely as the sum of a vector in R(A^T) and a vector in N(A).

Hence they are called orthogonal complements of each other.

Determinant


The determinant of a matrix, written |A| or det(A), can be considered as the "volume" of the space covered by its rows - the volume of the parallelepiped (a skewed hyper-cube) spanned by the row vectors a_i, taking all linear combinations whose coefficients lie between 0 and 1. That is, the volume covered by the set S such that

    S = {v ∈ R^n : v = ∑_i α_i a_i where 0 <= α_i <= 1}

The absolute value of the determinant is the volume covered by this set S.

Algebraically, for A ∈ R^(n x n), let A\i,\j ∈ R^((n-1) x (n-1)) be the matrix generated by discarding the ith row and jth column. Then the general recursive formula for the determinant is:

    |A| = ∑_i (-1)^(i+j) a_ij |A\i,\j|    for any fixed j in 1 .. n
        = ∑_j (-1)^(i+j) a_ij |A\i,\j|    for any fixed i in 1 .. n

The determinant is ugly to calculate. But people have developed efficient libraries that help us with this. It is important to understand the concept, and leave the algebra to the machines.

Intuitively, for small matrices we can think of the determinant as the difference between the two sets of products along the (wrapped) diagonals. For a 3x3 matrix, the determinant can be calculated as

    |A| = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32 - a_11 a_23 a_32 - a_12 a_21 a_33 - a_13 a_22 a_31

Note that this diagonal rule works only for 2x2 and 3x3 matrices; for higher orders we have to fall back on the recursive formula above (or, in practice, a library routine).
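
NumPy computes determinants with np.linalg.det. A minimal sketch, including a singular example whose determinant is 0 (the matrices are arbitrary illustrations):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [2.0, -1.0]])
    print(np.linalg.det(A))        # -4.0 = (2)(-1) - (1)(2)

    B = np.array([[1.0, 2.0],
                  [2.0, 4.0]])     # rows are linearly dependent
    print(np.linalg.det(B))        # 0.0 (up to floating point error)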

Quadratic Forms


Given a square matrix A ∈ R^(n x n) and a vector x ∈ R^n, the scalar value x^T A x is defined as the quadratic form. Note that:

    x^T A x = ∑_i x_i (Ax)_i = ∑_i x_i ∑_j A_ij x_j = ∑_i ∑_j A_ij x_i x_j

Also, since the value is a scalar,

    x^T A x = (x^T A x)^T = x^T A^T x = x^T ((A + A^T)/2) x

From this, we can conclude that only the symmetric part of A contributes to the quadratic form. Hence, we implicitly assume that matrices appearing in the quadratic form are symmetric.

Based on this, a symmetric matrix A ∈ S^n is

  • Positive Definite (PD) - if x^T A x > 0 for every non-zero vector x ∈ R^n - denoted by A > 0
  • Positive Semi-Definite (PSD) - if x^T A x >= 0 for every vector x ∈ R^n - denoted by A >= 0
  • Negative Definite (ND) - if x^T A x < 0 for every non-zero vector x ∈ R^n - denoted by A < 0
  • Negative Semi-Definite (NSD) - if x^T A x <= 0 for every vector x ∈ R^n - denoted by A <= 0
  • Indefinite if it is none of the above - there exist x_1 and x_2 such that x_1^T A x_1 > 0 and x_2^T A x_2 < 0.

Obviously, if A is positive definite, -A is negative definite; and so on. An important property of positive definite and negative definite matrices is that they are always full rank and invertible.

The Gram matrix presents an interesting case. It is defined as any matrix that can be expressed as G = A^T A for some A ∈ R^(m x n). Note that A can be any matrix; it need not be square. Any Gram matrix G is always positive semi-definite. Further, if m >= n and A is full rank, G = A^T A is positive definite.
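
A sketch verifying that a Gram matrix is positive semi-definite - we evaluate the quadratic form x^T G x for a few random vectors (A and the test vectors are arbitrary illustrations):

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.random((5, 3))
    G = A.T @ A                       # a Gram matrix, 3 x 3 and symmetric

    for _ in range(5):
        x = rng.standard_normal(3)
        print(x @ G @ x >= 0)         # True every time: x^T G x = ||Ax||^2 >= 0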

Eigen Vectors & Eigen Values


Given a square matrix A ∈ R^(n x n), we say that λ ∈ C is an eigenvalue and x ∈ C^n is the corresponding eigenvector if Ax = λx for x ≠ 0. Note that λ and x need not be real; they can be complex.

Intuitively, if we multiply the matrix by one of its eigenvectors x, the result points in the same direction as x, but scaled by the factor λ.

Note that for any non-zero scalar c, if x is an eigenvector, cx is also an eigenvector for the same eigenvalue. Thus, when we refer to the eigenvector for the eigenvalue λ, we mean the one that is normalized to length 1. This still leaves an ambiguity between x and -x, but that is not a major problem.

We can rewrite the above equation as

    (λI - A)x = 0

Interestingly, this means that (λI - A) has a non-trivial null space (it maps a non-zero vector to 0) - implying that it is a singular matrix - implying that its determinant is 0. Thus, we have

    |(λI - A)| = 0.

We can use the definition of the determinant to expand this into an nth order polynomial equation in λ (the characteristic polynomial), solve it to get n (possibly complex) values of λ, and then solve (λI - A)x = 0 for each λ to find the corresponding eigenvectors.

Of course, solving an nth order polynomial is no joke for large n. There are better numerical ways of solving this problem, but this is the simplest way to sufficiently illustrate the concept. Some interesting properties, for A ∈ R^(n x n):

  • The trace of A is equal to the sum of its eigenvalues
  • The determinant of A is equal to the product of its eigenvalues.
  • The rank of A is equal to the number of non-zero eigenvalues.
  • If A is invertible, 1/λ_i are the eigenvalues of A^(-1)
  • The eigenvalues of a diagonal matrix are the individual diagonal elements.

We can write the entire eigenvector equation as

    AX = XΛ

where the columns of X are the eigenvectors and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues. If the eigenvectors of A are linearly independent, then X is invertible. In that case, we can rewrite the above equation as

    A = X Λ X^(-1)

A matrix that can be written in this form is called diagonalizable.
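
A minimal sketch using np.linalg.eig (the matrix is an arbitrary illustration): compute the eigenvalues and eigenvectors, then verify the diagonalization A = X Λ X^(-1) and the trace/determinant properties listed above.

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    evals, X = np.linalg.eig(A)        # eigenvalues and (column) eigenvectors

    Lam = np.diag(evals)
    print(np.allclose(A, X @ Lam @ np.linalg.inv(X)))   # True: A = X Λ X^(-1)
    print(np.isclose(np.trace(A), evals.sum()))         # trace = sum of eigenvalues
    print(np.isclose(np.linalg.det(A), evals.prod()))   # det = product of eigenvalues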