Mathematical Immaturity

2.20 Exercises

11. $\quad$This exercise tells how to determine all nonsingular $2 \times 2$ matrices. Prove that \begin{align*} \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} &= (ad - bc)I. \end{align*} Deduce that $\begin{bmatrix}a & b \\ c & d\end{bmatrix}$ is nonsingular if and only if $ad - bc \neq 0,$ in which case its inverse is \begin{align*} \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}. \end{align*}

Proof. $\quad$ By definition, the product $\begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix} d & -b \\ -c & a\end{bmatrix}$ is the $2 \times 2$ matrix: \begin{align*} \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} &= \begin{bmatrix} ad - bc & -ab + ba \\ cd - dc & -bc + da \end{bmatrix}. \end{align*} Then, because $a, b, c, d$ are scalars, addition and multiplication are commutative, giving us \begin{align*} \begin{bmatrix} ad - bc & -ab + ba \\ cd - dc & -bc + da \end{bmatrix} &= \begin{bmatrix} ad - bc & ab - ab \\ cd - cd & ad -bc \end{bmatrix} \\ &= \begin{bmatrix} ad - bc & 0\\ 0 & ad - bc \end{bmatrix} \\ &= (ad - bc)I. \end{align*}
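As a quick sanity check of this identity (separate from the proof itself), we can multiply the two matrices for arbitrary sample entries. The helper function below is my own illustrative sketch, not part of the exercise:

```python
def mat2_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists [[x11, x12], [x21, x22]]."""
    return [[X[0][0]*Y[0][0] + X[0][1]*Y[1][0], X[0][0]*Y[0][1] + X[0][1]*Y[1][1]],
            [X[1][0]*Y[0][0] + X[1][1]*Y[1][0], X[1][0]*Y[0][1] + X[1][1]*Y[1][1]]]

a, b, c, d = 3, 7, 2, 5                  # arbitrary sample entries
A   = [[a, b], [c, d]]
adj = [[d, -b], [-c, a]]                 # the companion matrix from the exercise
det = a*d - b*c

# The product is (ad - bc) I, exactly as the computation above shows.
assert mat2_mul(A, adj) == [[det, 0], [0, det]]
```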

$\quad$ If $\begin{bmatrix}a & b \\ c & d\end{bmatrix}$ is nonsingular, we can apply the Gauss-Jordan elimination process to determine its inverse. First, we initialize an augmented matrix as follows: \begin{align*} \begin{bmatrix} \begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array} \end{bmatrix} \end{align*} To determine the inverse, we wish to transform the $2 \times 2$ matrix to the left of the vertical line into the identity matrix. We can do so by using the three basic types of row operations:

$\quad$ (1) Interchanging two rows;
$\quad$ (2) Multiplying all the entries of a row by a nonzero scalar;
$\quad$ (3) Adding a multiple of one row to another.

The transformation carries on as follows. Multiplying row 1 by $c$ and row 2 by $a$ (assume for the moment that $a \neq 0$ and $c \neq 0$; the degenerate cases are handled at the end) gives \begin{align*} \begin{bmatrix} \begin{array}{cc|cc} ac & bc & c & 0 \\ ac & ad & 0 & a \end{array} \end{bmatrix} \end{align*} and subtracting row 1 from row 2 gives \begin{align*} \begin{bmatrix} \begin{array}{cc|cc} ac & bc & c & 0 \\ 0 & ad - bc & -c & a \end{array} \end{bmatrix} \end{align*} It is at this point we find that if $ad - bc = 0,$ then the left half of the bottom row is all zeros, so no further row operations can carry the left block to the identity; the elimination process ends and $\begin{bmatrix}a & b \\ c & d\end{bmatrix}$ cannot be inverted, contradicting nonsingularity. Thus, if $\begin{bmatrix}a & b \\ c & d\end{bmatrix}$ is nonsingular, then $ad - bc \neq 0.$ (If $a = 0$ or $c = 0,$ the scaling step above is not a legal row operation, but these cases can be checked directly: when $c = 0$ the matrix is upper triangular and is nonsingular exactly when $ad = ad - bc \neq 0,$ and when $a = 0$ we first interchange the two rows and argue the same way with $-bc = ad - bc.$)
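To see the degenerate bottom row concretely, the short check below (illustrative only; the sample entries are my own) replays the two elimination steps on a matrix with $ad - bc = 0$:

```python
a, b, c, d = 1, 2, 2, 4                     # sample entries with ad - bc = 0
assert a*d - b*c == 0

# Replay the elimination: scale row 1 by c, scale row 2 by a,
# then subtract row 1 from row 2.
row1 = [a*c, b*c, c, 0]
row2 = [a*c, a*d, 0, a]
row2 = [x - y for x, y in zip(row2, row1)]

# The left half of the bottom row vanishes, so the left block
# can never be reduced to the identity.
assert row2[:2] == [0, 0]
assert row2 == [0, a*d - b*c, -c, a]
```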

$\quad$ Now, assume that $ad - bc \neq 0.$ The elimination process continues as follows. Dividing row 2 by $ad - bc$ gives \begin{align*} \begin{bmatrix} \begin{array}{cc|cc} ac & bc & c & 0 \\ 0 & 1 & \frac{-c}{ad - bc} & \frac{a}{ad - bc} \end{array} \end{bmatrix} \end{align*} subtracting $bc$ times row 2 from row 1 gives \begin{align*} \begin{bmatrix} \begin{array}{cc|cc} ac & 0 & \frac{acd}{ad - bc} & \frac{-abc}{ad - bc} \\ 0 & 1 & \frac{-c}{ad - bc} & \frac{a}{ad - bc} \end{array} \end{bmatrix} \end{align*} and, finally, dividing row 1 by $ac$ gives \begin{align*} \begin{bmatrix} \begin{array}{cc|cc} 1 & 0 & \frac{d}{ad - bc} & \frac{-b}{ad - bc} \\ 0 & 1 & \frac{-c}{ad - bc} & \frac{a}{ad - bc} \end{array} \end{bmatrix} \end{align*} Since the right block records the result of applying the same row operations to $I,$ the inverse of $\begin{bmatrix}a & b \\ c & d\end{bmatrix}$ is given by \begin{align*} \frac{1}{ad - bc} \begin{bmatrix} \begin{array}{cc} d & -b \\ -c & a \end{array} \end{bmatrix}. \quad \blacksquare \end{align*}
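The closed-form inverse can be packaged as a short routine. The function name and error handling below are my own choices, not part of the exercise; exact rational arithmetic is used so the check is not clouded by floating-point error:

```python
from fractions import Fraction

def invert2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the adjugate formula derived above.

    Raises ValueError when ad - bc = 0, i.e. when the matrix is singular.
    """
    det = Fraction(a*d - b*c)
    if det == 0:
        raise ValueError("matrix is singular: ad - bc = 0")
    return [[Fraction(d) / det, Fraction(-b) / det],
            [Fraction(-c) / det, Fraction(a) / det]]

# Sanity check: multiply A by invert2x2(A) and recover the identity.
a, b, c, d = 3, 7, 2, 5                      # arbitrary sample entries, det = 1
inv = invert2x2(a, b, c, d)
assert [[a*inv[0][0] + b*inv[1][0], a*inv[0][1] + b*inv[1][1]],
        [c*inv[0][0] + d*inv[1][0], c*inv[0][1] + d*inv[1][1]]] == [[1, 0], [0, 1]]
```

Raising on a zero determinant mirrors the proof exactly: the formula is defined precisely when $ad - bc \neq 0.$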