Mathematical Immaturity

2.19 Inverses of square matrices

$\quad$ Let $A = (a_{ij})$ be a square $n \times n$ matrix. If there is another $n \times n$ matrix $B$ such that $BA = I,$ where $I$ is the $n \times n$ identity matrix, then $A$ is called nonsingular and $B$ is called a left inverse of $A.$ If we choose the usual basis of unit coordinate vectors in $V_n$ and let $T: V_n \to V_n$ be the linear transformation with matrix $m(T) = A,$ then we have the following:
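$\quad$ For a concrete illustration of the definition (the matrices here are our own example, not from the text), take $n = 2$ with \begin{align*} A = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & -1 \\ -1 & 2 \end{bmatrix}. \end{align*} Then \begin{align*} BA = \begin{bmatrix} 1 \cdot 2 + (-1) \cdot 1 & 1 \cdot 1 + (-1) \cdot 1 \\ (-1) \cdot 2 + 2 \cdot 1 & (-1) \cdot 1 + 2 \cdot 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I, \end{align*} so $A$ is nonsingular and $B$ is a left inverse of $A.$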

$\quad$ Theorem 2.20. $\quad$ The matrix $A$ is nonsingular if and only if $T$ is invertible. If $BA = I,$ then $B = m(T^{-1}).$

$\quad$ Proof. $\quad$ Assume $A = (a_{ij})_{i,j = 1}^{n, n}$ is nonsingular. To prove that this implies $T$ is invertible, we will show that if $T(x) = O,$ then $x = O$ and use the result of Theorem 2.10.

$\quad$ If $x = (x_1, \dots, x_n)$ is an element of $V_n$ such that $T(x) = O,$ we can expand $T(x)$ as follows: \begin{align*} T(x) &= \sum_{k=1}^n x_kT(e_k) \\ &= \sum_{k=1}^n x_k \sum_{i=1}^n a_{ik} e_i \\ &= \sum_{i=1}^n\left(\sum_{k=1}^n a_{ik}x_k\right) e_i \\ &= O \end{align*} For $i = 1, \dots, n,$ the sum $\sum_{k=1}^n a_{ik}x_k$ is the matrix product of the $i^{th}$ row of $A,$ $(a_{i1}, \dots, a_{in}),$ with the $n \times 1$ column matrix $X = \begin{bmatrix}x_1 \\ \vdots \\ x_n\end{bmatrix}.$ In other words, the components of $T(x)$ are the entries of the matrix product $AX,$ so \begin{align*} AX &= \begin{bmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \dots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \\ &= \begin{bmatrix} \sum_{k=1}^n a_{1k}x_k \\ \vdots \\ \sum_{k=1}^n a_{nk}x_k \end{bmatrix} \\ &= O \end{align*} Because $A$ is nonsingular, there exists an $n \times n$ matrix $B$ such that $BA = I.$ Applying $B$ to the equation $AX = O$ gives \begin{align*} B(AX) &= \begin{bmatrix} b_{11} & \dots & b_{1n} \\ \vdots & & \vdots \\ b_{n1} & \dots & b_{nn} \end{bmatrix} \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix} \\ &= O \end{align*} On the other hand, since $BA = I,$ \begin{align*} (BA)X &= \begin{bmatrix} 1 & \dots & 0 \\ \vdots & & \vdots \\ 0 & \dots & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \\ &= \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} \\ &= X \end{align*} By the associative law of matrix multiplication, $(BA)X = B(AX),$ so $X = O.$ Hence $T(x) = O$ implies $x = O,$ which by Theorem 2.10 proves that $T$ is invertible whenever $A$ is nonsingular.
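$\quad$ The pivotal step of this argument, $X = (BA)X = B(AX),$ can be checked numerically. The following is a minimal sketch in pure Python; the matrices $A$ and $B$ and the vector $x$ are illustrative choices of ours, with $BA = I.$

```python
# Minimal sketch (illustrative values): the proof's key identity
# x = (BA)x = B(Ax), so Ax = O can only happen when x = O.

def matvec(M, v):
    """Multiply an n x n matrix (list of rows) by a length-n column vector."""
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

A = [[2.0, 1.0], [1.0, 1.0]]    # nonsingular: det A = 1
B = [[1.0, -1.0], [-1.0, 2.0]]  # a left inverse of A, i.e. BA = I

x = [3.0, -7.0]                 # an arbitrary vector in V_2
Ax = matvec(A, x)
# B(Ax) recovers x, so if Ax were the zero vector, x would be B(O) = O.
```

Because $B$ undoes $A$ on every vector, the null space of $T$ is trivial, which is exactly the hypothesis needed to apply Theorem 2.10.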

$\quad$ Now, assume that $T$ is invertible. Then there exists a transformation $T^{-1}$ such that $T^{-1}T = I_V,$ where $I_V$ is the identity transformation on $V_n.$ Applying $m$ to this equation gives $m(T^{-1}T) = m(I_V) = I,$ where $I$ is the $n \times n$ identity matrix defined as before. But as we showed in Theorem 2.16, $m(T^{-1}T) = m(T^{-1})m(T),$ and because $m(T) = A,$ we have \begin{align*} m(T^{-1})m(T) &= m(T^{-1})A = I \end{align*} Thus, setting $B = m(T^{-1})$ gives a matrix $B$ such that $BA = I.$ Hence, $A$ is nonsingular.

$\quad$ Finally, if $BA = I,$ then $A$ is nonsingular, and thus, by the first part of the proof, $T$ is invertible. As such, there is a transformation $T^{-1}$ such that $TT^{-1} = I_V.$ Applying $m$ to both sides gives $m(T)m(T^{-1}) = I,$ and since $A = m(T),$ we have $Am(T^{-1}) = I.$ Multiplying on the left by $B$ and using the associative law, we find $B = B\left(Am(T^{-1})\right) = (BA)m(T^{-1}) = m(T^{-1}). \quad \blacksquare$

$\quad$ All of the properties of invertible linear transformations have their counterparts for nonsingular matrices. In particular, left inverses (if they exist) are unique, and every left inverse is also a right inverse. In other words, if $A$ is nonsingular and $BA = I,$ then $AB = I.$ We call $B$ the inverse of $A$ and denote it by $A^{-1}.$ The inverse $A^{-1}$ is also nonsingular and its inverse is $A.$
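$\quad$ The claim that a left inverse is automatically a right inverse can be verified on a concrete example. The sketch below uses pure Python; the $2 \times 2$ matrices are illustrative choices of ours.

```python
# Sketch (illustrative matrices): a left inverse of a square matrix
# is also a right inverse, so B is the two-sided inverse A^{-1}.

def matmul(P, Q):
    """Multiply two n x n matrices given as lists of rows."""
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1.0, 2.0], [3.0, 5.0]]    # nonsingular: det A = -1
B = [[-5.0, 2.0], [3.0, -1.0]]  # chosen so that BA = I

I2 = [[1.0, 0.0], [0.0, 1.0]]
# Both BA and AB equal the identity matrix I2.
```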

$\quad$ Now, we show that the problem of actually determining the entries of the inverse of a nonsingular matrix is equivalent to solving $n$ separate nonhomogeneous linear systems. Let $A = (a_{ij})$ be nonsingular and let $A^{-1} = (b_{ij})$ be its inverse. The entries of $A$ and $A^{-1}$ are related by the $n^2$ equations \begin{align*} \sum_{k=1}^n a_{ik}b_{kj} &= \delta_{ij}, \end{align*} where $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \neq j.$ For each fixed choice of $j,$ we can regard this as a nonhomogeneous system of $n$ linear equations in the $n$ unknowns $b_{1j}, b_{2j}, \dots, b_{nj},$ whose right-hand side is the $j^{th}$ unit coordinate vector. Since $A$ is nonsingular, each of these systems has a unique solution, namely the $j^{th}$ column of $A^{-1}.$
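$\quad$ This column-by-column description translates directly into an algorithm: solve $AX = e_j$ for $j = 1, \dots, n$ and assemble the solutions as the columns of $A^{-1}.$ The sketch below is a minimal pure-Python implementation using Gaussian elimination with partial pivoting; the sample matrix is an illustrative choice of ours.

```python
# Sketch: compute A^{-1} by solving n nonhomogeneous systems A x = e_j,
# one per column of the inverse, via Gaussian elimination.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    # Build an augmented copy so the inputs are left untouched.
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot entry.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][k] * x[k] for k in range(i + 1, n))) / M[i][i]
    return x

def inverse(A):
    """The j-th column of A^{-1} solves A x = e_j (delta_{ij} right-hand side)."""
    n = len(A)
    cols = [solve(A, [1.0 if i == j else 0.0 for i in range(n)])
            for j in range(n)]
    # Each solution vector is a column of the inverse, so transpose.
    return [[cols[j][i] for j in range(n)] for i in range(n)]

A = [[2.0, 1.0], [5.0, 3.0]]  # det A = 1, so A is nonsingular
B = inverse(A)
```

Each call to `solve` handles one of the $n$ nonhomogeneous systems; for the sample matrix the exact inverse is $\begin{bmatrix} 3 & -1 \\ -5 & 2 \end{bmatrix}.$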