- Calculus, Volume 2: Multi-Variable Calculus and Linear Algebra with Applications to Differential Equations and Probability
- Tom M. Apostol
- Second Edition
- 1991
- 978-1-119-49676-2
3.4 $\quad$ Computation of determinants
$\quad$ This section contains examples of computing determinants using only the axioms and properties given in the previous section, assuming that the determinants exist. Of note, Axiom 4 (determinant of the identity matrix) is not used until the last step in each example.
$\quad$ Example 1. $\quad$ Determinant of a $2 \times 2$ matrix. $\quad$ We will now prove that \begin{align*} (3.4) \qquad \det \begin{bmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{bmatrix} &= a_{11}a_{22} - a_{12}a_{21}. \end{align*} First, we represent the rows of the matrix as linear combinations of the unit coordinate vectors $\mathbf{i} = (1,0)$ and $\mathbf{j} = (0, 1):$ \begin{align*} A_1 &= (a_{11}, a_{12}) = a_{11}\,\mathbf{i} + a_{12}\,\mathbf{j}, \\ A_2 &= (a_{21}, a_{22}) = a_{21}\,\mathbf{i} + a_{22}\,\mathbf{j}, \end{align*} so that \begin{align*} \det \begin{bmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{bmatrix} &= d(A_1, A_2). \end{align*} Then, applying additivity and homogeneity in each row, we have \begin{align*} d(A_1, A_2) &= d(a_{11}\,\mathbf{i}, A_2) + d(a_{12}\,\mathbf{j}, A_2) \\ &= a_{11}[d(\mathbf{i}, a_{21}\,\mathbf{i}) + d(\mathbf{i}, a_{22}\,\mathbf{j})] + a_{12}[d(\mathbf{j}, a_{21}\,\mathbf{i}) + d(\mathbf{j}, a_{22}\,\mathbf{j})] \\ &= a_{11}a_{21}\,d(\mathbf{i}, \mathbf{i}) + a_{11}a_{22}\,d(\mathbf{i},\mathbf{j}) + a_{12}a_{21}\,d(\mathbf{j}, \mathbf{i}) + a_{12}a_{22}\,d(\mathbf{j},\mathbf{j}). \end{align*} By property (d) of the previous section, any determinant with two equal rows vanishes, so $d(\mathbf{i}, \mathbf{i}) = d(\mathbf{j},\mathbf{j}) = 0,$ and property (c) gives $d(\mathbf{j}, \mathbf{i}) = -d(\mathbf{i}, \mathbf{j}).$ Noting that $d(\mathbf{i}, \mathbf{j}) = \det I,$ we get \begin{align*} d(A_1, A_2) &= a_{11}a_{22}\det I - a_{12}a_{21}\det I. \end{align*} But by Axiom 4, $\det I = 1$ for the $n \times n$ identity matrix $I.$ Thus, \begin{align*} \det \begin{bmatrix}a_{11} & a_{12} \\ a_{21} & a_{22}\end{bmatrix} &= a_{11}a_{22} - a_{12}a_{21}. \quad \blacksquare \end{align*}
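As a quick numerical check of formula (3.4), the following sketch (the helper name is illustrative, not from the text) computes the $2 \times 2$ determinant directly:

```python
# Direct implementation of formula (3.4): the determinant of the
# 2 x 2 matrix [[a11, a12], [a21, a22]] is a11*a22 - a12*a21.
# Illustrative helper, not from the text.

def det2(a11, a12, a21, a22):
    return a11 * a22 - a12 * a21

print(det2(1, 2, 3, 4))   # 1*4 - 2*3 = -2
print(det2(1, 0, 0, 1))   # identity matrix: 1
```

Note that swapping the two rows negates the result, and equal rows give zero, matching properties (c) and (d).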
$\quad$ Example 2. $\quad$ Determinant of a diagonal matrix. $\quad$ A square matrix of the form \begin{align*} A &= \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix} \end{align*} is called a diagonal matrix. We will now prove that the determinant of a diagonal matrix is the product of its diagonal entries: \begin{align*} (3.5) \qquad \det A &= a_{11}a_{22} \cdots a_{nn}. \end{align*} Let $I$ be the $n \times n$ identity matrix and let $I_k$ denote the $k^{th}$ row of $I.$ Then the $k^{th}$ row of $A$ is a scalar multiple of $I_k,$ namely $A_k = a_{kk} I_k.$ Applying homogeneity one row at a time, we get \begin{align*} \det A &= d(A_1, A_2, \dots, A_n) \\ &= a_{11}\,d(I_1, A_2, \dots, A_n) \\ &= a_{11}a_{22}\,d(I_1, I_2, A_3, \dots, A_n) \\ &\;\;\vdots \\ &= (a_{11}a_{22} \cdots a_{nn})\,d(I_1, I_2, \dots, I_n). \end{align*} But since $I_1, \dots, I_n$ are the rows of $I,$ we have $d(I_1, I_2, \dots, I_n) = \det I = 1,$ giving us \begin{align*} \det A &= a_{11}a_{22} \cdots a_{nn}. \quad \blacksquare \end{align*}
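Formula (3.5) reduces the diagonal case to a simple product, which this sketch (the helper name is illustrative) implements directly:

```python
# Formula (3.5): the determinant of a diagonal matrix is the
# product of its diagonal entries. Illustrative helper name.

def det_diagonal(diag):
    result = 1
    for a in diag:
        result *= a
    return result

print(det_diagonal([2, 3, 5]))   # 2*3*5 = 30
print(det_diagonal([4]))         # 1x1 case: 4
```

If any diagonal entry is zero the product, and hence the determinant, is zero, consistent with the matrix then having a zero row.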
$\quad$ Example 3. $\quad$ Determinant of an upper triangular matrix. $\quad$ An $n \times n$ square matrix of the form \begin{align*} U &= \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{bmatrix} \end{align*} is called an upper triangular matrix; all the elements below the main diagonal are zero. We will now prove that the determinant of an upper triangular matrix is the product of its diagonal elements, \begin{align*} \det U &= u_{11}u_{22} \cdots u_{nn}. \end{align*} First, we prove that if any diagonal element of $U$ is zero, then $\det U = 0.$ If $u_{nn} = 0,$ then the $n^{th}$ row is the zero vector and thus $\det U = 0.$ Now suppose $u_{kk} = 0$ for some $k < n.$ Then each of the $n - k + 1$ rows $U_k, U_{k+1}, \dots, U_n$ has zeros in its first $k$ entries, so these rows lie in a subspace of dimension $n - k$ and are therefore dependent. Since a determinant with dependent rows vanishes, $\det U = 0.$
$\quad$ Now assume all the diagonal elements of $U$ are nonzero. We can write each row of $U$ as $U_k = V_k + V'_k,$ where $V_k = u_{kk} I_k$ and $V'_k = U_k - V_k.$ For example, when $k = 1,$ we have \begin{align*} V_1 &= (u_{11}, 0, \dots, 0), \qquad V'_1 = (0, u_{12}, \dots, u_{1n}). \end{align*} Then, by additivity in the first row, we have \begin{align*} \det U &= d(V_1, U_2, \dots, U_n) + d(V'_1, U_2, \dots, U_n). \end{align*} But the matrix with rows $V'_1, U_2, \dots, U_n$ is upper triangular with its first diagonal element zero, so $d(V'_1, U_2, \dots, U_n) = 0$ by the first part of this example. Doing the same for $k = 2,$ with \begin{align*} V_2 &= (0, u_{22}, 0, \dots, 0), \qquad V'_2 = (0, 0, u_{23}, \dots, u_{2n}), \end{align*} we get \begin{align*} \det U &= d(V_1, V_2, U_3, \dots, U_n) + d(V_1, V'_2, U_3, \dots, U_n) \\ &= d(V_1, V_2, U_3, \dots, U_n), \end{align*} since the matrix with rows $V_1, V'_2, U_3, \dots, U_n$ is again upper triangular with a zero diagonal element. Repeating this process for the remaining $n - 2$ rows, we find that \begin{align*} \det U &= d(V_1, \dots, V_n) \\ &= u_{11}\cdots u_{nn}\,d(I_1, \dots, I_n) \\ &= u_{11}\cdots u_{nn}. \quad \blacksquare \end{align*}
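The conclusion of Example 3 can be checked numerically; this sketch (an illustrative helper, not from the text) multiplies the diagonal of an upper triangular matrix given as a list of rows:

```python
# Per Example 3: the determinant of an upper triangular matrix
# equals the product of its diagonal elements. Illustrative helper.

def det_upper_triangular(U):
    result = 1
    for k in range(len(U)):
        result *= U[k][k]
    return result

U = [[2, 7, 1],
     [0, 3, 4],
     [0, 0, 5]]
print(det_upper_triangular(U))   # 2*3*5 = 30
```

The entries above the diagonal never enter the computation, which reflects the proof: every $V'_k$ contributes a vanishing determinant.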
$\quad$ Example 4. $\quad$ Computation by the Gauss-Jordan process. $\quad$ We can also apply the Gauss-Jordan elimination process to a square matrix $A$ in order to transform it into an upper triangular matrix $U,$ from which we can calculate the determinant as outlined in Example 3. We recall that the process involves the application of three row operations:
$\quad$ (1) $\quad$ Interchanging two rows;
$\quad$ (2) $\quad$ Multiplying all the entries of a row by a nonzero scalar;
$\quad$ (3) $\quad$ Adding to one row a scalar multiple of another.
$\quad$ When (1) is applied to a matrix $A,$ the sign of its determinant changes. When (2) is applied to $A$ with a nonzero scalar $c,$ the determinant is multiplied by $c.$ When (3) is applied, the determinant remains unchanged. Thus, if $U$ is the upper triangular matrix reached by applying the elimination process to $A,$ using $p$ row interchanges and $q$ multiplications by nonzero scalars $c_1, c_2, \dots, c_q,$ we get \begin{align*} \det A &= (-1)^p (c_1c_2\cdots c_q)^{-1} \det U. \end{align*}
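The bookkeeping above can be sketched in code. The version below (an illustrative implementation, not from the text) uses only operations (1) and (3), so $q = 0$ and $\det A = (-1)^p \det U,$ with $\det U$ obtained as the product of the diagonal elements per Example 3. Exact rational arithmetic avoids rounding error.

```python
from fractions import Fraction

def det_by_elimination(A):
    """Reduce A to upper triangular form with row interchanges (1)
    and row additions (3), tracking the sign change from each swap.
    Illustrative sketch, not from the text."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    sign = 1
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)          # dependent rows: det A = 0
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign                # operation (1) flips the sign
        # Operation (3): clear the column below the pivot;
        # this leaves the determinant unchanged.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    det = Fraction(sign)                # (-1)^p
    for k in range(n):
        det *= M[k][k]                  # times det U (Example 3)
    return det

print(det_by_elimination([[0, 2], [3, 4]]))   # one swap: -(0*4 - ...) = -6
```

For $[[0, 2], [3, 4]]$ a single interchange is needed ($p = 1$), and the result $-6$ agrees with formula (3.4): $0 \cdot 4 - 2 \cdot 3 = -6.$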