Mathematical Immaturity

2.10 Matrix representations of linear transformations

$\quad$ Theorem 2.12 shows that a linear transformation $T: V \to W$ of a finite-dimensional linear space $V$ is completely determined by its action on a given set of basis elements $e_1, \dots, e_n.$ Now, suppose the space $W$ is also finite-dimensional, say $\dim W = m,$ and let $w_1, \dots, w_m$ be a basis for $W.$ (The dimensions $n$ and $m$ need not be equal.) Since $T$ has values in $W,$ each element $T(e_k)$ can be expressed uniquely as a linear combination of the basis elements $w_1, \dots, w_m,$ say \begin{align*} T(e_k) &= \sum_{i=1}^m t_{ik}w_i \end{align*} where $t_{1k}, \dots, t_{mk}$ are the components of $T(e_k)$ relative to the ordered basis $(w_1, \dots, w_m).$ We will display this $m$-tuple as follows: \begin{align*} (2.9) \qquad \begin{bmatrix} t_{1k} \\ t_{2k} \\ \vdots \\ t_{mk} \end{bmatrix} \end{align*} This array is called a column vector or column matrix. We have such a column vector for each of the $n$ elements $T(e_1), \dots, T(e_n).$ We place them side by side and enclose them in one pair of brackets to obtain the following rectangular array: \begin{align*} \begin{bmatrix} t_{11} & t_{12} & \cdots & t_{1n} \\ t_{21} & t_{22} & \cdots & t_{2n} \\ \vdots & \vdots & & \vdots \\ t_{m1} & t_{m2} & \cdots & t_{mn} \end{bmatrix} \end{align*} This array is called a matrix consisting of $m$ rows and $n$ columns. We call it an $m$ by $n$ matrix, or an $m \times n$ matrix. The first row is the $1 \times n$ matrix $(t_{11}, t_{12}, \dots, t_{1n}).$ The $m \times 1$ matrix displayed in $(2.9)$ is the $k$th column. The scalars $t_{ik}$ are indexed so the first subscript $i$ indicates the row, and the second subscript $k$ indicates the column in which $t_{ik}$ occurs. The more compact notation \begin{align*} (t_{ik}), \qquad \text{or} \qquad (t_{ik})_{i, k = 1}^{m, n} \end{align*} is also used to denote the matrix whose $ik$-entry is $t_{ik}.$
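$\quad$ The column-by-column construction above can be sketched numerically. The following is an illustrative example only: the map $T(x, y, z) = (x + 2y,\ 3z)$ from $\mathbb{R}^3$ to $\mathbb{R}^2$ and the standard bases are hypothetical choices, not taken from the text. Applying $T$ to each basis element $e_k$ and recording the components of $T(e_k)$ as the $k$th column produces the $2 \times 3$ matrix $(t_{ik})$:

```python
import numpy as np

# A hypothetical linear transformation T: R^3 -> R^2,
# T(x, y, z) = (x + 2y, 3z), chosen purely for illustration.
def T(v):
    x, y, z = v
    return np.array([x + 2 * y, 3 * z])

# Standard ordered basis (e_1, e_2, e_3) for V = R^3: the rows of the identity.
basis_V = np.eye(3)

# Each T(e_k), written in the standard basis (w_1, w_2) of W = R^2,
# becomes the k-th column of the matrix (t_ik), as in (2.9).
M = np.column_stack([T(e) for e in basis_V])   # a 2 x 3 matrix

print(M)
# [[1. 2. 0.]
#  [0. 0. 3.]]
```

Note that the matrix depends on the ordered bases chosen for $V$ and $W$; a different choice of bases would produce different columns for the same transformation.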

$\quad$ Thus, every linear transformation $T$ of an $n$-dimensional space $V$ into an $m$-dimensional space $W$ gives rise to an $m \times n$ matrix $(t_{ik})$ whose columns consist of the components of $T(e_1), \dots, T(e_n)$ relative to the basis $(w_1, \dots, w_m).$ We call this the matrix representation of $T$ relative to the given choice of ordered bases $(e_1, \dots, e_n)$ for $V$ and $(w_1, \dots, w_m)$ for $W.$ Once we know the matrix $(t_{ik}),$ the components of any element $T(x)$ relative to the basis $(w_1, \dots, w_m)$ can be determined as described in the next theorem:

$\quad$ Theorem 2.13. $\quad$ Let $T$ be a linear transformation in $\mathscr{L}(V,W),$ where $\dim V = n$ and $\dim W = m.$ Let $(e_1, \dots, e_n)$ and $(w_1, \dots, w_m)$ be ordered bases for $V$ and $W,$ respectively, and let $(t_{ik})$ be the $m \times n$ matrix whose entries are determined by the equations \begin{align*} (2.10) \qquad T(e_k) &= \sum_{i=1}^m t_{ik}w_i, \qquad \text{for} \quad k=1, 2, \dots, n. \end{align*} Then, an arbitrary element \begin{align*} (2.11) \qquad x &= \sum_{k=1}^n x_ke_k \end{align*} in $V$ with components $(x_1, \dots, x_n)$ relative to $(e_1, \dots, e_n)$ is mapped by $T$ onto the element \begin{align*} (2.12) \qquad T(x) &= \sum_{i=1}^m y_iw_i \end{align*} in $W$ with components $(y_1, \dots, y_m)$ relative to $(w_1, \dots, w_m).$ The $y_i$ are related to the components of $x$ by the linear equations \begin{align*} (2.13) \qquad y_i &= \sum_{k=1}^n t_{ik}x_k \qquad \text{for} \quad i=1, 2, \dots, m. \end{align*}
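$\quad$ Equation $(2.13)$ is exactly the rule for a matrix-vector product: $y_i$ is the dot product of the $i$th row of $(t_{ik})$ with the component vector of $x.$ A minimal numerical check, reusing the hypothetical map $T(x, y, z) = (x + 2y,\ 3z)$ and the standard bases (both are illustrative assumptions, not part of the theorem):

```python
import numpy as np

# Matrix (t_ik) of the hypothetical map T: R^3 -> R^2,
# T(x, y, z) = (x + 2y, 3z), relative to the standard bases.
M = np.array([[1., 2., 0.],
              [0., 0., 3.]])

def T(v):
    x, y, z = v
    return np.array([x + 2 * y, 3 * z])

# An arbitrary x with components (x_1, x_2, x_3) relative to (e_1, e_2, e_3).
x = np.array([4., -1., 2.])

# Equation (2.13): y_i = sum_k t_ik x_k, i.e. the matrix-vector product.
y = M @ x

# The components of T(x) relative to (w_1, w_2) agree with y.
assert np.allclose(y, T(x))
print(y)
# [2. 6.]
```

The theorem thus reduces evaluating $T$ at any point to one matrix-vector multiplication, once the matrix has been computed from the basis images.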

$\quad$ Proof. $\quad$ With $T(e_k)$ as defined in $(2.10),$ we have \begin{align*} T(x) &= T\left(\sum_{k=1}^n x_ke_k\right) \\ &= \sum_{k=1}^n x_k T(e_k) \\ &= \sum_{k=1}^n x_k \left[\sum_{i=1}^m t_{ik}w_i\right] \\ \\ &= x_1(t_{11}w_1 + t_{21}w_2 + \dots + t_{m1}w_m)\ + \\ &\cdots \\ &+ x_n(t_{1n}w_1 + t_{2n}w_2 + \dots + t_{mn}w_m) \\ \\ &= (t_{11}x_1 + t_{12}x_2 + \dots + t_{1n}x_n)w_1\ + \\ &\cdots \\ &+ (t_{m1}x_1 + t_{m2}x_2 + \dots + t_{mn}x_n)w_m \\ \\ &= \sum_{i=1}^m\left(\sum_{k=1}^nt_{ik}x_k\right)w_i \\ &= \sum_{i=1}^my_iw_i \end{align*} where each $y_i$ is defined as in $(2.13). \quad \blacksquare$

$\quad$ Having chosen a pair of bases $(e_1, \dots, e_n)$ and $(w_1, \dots, w_m)$ for $V$ and $W,$ respectively, every linear transformation $T: V \to W$ has a matrix representation $(t_{ik}).$ Conversely, if we start with any $mn$ scalars arranged as a rectangular matrix $(t_{ik})$ and choose a pair of ordered bases for $V$ and $W,$ then it is easy to prove that there is exactly one linear transformation $T: V \to W$ having this matrix representation. We simply define $T$ at the basis elements of $V$ by the equations in $(2.10).$ Then, by Theorem 2.12, there is one and only one linear transformation $T: V \to W$ with these prescribed values.
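$\quad$ The converse construction can also be sketched in code: given any array of scalars $(t_{ik})$ and a pair of ordered bases, define $T$ on the basis by $(2.10)$ and extend by linearity. The helper `make_T` below is a hypothetical name, and the bases shown are the standard ones, chosen only so the example is concrete:

```python
import numpy as np

# Start from an arbitrary m x n array of scalars (t_ik); this
# particular 2 x 3 array is an illustrative choice.
t = np.array([[1., 2., 0.],
              [0., 0., 3.]])

def make_T(t, basis_V, basis_W):
    """Return the unique linear T: V -> W with matrix (t_ik) relative
    to the given ordered bases (columns of basis_V and basis_W),
    defined on the basis via (2.10) and extended by linearity."""
    def T(x):
        # Components (x_1, ..., x_n) of x relative to the basis of V.
        coords = np.linalg.solve(basis_V, x)
        # T(x) = sum_k x_k T(e_k) = sum_i (sum_k t_ik x_k) w_i.
        return basis_W @ (t @ coords)
    return T

T = make_T(t, np.eye(3), np.eye(2))
print(T(np.array([4., -1., 2.])))
# [2. 6.]
```

With the standard bases this reduces to ordinary matrix-vector multiplication; with other bases, `make_T` first converts $x$ to its coordinate vector and converts the result back, which mirrors how the matrix representation depends on the chosen bases.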