
東京大学 新領域創成科学研究科 メディカル情報生命専攻: January 2024 Exam, Problem 8

Author

zephyr

Description

Suppose that the eigenvalues and the corresponding eigenvectors of an $n \times n$ square matrix $\mathbf{A}$ are $\lambda_1, \dots, \lambda_n$ and $\mathbf{\alpha}_1, \dots, \mathbf{\alpha}_n$, respectively.

Suppose that $\mathbf{I}_n$ is the $n \times n$ identity matrix, and that the inverse matrix of an invertible matrix $\mathbf{C}$ is $\mathbf{C}^{-1}$.

Answer the following questions.

  1. Show all the eigenvalues and the corresponding eigenvectors of $\mathbf{A}^2$.

  2. If $\lambda_1, \dots, \lambda_n$ are mutually different, show that $\mathbf{P}^{-1} \mathbf{A} \mathbf{P}$ is a diagonal matrix, where $\mathbf{P} = (\mathbf{\alpha}_1, \dots, \mathbf{\alpha}_n)$ is a matrix of concatenated eigenvectors.

  3. Show all the eigenvalues and the corresponding eigenvectors of $\mathbf{B}$.

$$\mathbf{B} = \begin{pmatrix} 3 & 0 & 0 \\ -2 & 3 & 2 \\ 0 & 0 & 1 \end{pmatrix}$$

  4. Suppose that $\mu$ is the maximum eigenvalue of $\mathbf{B}$, and $\mathbf{\gamma} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}$. Calculate $\mathbf{\delta} = (\mathbf{B} - \mu \mathbf{I}_3)\mathbf{\gamma}$.

  5. Suppose that $\beta$ is the eigenvector of $\mathbf{B}$ corresponding to the minimum eigenvalue. Calculate $\mathbf{Q}^{-1}\mathbf{B}\mathbf{Q}$ using $\mathbf{Q} = (\mathbf{\delta}, \mathbf{\gamma}, \beta)$, the matrix concatenating $\mathbf{\delta}$, $\mathbf{\gamma}$, and $\beta$.

  6. Suppose that $m$ is an arbitrary positive integer. Calculate $\mathbf{B}^m$.
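As a quick numerical cross-check of questions 3 to 6 (a sketch added for this write-up, not part of the exam itself), the following numpy snippet computes the eigenvalues of $\mathbf{B}$, the vector $\mathbf{\delta}$, and $\mathbf{Q}^{-1}\mathbf{B}\mathbf{Q}$. Here $\beta = (0, -1, 1)^T$ is one valid choice of eigenvector (unique only up to scaling), and the closed form used for $\mathbf{J}^m$ is the standard power formula for a $2 \times 2$ Jordan block.

```python
import numpy as np

# The matrix from question 3.
B = np.array([[ 3., 0., 0.],
              [-2., 3., 2.],
              [ 0., 0., 1.]])

# Question 3: eigenvalues of B.
print(np.round(np.linalg.eigvals(B), 6))   # -> 3, 3, 1 (in some order)

# Question 4: delta = (B - mu * I_3) @ gamma with mu = 3 the maximum eigenvalue.
gamma = np.array([1., 0., 0.])
delta = (B - 3. * np.eye(3)) @ gamma
print(delta)                                # -> [ 0. -2.  0.]

# Question 5: beta = (0, -1, 1)^T is an eigenvector for the minimum eigenvalue 1.
beta = np.array([0., -1., 1.])
Q = np.column_stack([delta, gamma, beta])
print(np.linalg.inv(Q) @ B @ Q)             # -> [[3. 1. 0.], [0. 3. 0.], [0. 0. 1.]]

# Question 6: with J = Q^{-1} B Q as above, J^m has the closed form below,
# so B^m = Q @ J^m @ Q^{-1}; check against direct matrix powers.
m = 7
Jm = np.array([[3.**m, m * 3.**(m - 1), 0.],
               [0.,    3.**m,           0.],
               [0.,    0.,              1.]])
print(np.allclose(Q @ Jm @ np.linalg.inv(Q), np.linalg.matrix_power(B, m)))  # -> True
```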

## **Kai**

### 1. Positive Eigenvalues and Normalized Eigenvectors of $\mathbf{A}^T \mathbf{A}$

Given the singular value decomposition (SVD) of $\mathbf{A}$ as $\mathbf{A} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$, we can express $\mathbf{A}^T \mathbf{A}$ as follows:

$$\mathbf{A}^T \mathbf{A} = (\mathbf{U} \mathbf{\Sigma} \mathbf{V}^T)^T (\mathbf{U} \mathbf{\Sigma} \mathbf{V}^T) = \mathbf{V} \mathbf{\Sigma}^T \mathbf{U}^T \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T = \mathbf{V} \mathbf{\Sigma}^2 \mathbf{V}^T$$

The matrix $\mathbf{\Sigma}^2$ is diagonal with diagonal elements $\sigma_k^2$ ($k = 1, \ldots, r$). Thus, the positive eigenvalues of $\mathbf{A}^T \mathbf{A}$ are exactly the $\sigma_k^2$, and the associated normalized eigenvectors are the corresponding columns of $\mathbf{V}$.
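A minimal numerical check of this identity on a random matrix; the shape of $\mathbf{A}$ is an arbitrary assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))              # a random n x m matrix (n = 4, m = 6)

# Compact SVD: A = U @ diag(s) @ Vt, with singular values s in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The positive eigenvalues of A^T A are exactly the squared singular values ...
w = np.linalg.eigvalsh(A.T @ A)              # eigenvalues in ascending order
print(np.allclose(w[::-1][:len(s)], s**2))   # -> True

# ... and each column of V is a corresponding (normalized) eigenvector.
v1 = Vt[0]                                   # first right-singular vector
print(np.allclose(A.T @ A @ v1, s[0]**2 * v1))  # -> True
```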

### 2. Surjectivity and Injectivity of $T_{\mathbf{A}}$

**Surjective (onto)**: The mapping $T_{\mathbf{A}}: \mathbb{R}^m \to \mathbb{R}^n$ is surjective precisely when the range of $\mathbf{A}$ is all of $\mathbb{R}^n$, i.e., $\mathbf{A}$ has full row rank. This occurs when $r = n \leq m$.

**Injective (one-to-one)**: The mapping $T_{\mathbf{A}}$ is injective precisely when the kernel of $\mathbf{A}$ contains only the zero vector, i.e., $\mathbf{A}$ has full column rank. This occurs when $r = m \leq n$.
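The two rank conditions are easy to see numerically; the sketch below uses arbitrary shapes for illustration (a random wide matrix almost surely has full row rank, a random tall one full column rank):

```python
import numpy as np

rng = np.random.default_rng(1)

# r = n <= m: full row rank, so T_A : R^m -> R^n is surjective (here 3 x 5).
A_onto = rng.standard_normal((3, 5))
print(np.linalg.matrix_rank(A_onto))        # -> 3, every b in R^3 is reachable

# r = m <= n: full column rank, so T_A is injective (here 5 x 3);
# the kernel is trivial, so Ax = 0 forces x = 0.
A_one_to_one = rng.standard_normal((5, 3))
print(np.linalg.matrix_rank(A_one_to_one))  # -> 3
```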

### 3. Image of $T_{\mathbf{B}}$ and Kernel of $T_{\mathbf{A}}$

The pseudoinverse $\mathbf{A}^+$ is defined as $\mathbf{A}^+ = \mathbf{V} \mathbf{\Sigma}^{-1} \mathbf{U}^T$. Consider $\mathbf{B} = \mathbf{I}_m - \mathbf{A}^+ \mathbf{A}$. We need to show that $\mathrm{Im}(T_{\mathbf{B}}) = \mathrm{Ker}(T_{\mathbf{A}})$. First, using the pseudoinverse property $\mathbf{A} \mathbf{A}^+ \mathbf{A} = \mathbf{A}$:

$$\mathbf{A} \mathbf{B} = \mathbf{A} (\mathbf{I}_m - \mathbf{A}^+ \mathbf{A}) = \mathbf{A} - \mathbf{A} \mathbf{A}^+ \mathbf{A} = \mathbf{A} - \mathbf{A} = \mathbf{0}$$

so $\mathbf{A} (\mathbf{B} \mathbf{y}) = \mathbf{0}$ for every $\mathbf{y} \in \mathbb{R}^m$, i.e., $\mathrm{Im}(\mathbf{B}) \subseteq \mathrm{Ker}(\mathbf{A})$. Conversely, for any $\mathbf{x} \in \mathrm{Ker}(\mathbf{A})$ we have $\mathbf{A} \mathbf{x} = \mathbf{0}$, hence

$$\mathbf{B} \mathbf{x} = (\mathbf{I}_m - \mathbf{A}^+ \mathbf{A}) \mathbf{x} = \mathbf{x} - \mathbf{A}^+ (\mathbf{A} \mathbf{x}) = \mathbf{x}$$

Thus, $\mathbf{x} \in \mathrm{Im}(\mathbf{B})$, so $\mathrm{Ker}(\mathbf{A}) \subseteq \mathrm{Im}(\mathbf{B})$. Therefore, $\mathrm{Im}(\mathbf{B}) = \mathrm{Ker}(\mathbf{A})$.
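Both inclusions can be confirmed numerically with a random rank-deficient matrix (the sizes and rank below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 4, 6, 3
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # rank r < m

B = np.eye(m) - np.linalg.pinv(A) @ A

# Im(B) is contained in Ker(A): A @ B vanishes.
print(np.allclose(A @ B, 0))                       # -> True

# Ker(A) is contained in Im(B): kernel vectors are fixed points of B.
# The last m - r right-singular vectors of A span Ker(A).
_, _, Vt = np.linalg.svd(A)
x = Vt[r:].T @ rng.standard_normal(m - r)          # a random vector in Ker(A)
print(np.allclose(A @ x, 0), np.allclose(B @ x, x))  # -> True True
```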

### 4. Orthogonal Decomposition

Given $\mathbf{x} = \mathbf{x}_1 + \mathbf{x}_2$ where $\mathbf{x}_1 = \mathbf{B} \mathbf{x}$ and $\mathbf{x}_2 = \mathbf{x} - \mathbf{x}_1$:

$$\mathbf{x}_2 = \mathbf{x} - \mathbf{B} \mathbf{x} = \mathbf{x} - (\mathbf{I}_m - \mathbf{A}^+ \mathbf{A}) \mathbf{x} = \mathbf{A}^+ \mathbf{A} \mathbf{x}$$

Since $\mathbf{A}^+ \mathbf{A}$ is symmetric, so is $\mathbf{B} = \mathbf{I}_m - \mathbf{A}^+ \mathbf{A}$, i.e., $\mathbf{B}^T = \mathbf{B}$. Using this together with the pseudoinverse property $\mathbf{A}^+ \mathbf{A} \mathbf{A}^+ = \mathbf{A}^+$:

$$\mathbf{x}_1^T \mathbf{x}_2 = (\mathbf{B} \mathbf{x})^T (\mathbf{A}^+ \mathbf{A} \mathbf{x}) = \mathbf{x}^T \mathbf{B}^T \mathbf{A}^+ \mathbf{A} \mathbf{x} = \mathbf{x}^T (\mathbf{I}_m - \mathbf{A}^+ \mathbf{A}) \mathbf{A}^+ \mathbf{A} \mathbf{x} = \mathbf{x}^T (\mathbf{A}^+ \mathbf{A} - \mathbf{A}^+ \mathbf{A}) \mathbf{x} = 0$$

Thus, $\mathbf{x}_1$ and $\mathbf{x}_2$ are orthogonal, and $\mathbf{x} = \mathbf{x}_1 + \mathbf{x}_2$ is an orthogonal decomposition of $\mathbf{x}$.
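A short numerical confirmation of the decomposition (random shapes, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))
x = rng.standard_normal(6)

P = np.linalg.pinv(A) @ A        # A^+ A, the orthogonal projector onto Im(A^T)
x1 = x - P @ x                   # = B @ x,   the component in Ker(A)
x2 = P @ x                       # = A^+ A x, the component in Im(A^T)

print(np.allclose(x1 + x2, x))   # -> True
print(np.isclose(x1 @ x2, 0.))   # -> True: the two components are orthogonal
```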

### 5. Minimizing $(\mathbf{A} \mathbf{x} - \mathbf{b})^T (\mathbf{A} \mathbf{x} - \mathbf{b})$

Let $\mathbf{x}_0 = \mathbf{A}^+ \mathbf{b}$. We need to show that $\mathbf{x} = \mathbf{x}_0$ minimizes the expression. Consider the error:

$$\mathbf{A} \mathbf{x} - \mathbf{b} = \mathbf{A} (\mathbf{x} - \mathbf{x}_0) + (\mathbf{A} \mathbf{x}_0 - \mathbf{b})$$

The first term lies in $\mathrm{Im}(\mathbf{A})$, while the second, $\mathbf{A} \mathbf{x}_0 - \mathbf{b} = (\mathbf{A} \mathbf{A}^+ - \mathbf{I}_n) \mathbf{b}$, is orthogonal to $\mathrm{Im}(\mathbf{A})$: using $(\mathbf{A} \mathbf{A}^+)^T = \mathbf{A} \mathbf{A}^+$ and $\mathbf{A} \mathbf{A}^+ \mathbf{A} = \mathbf{A}$, for any $\mathbf{y} \in \mathbb{R}^m$,

$$(\mathbf{A} \mathbf{y})^T (\mathbf{A} \mathbf{A}^+ - \mathbf{I}_n) \mathbf{b} = \mathbf{y}^T (\mathbf{A}^T \mathbf{A} \mathbf{A}^+ - \mathbf{A}^T) \mathbf{b} = \mathbf{y}^T \big( (\mathbf{A} \mathbf{A}^+ \mathbf{A})^T - \mathbf{A}^T \big) \mathbf{b} = 0$$

The cross terms therefore vanish, and

$$(\mathbf{A} \mathbf{x} - \mathbf{b})^T (\mathbf{A} \mathbf{x} - \mathbf{b}) = (\mathbf{A} (\mathbf{x} - \mathbf{x}_0))^T (\mathbf{A} (\mathbf{x} - \mathbf{x}_0)) + (\mathbf{A} \mathbf{x}_0 - \mathbf{b})^T (\mathbf{A} \mathbf{x}_0 - \mathbf{b})$$

The second term does not depend on $\mathbf{x}$, and the first is nonnegative and vanishes at $\mathbf{x} = \mathbf{x}_0$. Hence $\mathbf{x} = \mathbf{x}_0 = \mathbf{A}^+ \mathbf{b}$ minimizes $(\mathbf{A} \mathbf{x} - \mathbf{b})^T (\mathbf{A} \mathbf{x} - \mathbf{b})$.
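Numerically, $\mathbf{x}_0 = \mathbf{A}^+ \mathbf{b}$ matches numpy's least-squares solver, and random perturbations only increase the residual (the shapes below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((6, 4))               # overdetermined: b is not in Im(A)
b = rng.standard_normal(6)

x0 = np.linalg.pinv(A) @ b
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)  # reference least-squares solution
print(np.allclose(x0, x_ls))                  # -> True

# Any perturbation of x0 can only increase the residual norm.
for _ in range(3):
    x = x0 + 0.1 * rng.standard_normal(4)
    print(np.linalg.norm(A @ x - b) >= np.linalg.norm(A @ x0 - b))  # -> True
```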

## **Knowledge**

### Key Vocabulary

- singular value decomposition (SVD) 奇异值分解
- pseudoinverse 广义逆
- surjective 满射
- injective 单射
- orthogonal decomposition 正交分解

### References

1. Gilbert Strang, *Linear Algebra and Its Applications*, Chapter 7: The Singular Value Decomposition (SVD)
2. Gene H. Golub and Charles F. Van Loan, *Matrix Computations*, Chapter 2: Matrix Analysis