Implementing Truncated Matrix Decompositions for Core.Matrix
Eigendecompositions and Singular Value Decompositions appear in a variety of settings in machine learning and data mining. The eigendecomposition looks like so:
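The equation itself appears to have been lost (likely an embedded image); the standard eigendecomposition referred to here is:

$$\mathbf{A} = \mathbf{Q} \mathbf{\Lambda} \mathbf{Q}^{-1}$$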
$\mathbf{Q}$ contains the eigenvectors of $\mathbf{A}$, and $\mathbf{\Lambda}$ is a diagonal matrix containing the eigenvalues.
The singular value decomposition looks like:
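This equation also seems to have been lost in extraction; the standard singular value decomposition is:

$$\mathbf{A} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T$$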
$\mathbf{U}$ contains the eigenvectors of the covariance matrix $\mathbf{A}\mathbf{A}^T$, and $\mathbf{V}$ contains the eigenvectors of the Gram matrix $\mathbf{A}^T\mathbf{A}$.
The truncated variants of these decompositions compute only a few eigenvalues and eigenvectors, or a few singular values and singular vectors.
This is important because (i) the smaller eigenvalues are often discarded anyway, and (ii) it is wasteful to compute the entire decomposition only to keep a few rows and columns of the resulting matrices.
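The post's implementation is in Clojure (core.matrix via Kublai); as a language-neutral sketch of the same idea, here is what truncated decompositions look like with SciPy's ARPACK-backed `svds` and `eigsh` (the Python library and random test matrix are illustrative choices, not part of the original post):

```python
import numpy as np
from scipy.sparse.linalg import svds, eigsh

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 40))
k = 5

# Truncated SVD: computes only the k largest singular triplets,
# rather than the full decomposition.
U, s, Vt = svds(A, k=k)          # U: (100, 5), s: (5,), Vt: (5, 40)

# Full SVD for comparison; computing this and then discarding all but
# k columns wastes the extra work.
Uf, sf, Vtf = np.linalg.svd(A, full_matrices=False)

# svds returns singular values in ascending order.
assert np.allclose(np.sort(s)[::-1], sf[:k])

# Truncated eigendecomposition of the symmetric Gram matrix A^T A:
# its eigenvalues are the squared singular values of A.
w, Q = eigsh(A.T @ A, k=k)
assert np.allclose(np.sort(w)[::-1], sf[:k] ** 2)
```

For large sparse matrices this is the difference between a few matrix-vector products (Lanczos/Arnoldi iterations) and a full dense factorization.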
For core.matrix, I implemented these truncated decompositions in Kublai. Details below.