Theory - 2023.2 English

Vitis Libraries

Release Date
2023-12-20
Version
2023.2 English

The SVD of an m-by-n matrix A is given by

\[A = U \Sigma V^T \quad (A = U \Sigma V^H \text{ in the complex case})\]

where \(U\) and \(V\) are orthogonal (unitary, in the complex case) matrices and \(\Sigma\) is an m-by-n diagonal matrix with nonnegative real diagonal elements, the singular values of \(A\).

Theoretically, the SVD can be characterized by the fact that the singular values are the square roots of the eigenvalues of \(A^TA\), the columns of \(V\) are the corresponding eigenvectors, and the columns of \(U\) are the eigenvectors of \(AA^T\), assuming distinct singular values. This characterization reduces the general m-by-n SVD problem to a symmetric eigenvalue problem. Explicitly forming \(AA^T\) and \(A^TA\) introduces roundoff errors that slightly degrade accuracy, but when high accuracy is not required, this approach greatly reduces the computational complexity.
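As a small worked example (the matrix below is chosen for illustration and is not from the library itself), the singular values of a real 2-by-2 matrix can be read off from the eigenvalues of \(A^TA\):

\[A = \begin{pmatrix} 3 & 0 \\ 4 & 5 \end{pmatrix}, \qquad A^TA = \begin{pmatrix} 25 & 20 \\ 20 & 25 \end{pmatrix}\]

The eigenvalues of \(A^TA\) are \(25 + 20 = 45\) and \(25 - 20 = 5\), so the singular values of \(A\) are \(\sigma_1 = \sqrt{45} = 3\sqrt{5}\) and \(\sigma_2 = \sqrt{5}\).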

There are two dominant categories of SVD algorithms for dense matrix: bidiagonalization methods and Jacobi methods. The classical bidiagonalization method is a long sequential calculation, FPGA has no advantage in that case. In contrast, Jacobi methods apply plane rotations to the entire matrix A. Two-sided Jacobi methods iteratively apply rotations on both sides of matrix A to bring it to diagonal form, while one-sided Hestenes Jacobi methods apply rotations on one side to orthogonalize the columns of matrix A and bring \(A^TA\) to diagonal form. While Jacobi methods are often slower than bidiagonalization methods, they have better potential in unrolling and pipelining.
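To make the one-sided Hestenes Jacobi idea concrete, the following is a minimal pure-Python sketch, not the library's implementation: it sweeps over column pairs, applies the plane rotation that makes each pair orthogonal, and recovers the singular values as the norms of the orthogonalized columns. The function name, sweep count, and tolerance are illustrative assumptions.

```python
import math

def hestenes_jacobi_svd(a, sweeps=30, tol=1e-12):
    """Illustrative one-sided Hestenes Jacobi SVD sketch (not the Vitis code).

    `a` is a list of m rows of n floats. Plane rotations are applied on the
    right to orthogonalize the columns of A; the singular values are then
    the Euclidean norms of the resulting columns, returned in descending order.
    """
    m, n = len(a), len(a[0])
    a = [row[:] for row in a]  # work on a copy
    for _ in range(sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                # Inner products of columns p and q (entries of A^T A).
                app = sum(a[i][p] * a[i][p] for i in range(m))
                aqq = sum(a[i][q] * a[i][q] for i in range(m))
                apq = sum(a[i][p] * a[i][q] for i in range(m))
                if abs(apq) <= tol * math.sqrt(app * aqq):
                    continue  # columns already (numerically) orthogonal
                converged = False
                # Rotation angle that zeroes the off-diagonal term apq.
                tau = (aqq - app) / (2.0 * apq)
                t = math.copysign(1.0, tau) / (abs(tau) + math.sqrt(1.0 + tau * tau))
                c = 1.0 / math.sqrt(1.0 + t * t)
                s = c * t
                # Rotate columns p and q of A in place.
                for i in range(m):
                    aip, aiq = a[i][p], a[i][q]
                    a[i][p] = c * aip - s * aiq
                    a[i][q] = s * aip + c * aiq
        if converged:
            break
    # Singular values = norms of the mutually orthogonal columns.
    return sorted(
        (math.sqrt(sum(a[i][j] ** 2 for i in range(m))) for j in range(n)),
        reverse=True,
    )

print(hestenes_jacobi_svd([[3.0, 0.0], [4.0, 5.0]]))
```

The independence of the column-pair inner products within a sweep is what gives this method its hardware appeal: distinct non-overlapping pairs can be rotated in parallel, which is why the one-sided form unrolls and pipelines well on an FPGA.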