metamerist

Friday, August 25, 2006

Linear Algebra for Graphics Geeks (SVD-II)

This is post #2 of Linear Algebra for Graphics Geeks. The first post is here, and the subject at hand is the Singular Value Decomposition.

A = U*S*V'

Previously, we found that the SVD factors a matrix into the product U*S*V', which amounts to a rotation * scale * rotation. From a linear algebra perspective, the rotation matrices U and V are orthogonal matrices, and the scale matrix S is more commonly referred to as a diagonal matrix.

(Note: If A is not a square matrix, S will not be square, but that's something we'll need to address later. For now, let's assume A is square.)

Let's talk about U and V. Because U and V are orthogonal matrices, their inverses and their transposes are one and the same. (Orthogonal Matrix: Wikipedia, MathWorld). This makes it easy to manipulate the SVD equation. For example, let's multiply both sides of the equation by V.

A*V = U*S*V'*V

V' is the transpose of V. Because V is orthogonal, V' is also the inverse of V. Consequently, the product V'*V is the identity matrix, and the V' and V on the right side of the equation cancel each other out, leaving us with the following equation.

A*V = U*S
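Here's a quick numpy sketch that sanity-checks both identities on a random matrix (an illustration, not part of the derivation). Note that numpy's svd returns the transpose V' directly as its third factor, and the singular values as a plain vector:

import numpy as np

A = np.random.rand(2, 2)                 # any square matrix will do
U, s, Vt = np.linalg.svd(A)              # numpy hands back V' (Vt) ...
S = np.diag(s)                           # ... and s as a vector, so rebuild S
V = Vt.T

print(np.allclose(Vt @ V, np.eye(2)))    # V'*V = I   -> True
print(np.allclose(A @ V, U @ S))         # A*V = U*S  -> True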

This equation also has interesting implications. U and V are orthogonal matrices, and another useful property of orthogonal matrices is that their columns form an orthonormal basis (Wikipedia, Mathworld). What that means is that if you consider each column of V to be a vector, you'll have a set of unit-length vectors that are all perpendicular to each other. (Note: the rows of an orthogonal matrix form an orthonormal basis as well, but let's focus on columns for now.)

In 2D, for example, the vectors [1 0] and [0 1] form an orthonormal basis. If you rotate them both together, they'll remain perpendicular but form a new orthonormal basis. If you think about it, the columns of a rotation matrix (Wikipedia, Mathworld) always form an orthonormal basis.
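Here's a tiny sketch of that claim: build a 2D rotation matrix for an arbitrary angle (0.7 radians here, but any angle works) and check that its columns are still perpendicular and unit length:

import numpy as np

theta = 0.7                                   # an arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

c0, c1 = R[:, 0], R[:, 1]                     # the columns as vectors
print(np.isclose(c0 @ c1, 0.0))               # perpendicular -> True
print(np.isclose(np.linalg.norm(c0), 1.0))    # unit length   -> True
print(np.isclose(np.linalg.norm(c1), 1.0))    # unit length   -> True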

Back to our new equation...

A*V = U*S

Since the columns of V form an orthonormal basis, the columns of U form an orthonormal basis, and S is a scale matrix, we have:

A * (orthonormal basis) = (orthonormal basis) * (scale matrix)

In other words, if we use the matrix A to transform a set of orthogonal vectors (the column vectors of V), our result is a new set of orthogonal vectors (the columns of U) multiplied by a scale matrix (S).
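Reading A*V = U*S one column at a time says the same thing: A maps the i-th column of V to the i-th column of U stretched by the i-th singular value. A quick sketch:

import numpy as np

A = np.random.rand(2, 2)
U, s, Vt = np.linalg.svd(A)
V = Vt.T

for i in range(2):
    lhs = A @ V[:, i]                # A applied to the i-th column of V
    rhs = s[i] * U[:, i]             # i-th column of U scaled by s[i]
    print(np.allclose(lhs, rhs))     # True both times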

This is a point where visualizations can do a lot of good. Let's say V is an orthonormal basis in 2D. The same rules apply in higher dimensions, but 2D is easy to visualize.

V might be [0 1] and [1 0] as we see below, or any rotation of those two vectors. The space swept out by all possible rotations is the unit circle, also shown below.

[image: orthonormal basis vectors on the unit circle]

If A is an invertible (nonsingular) matrix, then A will transform the unit circle into an ellipse. For example, A might transform the unit circle into the following ellipse.

[image: an ellipse, the image of the unit circle under A, with its principal axes drawn]

The orthogonal vectors shown are the principal axes of the ellipse (the semimajor and semiminor axes). With these pieces in mind, let's return to our equation:

A*V = U*S

The ellipse and its principal axes shown above give us a geometric interpretation of the equation: matrix A transforms the columns of V (the axes of the unit circle) into the principal axes of the ellipse, which are given by U*S.

The columns of matrix U contain unit vectors along the axes of the ellipse. The scale matrix S contains the length of each of those vectors along its diagonal (S[1,1] is the length of the vector in the first column of U, S[2,2] is the length of the vector in the second column of U, and so on).
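Here's a sketch that makes this concrete with a hand-picked invertible matrix: push a dense sampling of the unit circle through A and compare the longest and shortest resulting radii to the singular values:

import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])                       # an arbitrary invertible matrix
s = np.linalg.svd(A, compute_uv=False)           # singular values, largest first

t = np.linspace(0.0, 2.0 * np.pi, 10000)
circle = np.vstack([np.cos(t), np.sin(t)])       # unit circle, one point per column
radii = np.linalg.norm(A @ circle, axis=0)       # lengths of the transformed points

print(np.isclose(radii.max(), s[0], atol=1e-3))  # semimajor length = S[1,1] -> True
print(np.isclose(radii.min(), s[1], atol=1e-3))  # semiminor length = S[2,2] -> True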

Here are some more graphics.

In 2D, an invertible (nonsingular) matrix A transforms the unit circle (or any circle) into an ellipse. The result might be a circle, but a circle is just a special case of an ellipse.

[image: the unit circle and the ellipse it maps to under A]

In 3D, an invertible matrix A transforms the unit sphere into an ellipsoid.

[image: the unit sphere and the ellipsoid it maps to under A]

Likewise, in N dimensions, an invertible matrix A transforms the unit n-sphere (hypersphere) into an n-ellipsoid (hyperellipsoid).
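The same sanity check works in any dimension. In this sketch, random unit vectors in 5D are pushed through A, and every resulting length lands between the smallest and largest singular values, the extreme semi-axes of the hyperellipsoid:

import numpy as np

n = 5
A = np.random.rand(n, n)                      # almost surely invertible
s = np.linalg.svd(A, compute_uv=False)        # singular values, descending

x = np.random.randn(n, 1000)                  # random directions ...
x /= np.linalg.norm(x, axis=0)                # ... normalized onto the unit n-sphere
lengths = np.linalg.norm(A @ x, axis=0)

print(np.all(lengths <= s[0] + 1e-9))         # never longer than s_max  -> True
print(np.all(lengths >= s[-1] - 1e-9))        # never shorter than s_min -> True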

Viewing the SVD from the perspective of matrix A acting as a transform on the unit circle is useful in understanding the nature of the decomposition.

(see also: Olga Sorkine's slides: 3D Geometry for Computer Graphics)

Update (11/18/2006): I've created a small applet to demonstrate the ideas in this post. There's an explanation of the applet in this post.

to be continued here

3 Comments:

Anonymous said...

Hi there,

Thanks for the nice article! It helped me demystify a number of SVD-related questions I had.

2:01 AM  
Anonymous said...

Thanks, you really helped me understand SVD.

6:03 AM  
Bill Walker said...

Hi,

I'm trying to work out how to use the results from an SVD analysis to orient an object along the cardinal axes.

If I take a matrix, A, and transform a sphere, I get an ellipsoid.
If A = U*S*V', and A*V = U*S, then applying V' to the ellipsoid generated from A ought to give me the axis-aligned ellipsoid.

I'm trying to test/verify this by generating random points on the surface of a sphere, scaling and rotating that data (matrix A) to give points/vectors on an ellipsoid, and then running an SVD on those vectors.

At present, it's hit-or-miss whether I can get the axes of V to line up with the ellipsoid and get V' to orient the ellipsoid with the cardinal axes. V' _does_ orient the orthonormal vectors of V, so that's something.

2:24 PM  
