metamerist

Tuesday, August 29, 2006

Linear Algebra for Graphics Geeks (SVD-IV)

This is post #4 of Linear Algebra for Graphics Geeks. Here are links to posts #1, #2, #3. The subject at hand is the Singular Value Decomposition.

In the last episode we looked at the determinant and its relationship to the SVD. In closing we came to the realization that zeroes on the diagonal of S were evidence of A's singularity.

While a zero determinant indicates the singularity of a matrix (it has no inverse), a really great property of the SVD is that it not only tells you if a matrix is singular, but also gives you an indication of which dimensions have been blown away (zeroed).

Let's look at a very simple example. Consider the following 2x2 matrix.

[1 0]
[0 0]

Transforming 2D vectors [x y]' with this matrix eliminates the y component, projecting everything down onto the x axis. That is,

new_x = [1 0] * [x y]'
new_y = [0 0] * [x y]'
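
If you'd like to check this numerically, here's a quick sketch in Python with NumPy (my tool of choice for this sort of thing):

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])

v = np.array([3.0, 5.0])   # an arbitrary [x y]' vector
print(A @ v)               # prints [3. 0.]: the y component is zeroed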

No matter what, the transformed value of y becomes zero. The SVD of this matrix, U*S*V', is:

[1 0] * [1 0] * [1 0]
[0 1]   [0 0]   [0 1]

Note that U and V' (and V) are equal to the identity matrix. The first column of U is [1 0], which is a unit vector along the x axis. The second column of U is [0 1], which is a unit vector along the y axis.

The top-left element of S is 1, which shows that the dimension indicated by the first column of U (in this case, the x axis) is unchanged (scaled by 1); the bottom-right element of S is 0, which shows that the dimension indicated by the second column of U (the y axis) is blown away to zero.
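
You can confirm this with NumPy, too. One caveat: numpy.linalg.svd returns the diagonal of S as a 1-D array rather than a full matrix, and the signs of columns of U and rows of V' can flip (the SVD's usual sign ambiguity), but for this matrix you should get the factors above:

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])

U, s, Vt = np.linalg.svd(A)
print(U)    # the 2x2 identity
print(s)    # [1. 0.], the diagonal of S
print(Vt)   # the 2x2 identity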

Let's look at another matrix.

[2 0]
[0 0]


Like the first example, this matrix again zeroes y values, but this time the scaling factor for the x dimension is 2, so all x values are doubled. The SVD of this matrix, U*S*V', is:

[1 0] * [2 0] * [1 0]
[0 1]   [0 0]   [0 1]


The only change from the previous result is that the top-left element of S is now 2, indicating the new scaling factor for the x dimension.
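
The same quick check in NumPy (compute_uv=False asks for just the singular values):

import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 0.0]])

print(np.linalg.svd(A, compute_uv=False))   # [2. 0.]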

This time, let's eliminate the x dimension and double the y dimension.

[0 0]
[0 2]

The SVD for this matrix, U*S*V', is:

[0 1] * [2 0] * [0 1]
[1 0]   [0 0]   [1 0]

It might be surprising that S is the same as it was in the previous example. Why? We're not scaling the x dimension by 2 this time; we're scaling y by 2. Why is the top-left element of S still 2? The answer: the SVD sorts the elements on the diagonal of S in descending order. The top-left element of S contains the largest value and the bottom-right element of S contains the smallest value (we're still assuming square matrices).

If you look at U, you'll see that the column vectors have been swapped. The first column of U, [0 1], is a unit vector on the y axis. The second column of U, [1 0], is a unit vector on the x axis. You'll also see the rows of V' have been swapped (the matrix is symmetric, so a column swap and a row swap are indistinguishable here, but the rows of V' are our concern).
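
Again, this is easy to confirm in NumPy: the singular values come back sorted, and the swap shows up in the columns of U (up to the usual sign ambiguity):

import numpy as np

A = np.array([[0.0, 0.0],
              [0.0, 2.0]])

U, s, Vt = np.linalg.svd(A)
print(s)   # [2. 0.], sorted in descending order
print(U)   # the columns of the identity, swapped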

One reason these examples are so simple and clean is that there's no rotation involved. We've simply scaled the x and y axes without rotation. As a consequence, the orthonormal basis formed by the columns of U consists of vectors on the x and y axes.

Here's another nice property of the SVD. Zeroes on the diagonal of S indicate which of the column vectors of U correspond to dimensions that have been blown away. Not only that, values near zero on the diagonal of S indicate which of the column vectors of U correspond to nearly collapsed dimensions. This is valuable information because near zero is often as bad as zero. Also, the rank is easily determined by inspecting S: the rank of A is equal to the number of non-zero elements on the diagonal of S.
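
In code, a rank estimate falls straight out of the SVD. The helper and tolerance below are just illustrative; in practice you'd pick a cutoff suited to your data (NumPy's built-in matrix_rank does essentially the same thing):

import numpy as np

def svd_rank(A, tol=1e-12):
    # Rank = the number of singular values meaningfully above zero.
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s > tol))

A = np.array([[2.0, 0.0],
              [0.0, 0.0]])
print(svd_rank(A))               # 1
print(np.linalg.matrix_rank(A))  # also 1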

Next, let's look at matrix inversion.
