metamerist

Tuesday, September 19, 2006

Linear Algebra for Graphics Geeks (Pseudoinverse)

This is post #9 of Linear Algebra for Graphics Geeks. Here are links to posts #1, #2, #3, #4, #5, #6, #7, #8.

This post is on the Moore-Penrose pseudoinverse of a matrix. I'm going to present this in ways I've never seen it presented, so wish me luck.

Let's start with the following matrix.

$$\begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

It scales x by 3, y by 2, and it leaves z as it is.

$$\begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3x \\ 2y \\ z \end{bmatrix}$$

This is a scale matrix, a diagonal matrix. It can be used to transform from one 3D space to another 3D space that is three times as wide and twice as high. It's a nonsingular matrix. There's no reduction in dimensionality. It's invertible.

The inverse of a diagonal matrix is trivial. Each element on the diagonal is replaced by its reciprocal. Here's the inverse of our matrix.

$$\begin{bmatrix} \frac{1}{3} & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

And, here it is in action.

$$\begin{bmatrix} \frac{1}{3} & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 3x \\ 2y \\ z \end{bmatrix} = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$

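If you'd like to follow along in code, here's a minimal sketch of the above in Python with NumPy (my choice for these examples; any matrix library would do):

```python
import numpy as np

# The scale matrix: x is tripled, y is doubled, z is left alone.
A = np.diag([3.0, 2.0, 1.0])

v = np.array([1.0, 1.0, 1.0])
print(A @ v)                    # [3. 2. 1.]

# Inverting a diagonal matrix: take reciprocals along the diagonal.
A_inv = np.diag(1.0 / np.diag(A))
print(A_inv @ (A @ v))          # [1. 1. 1.] -- back to the original vector
```
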
In the last post on linear algebra, I discussed "the column view" and presented some examples in which columns of the identity matrix were zeroed out. Let's do something similar and zero out the last column of our matrix.

$$\begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

The effect of this matrix is much the same as that of the original: x is tripled and y is doubled, but this time z is zeroed out; that is, everything is scaled in x and y and then projected onto the x-y plane.

$$\begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 3x \\ 2y \\ 0 \end{bmatrix}$$

This matrix is singular. It's not invertible. If you read the last post on this subject, it should be clear that the dimensionality of the column space is 2 and that this matrix is less than full rank.
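
A quick sketch confirming the singularity in NumPy (same setup as the sketch above):

```python
import numpy as np

# The scale matrix with its last column zeroed out.
B = np.diag([3.0, 2.0, 0.0])

print(np.linalg.matrix_rank(B))     # 2 -- less than full rank
print(np.linalg.det(B))             # 0.0

try:
    np.linalg.inv(B)                # division by zero lurks here
except np.linalg.LinAlgError as e:
    print("not invertible:", e)     # Singular matrix
```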

If we try to invert this diagonal matrix, we'll divide by zero when we try to replace the element in the third column with its reciprocal, but here's the big question:

What if we invert the elements we can invert (the non-zero elements) and forget about the elements we can't invert (the zeroes)?

If we do that, we'll wind up with this matrix:

$$\begin{bmatrix} \frac{1}{3} & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

Let's try using this "we-inverted-what-we-could matrix" as a substitute inverse.

$$\begin{bmatrix} \frac{1}{3} & 0 & 0 \\ 0 & \frac{1}{2} & 0 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} 3x \\ 2y \\ 0 \end{bmatrix} = \begin{bmatrix} x \\ y \\ 0 \end{bmatrix}$$

Conclusion? We got x back to its original state. We got y back too. On the other hand, z's still zero, but what can we expect? Once you've multiplied by zero, there's no way to go back. Everything projected onto the x-y plane in the original transformation stays there.
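
In NumPy, the same experiment might look like this (a sketch; the variable names are mine):

```python
import numpy as np

B = np.diag([3.0, 2.0, 0.0])    # scale in x and y, project onto the x-y plane

# Invert what we can (the nonzero diagonal entries); leave the zeros alone.
d = np.diag(B).copy()
d[d != 0] = 1.0 / d[d != 0]
B_plus = np.diag(d)

v = np.array([1.0, 1.0, 1.0])
print(B @ v)                    # [3. 2. 0.] -- z has been squashed
print(B_plus @ (B @ v))         # [1. 1. 0.] -- x and y restored; z stays zero
```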

Let's look at everything from another perspective. In the previous post, I showed how the product of a matrix and a vector results in a linear combination of the column vectors of the matrix--in other words, the product is always in the column space of the matrix.

In the case of the matrix that's the subject of this post, we calculated the inverse as best we could, inverting the elements of a diagonal matrix where possible. When we used this substitute inverse, we saw some of the components restored; the ones restored were in the column space of the original matrix--the z values weren't in the original column space, and they weren't restored.

The substitute inverse we calculated is an example of a Moore-Penrose pseudoinverse, and it's often just referred to as a pseudoinverse. It's denoted with a superscripted plus sign, as in the following.

$$A^+$$

A Moore-Penrose pseudoinverse must meet the following criteria:

1. $AA^+A = A$
2. $A^+AA^+ = A^+$
3. $(AA^+)^T = AA^+$
4. $(A^+A)^T = A^+A$

(The T indicates transposes. I have been using apostrophes, but I'm concluding they're hard to see.)
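
Here's a sketch that checks all four criteria for the substitute inverse we built above, and confirms that it matches NumPy's built-in np.linalg.pinv:

```python
import numpy as np

A = np.diag([3.0, 2.0, 0.0])                    # the singular matrix
A_plus = np.diag([1.0 / 3.0, 1.0 / 2.0, 0.0])   # invert-what-we-could

print(np.allclose(A @ A_plus @ A, A))            # 1. A A+ A  = A
print(np.allclose(A_plus @ A @ A_plus, A_plus))  # 2. A+ A A+ = A+
print(np.allclose((A @ A_plus).T, A @ A_plus))   # 3. (A A+)^T = A A+
print(np.allclose((A_plus @ A).T, A_plus @ A))   # 4. (A+ A)^T = A+ A

print(np.allclose(A_plus, np.linalg.pinv(A)))    # agrees with NumPy's pinv
```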

There's a lot to be said about pseudoinverses (see Wikipedia, Mathworld). Here are a few key points.

1. Pseudoinverses may be rectangular (unlike ordinary inverses, which must be square).
2. When a matrix is square and invertible, its inverse and pseudoinverse are equal.
3. If $A^TA$ is invertible, then $A^+ = (A^TA)^{-1}A^T$. (Important fact for least squares fitting!)
4. Given $Ax = b$, the pseudoinverse provides the least squares solution with $x = A^+b$ (see the sketch after this list).
5. The SVD is the most common means of calculating pseudoinverses (next post).
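
To illustrate points 3 and 4, here's a sketch of a least squares line fit; the data points are made up for the example:

```python
import numpy as np

# Overdetermined system: fit y = m*x + c through four points.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.1, 0.9, 2.2, 2.8])

A = np.column_stack([xs, np.ones_like(xs)])   # 4x2 -- rectangular
b = ys

# Point 3: when A^T A is invertible, A+ = (A^T A)^-1 A^T.
A_plus = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(A_plus, np.linalg.pinv(A))) # True

# Point 4: x = A+ b is the least squares solution.
m, c = A_plus @ b
print(m, c)                                   # roughly slope 1, intercept 0
```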

Next: The SVD and the Moore-Penrose Inverse.
