This is post #3 of Linear Algebra for Graphics Geeks. Here are links to posts #1 and #2. The subject at hand is the Singular Value Decomposition.
Determinants & SVD

I remember learning determinants as being some sort of magical numbers that told you when a matrix was invertible. Then one day I stumbled onto a statement such as this:
"The fundamental geometric meaning of a determinant is as the
scale factor for
volume when A is regarded as a
linear transformation." (
Wikipedia)
Another way I like to say it is that the absolute value of the determinant is the volume of a unit cube after it's transformed by A. (In 2D, it's the area of a transformed unit square.)
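You can see this for yourself with a couple of lines of NumPy. This is just a quick sketch; the matrix here is an arbitrary example I picked for illustration:

```python
import numpy as np

# An arbitrary 2D transform: scale plus a little shear.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# The unit square (area 1) maps to a parallelogram whose area is |det(A)|.
print(np.linalg.det(A))  # 6.0 -- the transformed unit square has area 6
```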
Since rotations don't change volume in any way and determinants are scaling factors for volume, we can correctly infer that the absolute value of the determinant of a rotation matrix is 1. More generally, the determinant of any orthogonal matrix is always 1 or -1.
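Both facts are easy to check numerically. Here's a minimal NumPy sketch; the 45-degree rotation and the axis reflection are just arbitrary examples:

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],   # a proper rotation
              [np.sin(theta),  np.cos(theta)]])
F = np.array([[1.0,  0.0],                       # a reflection: orthogonal,
              [0.0, -1.0]])                      # but orientation-flipping

print(np.linalg.det(R))  # 1.0
print(np.linalg.det(F))  # -1.0
```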
Note: I've read a lot of literature that discusses U*S*V' as rotation*scale*rotation. At the same time, I've seen cases where det(U) = -1, which Mathworld refers to as a rotoinversion or improper rotation, so perhaps some authors are a little loose with their definitions of rotation. In any case, it's always true to say that U and V are orthogonal matrices.

One rule of determinants is that the product of determinants is the same as the determinant of a product. In other words:
det(A) = det(U*S*V') = det(U)*det(S)*det(V')
Since U and V are orthogonal matrices with determinants of -1 or +1, the absolute value of the determinant of A must be the same as the absolute value of the determinant of S. Because S is a diagonal matrix, calculating its determinant is trivial.
The determinant of a diagonal matrix is simply the product of the diagonal elements (S(1,1) * S(2,2) * S(3,3) ... S(n,n)). Since we know determinant is "scale factor for volume," this makes perfect sense; e.g., the overall scale in 3D should be the product of the width scale * height scale * depth scale.
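Here's the whole chain checked numerically. This is a minimal sketch; note that NumPy's np.linalg.svd returns the singular values as a 1-D array rather than as the matrix S:

```python
import numpy as np

A = np.random.rand(3, 3)       # any example matrix will do
U, s, Vt = np.linalg.svd(A)    # s holds the diagonal of S as a 1-D array

# det(S) is just the product of the diagonal elements...
det_S = np.prod(s)

# ...and det(A) = det(U) * det(S) * det(V').
print(np.linalg.det(A))
print(np.linalg.det(U) * det_S * np.linalg.det(Vt))

# Since det(U) and det(V') are each +/-1, the absolute values agree:
print(abs(np.linalg.det(A)), det_S)   # singular values are non-negative
```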
One consequence of the SVD factoring A into U*S*V' is that all of the scaling winds up in the S matrix.
In linear algebra, determinants are primarily used to determine whether or not a matrix is invertible. If the determinant of a matrix is zero, the matrix is said to be singular. Trying to invert a singular matrix is the linear algebra equivalent of dividing by zero. Since I've seen people stumble in this area, I'm going to elaborate a bit.
Let's look at a one dimensional case.
When you convert from inches to centimeters, you scale everything up and multiply inches by a scaling factor of 2.54. When you want to go back to inches from centimeters, you multiply by the inverse, 1/2.54. For every length you can express in inches, from 0 to as big a number as you can imagine, there's an equivalent length in centimeters. Likewise, if you convert from feet to inches, you'll scale by 12 to go one direction and by 1/12 to go the other.
As you know, there are many units of length and many scaling factors for converting between them, and in every case, reversing a conversion involves multiplying by the inverse of the scaling factor used to do the original conversion. Everything works fine on one condition: the scaling factor can't be zero. If the scaling factor is zero, the result of the conversion, no matter which number you start with, is always zero: 1*0=0, 2*0=0, 3*0=0, etc.
When you use zero as a scaling factor, you reduce a one dimensional space (length) down to a zero dimensional space (a point: zero). When that happens, there's no way to go back. You can try, but you'll wind up dividing by zero. You've lost a dimension, and there's no way to recover all of the lost information.
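Here's the same dead end in a couple of lines of Python:

```python
scale = 0.0
length = 5.0 * scale         # every length maps to 0.0 -- the information is gone
# original = length / scale # raises ZeroDivisionError: there's no way back
```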
In the case of matrices as transformations, there are multiple ways to lose one or more dimensions. A matrix might transform everything in a 3D space down to a single point, or a matrix might reduce everything in a 5D space down to a 2D plane. Whenever you lose a dimension or more, there's no way to map back, and when that's the case, we refer to the matrix as singular.
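For instance, here's a sketch of a matrix that flattens 3D space onto the z = 0 plane (a made-up example for illustration):

```python
import numpy as np

# P squashes all of 3D space onto the z = 0 plane.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

p = np.array([3.0, 4.0, 5.0])
print(P @ p)                      # [3. 4. 0.] -- the z coordinate is lost for good
print(np.linalg.matrix_rank(P))   # 2 -- a dimension is gone, so P is singular
```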
Let's think about something. We know that the determinant of a singular matrix is zero. We know that the determinant of an orthogonal matrix (U, V') is -1 or +1. We know that the product of determinants is the same as the determinant of a product. This means that if the determinant of A is zero, it has to be due to something going on with S, because det(A) = det(U) * det(S) * det(V'), and det(U) and det(V') are each +/-1.
From another perspective, if matrix A blows away one or more dimensions, it's going to reduce the volume of everything it transforms down to zero. With the SVD, all of the volume scaling is handled by the diagonal matrix S. The overall volume scaling factor (the determinant) is the product of the elements on the diagonal of S. If one of the diagonal elements of S is zero, the determinant of S is zero, and therefore the determinant of A is zero, and therefore A is singular.
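In NumPy terms, reusing the flattening matrix from the sketch above:

```python
import numpy as np

P = np.array([[1.0, 0.0, 0.0],    # flattens 3D onto the z = 0 plane
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

U, s, Vt = np.linalg.svd(P)
print(s)                 # [1. 1. 0.] -- one singular value is zero
print(np.prod(s))        # 0.0 -- the volume scale factor is zero...
print(np.linalg.det(P))  # 0.0 -- ...so det(P) = 0 and P is singular
```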
We're beginning to understand why this is called the Singular Value Decomposition. :)
Next, we look at simple cases involving matrix singularity.