Linear Algebra for Graphics Geeks (SVD-I)
I've been meaning to write something about the Singular Value Decomposition, but I compose and compose, get too picky, leave it in my drafts folder, come back to it, don't have time. I'm doing what we all must do, if we're to get anything done: I'm going to lower my standards and just do it.
If you've done enough graphics programming, you're familiar with the use of matrices to transform vectors. Once you've got it down and learned to think in such terms, there's a huge side benefit in that you're well prepped for learning more advanced linear algebra. In fact, if you're in college and you can choose the order of the classes, I recommend taking the basic computer graphics course before linear algebra.
I believe learning matrix transformations in 2D and 3D is ideal, because it's easy to visualize and conceptualize the various operations and transformations, and almost everything you learn in the process extends to higher dimensions.
This post is written with the graphics programmer in mind. What I want to cover is bigger than a single post, so I'm going to [hopefully] post pieces as I have time. Perhaps later I can collect them into a more meaningful whole.
The subject I'm going to talk about is the Singular Value Decomposition, or the SVD. It often isn't covered in undergraduate linear algebra courses, even though it's one of the most important matrix decompositions you can learn (according to MIT's Gilbert Strang, it's "absolutely the high point of linear algebra").
The SVD breaks every matrix down into the product of three simpler matrices. These three simpler matrices (U, S and V) possess many useful and desirable properties. (Note the S matrix is often also called "Sigma," but it's a pain to enter sigmas in this editor, so I'll use S most of the time.)
The Singular Value Decomposition is usually written in the following form:

A = U S V^T

(or, with the more common notation, A = U Sigma V^T.)
Plainly speaking, a matrix A is factored into the product of a matrix U, a matrix S (or Sigma) and the transpose of a matrix V.
Every matrix can be factored via the SVD.
What makes the SVD extremely useful and interesting is that the matrices U, S and V have many wonderful properties, including being simple and easy to work with.
In graphics speak, U and V are rotation matrices (orthogonal matrices) and S is a scale matrix. Let's repeat that.
The SVD decomposes every matrix into rotation * scale * rotation.
This is cool all by itself, but there are also profound implications and consequences of this fact. Next time, we'll start looking into them.
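You can check the "rotation * scale * rotation" claim numerically. Here's a small sketch using Python and NumPy (the original examples use Scilab; this is just an analogue): it factors an arbitrary matrix and verifies that U and V are orthogonal and S is a non-negative diagonal scale. One caveat worth keeping in mind: orthogonal matrices include reflections as well as pure rotations, so "rotation" here is shorthand for "orthogonal."

```python
import numpy as np

# Take an arbitrary 2x2 matrix and factor it with the SVD.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
U, s, Vt = np.linalg.svd(A)

# U and V are orthogonal ("rotation-like"): transpose equals inverse.
assert np.allclose(U.T @ U, np.eye(2))
assert np.allclose(Vt @ Vt.T, np.eye(2))

# The singular values are non-negative; diag(s) is a pure axis-aligned scale.
assert np.all(s >= 0)

# Multiplying the three factors back together recovers A.
assert np.allclose(U @ np.diag(s) @ Vt, A)
```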
I like to play with examples whenever possible. Perhaps you do too. If so, you'll find that I put a Scilab session up here.
The session contains the following examples:

(1) creation of a function that returns a 2x2 rotation matrix,
(2) creation of a function that returns a 2x2 scale matrix,
(3) examples of calling these functions,
(4) calculation of the SVD of the product of a rotation matrix and a scale matrix,
(5) calculation of the product of U*S*V' and comparison with the original matrix.
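If you'd rather not install Scilab, the steps above translate almost directly to Python with NumPy. This is a rough analogue of the session, not the original code; the function names `rot2` and `scale2` are my own:

```python
import numpy as np

def rot2(theta):
    """2x2 rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

def scale2(sx, sy):
    """2x2 axis-aligned scale matrix."""
    return np.array([[sx, 0.0],
                     [0.0, sy]])

# Build a matrix as rotation * scale, then take its SVD.
A = rot2(0.3) @ scale2(2.0, 0.5)
U, s, Vt = np.linalg.svd(A)

# The singular values recover the scale factors we put in.
print(s)  # [2.  0.5]

# Reconstruct U*S*V' and compare with the original matrix.
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True
```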
Scilab is an excellent free alternative to Matlab. You can get it here.
Here's a link to the next post.