Tuesday, October 18, 2005

PhD :: Modelling shape variation using PCA

I spent a large part of my time reading and trying to grasp the basic concepts of Principal Components Analysis (PCA) from Lindsay Smith's tutorial on PCA. It is a very gentle introduction that lays out the mathematical concepts in an unintimidating way. At first I thought the paper was meant for psychology students, only to find a later chapter on the use of PCA in machine vision.

A few days earlier, I had learned the true meaning of covariance, which extends the idea of variance to higher dimensions: it measures how pairs of dimensions vary together. Covariance is usually represented in matrix form, and I believe it only becomes really useful in higher dimensions (it is the higher dimensionality that gives it the matrix form). Lindsay's tutorial describes covariance really well.
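To make this concrete for myself, here is a small sketch (with made-up data) of building a covariance matrix by hand and checking it against NumPy's built-in:

```python
import numpy as np

# Hypothetical data: 5 samples of a 3-dimensional variable (rows = samples).
X = np.array([[ 2.0, 1.0, 0.5],
              [ 4.0, 3.0, 1.0],
              [ 6.0, 2.0, 1.5],
              [ 8.0, 5.0, 2.0],
              [10.0, 4.0, 2.5]])

# Centre the data, then form the covariance matrix: C = Xc^T Xc / (n - 1).
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (X.shape[0] - 1)

# The diagonal holds the variance of each dimension; the off-diagonal
# entries hold the covariances between pairs of dimensions.
print(np.allclose(C, np.cov(X, rowvar=False)))
```

The diagonal entries are the ordinary one-dimensional variances, which is why covariance reduces to variance when there is only one dimension.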

I later referred back to Cootes' paper on statistical shape models, which is on my list of literature reviews for the next few weeks. I have understood the idea behind capturing shape variation; however, I still need to work through the mathematical equations that Cootes has laid out. I find it amusing how a shape described by n points in d dimensions can easily be represented as a single vector, and compared using PCA.
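The flattening trick can be sketched in a few lines. This is my own toy illustration with random "shapes", not Cootes' implementation: each shape of n points in d dimensions becomes one row vector of length nd, and PCA is just an eigen-decomposition of the covariance of those rows.

```python
import numpy as np

# Hypothetical training set: 4 shapes, each with n = 3 landmark points in
# d = 2 dimensions. Flattening each shape gives a vector of length
# n * d = 6, so the whole set becomes a (4, 6) data matrix.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(4, 3, 2))
X = shapes.reshape(4, -1)            # each row is one shape as a vector

# PCA: eigen-decompose the covariance of the flattened shape vectors.
mean = X.mean(axis=0)
Xc = X - mean
C = Xc.T @ Xc / (X.shape[0] - 1)
eigvals, eigvecs = np.linalg.eigh(C)     # returned in ascending order
order = np.argsort(eigvals)[::-1]        # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Each shape can now be compared via its weights in the principal basis.
b = Xc @ eigvecs                     # (4, 6) matrix of mode weights
```

Projecting two shapes into this basis and comparing their weight vectors is what makes the comparison cheap: most of the variation lives in the first few modes.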

It is also beautiful that, for a training set on which PCA has been performed, an equation for the shapes can be obtained using just a single parameter per mode. I have peeked further into the paper, where Cootes describes how the distribution of those parameters can be modelled from the training set to produce plausible shapes.
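My reading of that idea, as a rough sketch: new shapes come from the linear model x = x̄ + P b, and keeping each weight within about three standard deviations of the training distribution keeps the shape plausible. The mean, modes, and eigenvalues below are toy stand-ins for values that would come from a real trained model.

```python
import numpy as np

# Toy stand-ins for a trained model (mean shape, orthonormal modes P,
# and the variance explained by each mode) -- hypothetical values, not
# learned from real data.
rng = np.random.default_rng(1)
mean = rng.normal(size=6)                      # mean shape vector (n*d = 6)
P, _ = np.linalg.qr(rng.normal(size=(6, 2)))   # two orthonormal modes
eigvals = np.array([4.0, 1.0])                 # variance of each mode

# Draw mode weights b and clamp them to +/- 3 standard deviations so the
# generated shape stays inside the plausible region of the training set.
b = rng.normal(scale=np.sqrt(eigvals))
b = np.clip(b, -3 * np.sqrt(eigvals), 3 * np.sqrt(eigvals))

new_shape = mean + P @ b                       # a plausible synthetic shape
```

Varying one entry of b at a time while holding the others at zero is, as I understand it, how the individual modes of variation are usually visualised.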

I have a meeting with Dr. Daniel tomorrow, and will attempt to clarify a few things that I have been wondering about.


1 comment:

Anonymous said...

Are you meeting Daniel Rückert in VIP? *_^