Tuesday, October 25, 2005

PhD :: XP programming

Extreme programming (XP) is an agile software development method, which I came across in the undergraduate software engineering course I took at Toronto. It didn't make much sense to me then, and it usually doesn't make much sense to people who haven't programmed in the real world, for the real world. All I remembered of XP was that it required programmers to program in pairs, though that is probably one of the less important characteristics of the method. Just the fact that two programmers had to sit together and program probably amused everyone.

XP is a rapid application development methodology, whereby less time is spent on design. This immediately requires a lot of refactoring (cleaning up code), which is another important characteristic of XP. So when do we pick XP over other methods? XP is best used for developing systems that require relatively little design time. For example, e-commerce systems built using a Java framework would require far less time spent on design, owing to the availability of already-built frameworks (such as JavaBeans, etc.). The developers build on top of the framework, which in a way dictates that the programmers follow a certain design. I say this from my experience at BEA, where portals developed using BEA's WebLogic Workshop followed more-or-less standard design rules set by BEA. These rules were well established and known to work. They could also, of course, be changed by reconfiguring a few XML files.
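
To make "refactoring" concrete, here is a toy sketch in Java of the kind of cleanup XP expects to happen continuously (the class, the names, and the 10% discount rule are all made up for illustration, not taken from any real project):

    // Before: this discount calculation was copy-pasted into both the
    // checkout code and the invoicing code, so the rule had to be
    // changed in two places whenever it changed at all.
    //
    // After: the duplicated logic is extracted into one
    // intention-revealing method that both call sites share.
    class Pricing {
        static double discountedTotal(double price, int quantity) {
            double total = price * quantity;
            return total > 100.0 ? total * 0.9 : total; // 10% bulk discount
        }

        public static void main(String[] args) {
            System.out.println(discountedTotal(30.0, 5)); // prints 135.0
        }
    }

The point is less the code itself than the habit: because XP skips a long up-front design phase, duplication like this appears quickly, and is cleaned up just as quickly.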

Thursday, October 20, 2005

PhD :: Learning how to teach and what to teach

(Written for yesterday)
Today I have been observing how the computing department here at Imperial College introduces programming to first-year computer science students. Haskell is taught to first years here, and it appears that Haskell is quite popular amongst hackers, as discovered at a programming contest for hackers. More details here. Haskell is a functional language, and not many computer science graduates will even have heard of it. I think it is still quite unpopular in the wider programming community, perhaps because other, longer-established functional languages overshadow it. I have noticed that MIT Scheme, a functional language we were taught at Toronto, resembles Haskell to a certain extent, except that Haskell is purely functional, whereas Scheme permits side effects.

No first-year computer scientist gets to cross the bridge without doing Java. At Imperial, Kenya is used to introduce students to Java. Now, Kenya has a different story to tell: it hides from its learners the huge load of boilerplate that, for someone learning Java for the first time, is really a pain. Kenya is a front end to Java, where Kenya code is translated into Java code. So, for example, a new learner can type "void main" in Kenya, and this gets translated internally to "public static void main". They can learn what "public static ... (String[] args)" means later on, and concentrate on the logic required to learn programming.
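
A sketch of the idea (I haven't inspected Kenya's actual generated code, so take the output below as my guess at its general shape rather than Kenya's verbatim translation):

    // A beginner's Kenya program, roughly:
    //
    //     void main() {
    //         println("Hello");
    //     }
    //
    // ...and the ordinary Java it would be translated into. The learner
    // is spared the class wrapper, the access modifier, the static
    // keyword and the argument array until they are ready for them.
    public class Hello {
        public static void main(String[] args) {
            System.out.println("Hello");
        }
    }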

Other UK universities, which are not proponents of Kenya (or not aware of it), have started using a fairly easy-to-use IDE called BlueJ.

This year I am tutoring for Software Engineering Methods and Hardware. Dr. Huth and Dr. Gillies are my supervisors, and it is a pleasure chit-chatting with them. I try to spend informative sessions with them on the course material, and in the process I am trying to teach myself how to teach these courses.

My early-morning meeting with Dr. Ruckert was cancelled; I am meeting him this Friday instead. I have spent most of the day trying to teach myself Haskell.

Tuesday, October 18, 2005

PhD :: Modelling shape variation using PCA

I spent a major part of my time reading and trying to grasp the basic concepts used in principal components analysis (PCA) from Lindsay Smith's tutorial on PCA. It is a very gentle introduction, and it lays out the mathematical concepts in a rather, well, "gentle" way. At first I thought the paper was meant for psychology students, only to find later on a chapter on the use of PCA in machine vision.

A few days earlier, I had learned the true meaning of covariance, which is really the variance measure extended to higher dimensions. Covariance is usually represented in matrix form, and I believe it only becomes useful when applied across higher dimensions (it is the higher dimensions that give it the matrix form). Lindsay's tutorial describes covariance really well.
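
To pin the definition down for myself (this is the standard form, as in Smith's tutorial): for two variables measured over $n$ samples,

    \mathrm{cov}(X, Y) = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})

and for $d$-dimensional data the $d \times d$ covariance matrix $C$ simply collects every pairwise covariance,

    C_{jk} = \mathrm{cov}(X_j, X_k)

so the diagonal entries are the ordinary variances. Setting $d = 1$ collapses the matrix to a single variance, which is why covariance really is variance taken to higher dimensions.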

I later referred back to Cootes' paper on statistical models, which is on my list of literature to review over the next few weeks. I have understood the idea behind capturing shape variation; however, I still need to understand further the mathematical equations which Cootes has laid out. I find it amusing how a shape described by n points in d dimensions can easily be represented by a single vector, and compared using PCA.
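
Concretely (my paraphrase of the construction, with notation along the lines of Cootes'): a 2-D shape with landmarks $(x_1, y_1), \ldots, (x_n, y_n)$ becomes the single $2n$-dimensional vector

    \mathbf{x} = (x_1, \ldots, x_n, y_1, \ldots, y_n)^T

and, in general, $n$ points in $d$ dimensions give a vector in $\mathbb{R}^{nd}$. A training set of shapes is then just a cloud of points in that space, which is exactly the kind of data PCA knows how to summarise.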

The fact that an equation can be obtained for a training set on which PCA has been performed, using just a single vector of parameters, is also beautiful. I have peeked further into the paper, where Cootes describes how the distribution of those parameters can be modelled (from the training set) to produce plausible shapes.
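
As I currently understand it (my summary; the notation roughly follows Cootes'), the model approximates any shape in the training set as

    \mathbf{x} \approx \bar{\mathbf{x}} + \mathbf{P}\,\mathbf{b}

where $\bar{\mathbf{x}}$ is the mean shape, $\mathbf{P}$ holds the first $t$ eigenvectors of the covariance matrix of the training shapes, and $\mathbf{b}$ is the vector of shape parameters. Modelling the distribution of $\mathbf{b}$ over the training set, for instance by constraining each $b_i$ to roughly $\pm 3\sqrt{\lambda_i}$ (where $\lambda_i$ is the matching eigenvalue), is what keeps the generated shapes plausible.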

I have a meeting with Dr. Daniel tomorrow, and will attempt to clarify a few things that I have been wondering about.