Friday, November 17, 2006

Surface and volume renderings of Left-atrium MRA

We are ready to show our results and have them verified by the cardiologist. Here are some images showing both volume and surface renderings of the left atrium with respect to the entire thoracic region. These are MRA scans in which the patient is injected with a radiocontrast agent such as gadolinium, which causes the blood vessels to appear as bright structures.


Following is a surface rendering of the above image. One can see the surrounding structures and the descending aorta.


The following image is a surface rendering of the left atrium which was posted earlier. The previous post featured the same image, however with a lot of noise arising from the extra triangles produced as a result of choosing an incorrect threshold in the marching cubes algorithm.



This is apparently one of the cardiologists' favorites. The following is a volume rendering of the left atrium, segmented in green and viewed with respect to its neighboring structures.

Monday, November 13, 2006

A second Atrium

Finally I have got my hands on a second atrium MRA. It is quite different in shape and topology from the first atrium. It is quite intriguing how such varying shapes of the same organ can occur in nature. The atrium is much larger than the previous one, and has thicker PV branches. Such topological variations can make it very difficult to model the shapes using statistical models. One might need to resort to incorporating other techniques that can encode a priori information, such as probabilistic atlases.




The MRA data appears to contain a significant amount of noise. I am not entirely certain whether this is actual noise or an artifact of the rendering algorithm. In the near future, I will be looking more closely into this patient's MRA data and attempting to acquire better images of the segmentation.

Friday, November 10, 2006

Left atrial shape changes

I have come across scores of works which investigate ventricular volume changes during the cardiac cycle. This has important applications, and is primarily used to quantitatively assess functional parameters such as wall motion, wall thickness and ejection fraction. Given that the ventricle changes shape, it is not unusual to expect the left atrium to also change its shape. The systole (when the heart contracts) and diastole (when the heart relaxes) are evident in all four chambers of the heart. Little work has been done to investigate the atrial volume, and the only piece of work I could find is this. The authors investigate the changes in left atrial volume in healthy children using 3D echocardiography.

The changes in left atrial volume can have implications for RF ablation surgery. H. Zhong et al. describe the RF surgery well here by comparing it to a painter trying to paint a wall (the left atrial endocardium) with his brush (the catheter). Assume that the painter has made an impression of what he is going to paint beforehand, on a piece of paper. If the wall started moving, it would still be straightforward to paint if the painter's motion and the wall's motion are registered (finding a one-to-one correspondence), along with that of his impression. The hurdle is to solve this correspondence problem and register our 4D model of the atrium with the acquired MRI/CTs. H. Zhong et al. attempt to solve this in their work.

The MRI datasets came in just today, and I will be looking at more atrium segmentation (for training data) in the coming week.

Monday, October 30, 2006

Awaiting more MRI

I get most of the MRI data from patient studies at the National Heart and Lung Institute in London. I work primarily on MRIs obtained after an angiogram. An angiogram is a technique in which a radiocontrast agent such as gadolinium is injected into the blood. This causes the 'contaminated' blood to appear bright white in the MRI. Blood vessels and heart structures such as the left atrium 'light up' due to this contamination.

I had received quite a number of MRI studies from a cardiologist in early March this year. Most of these do not have an angiogram. The ones that do only captured the contaminated blood before it had flowed into the left atrium, leaving the LA invisible.

I will be waiting to acquire more MRI studies, in order to segment a few more left atriums. Segmenting more atriums will help us validate and confirm the correctness of our segmentation, and provide us with enough training data to (possibly) build a statistical model.

Wednesday, October 25, 2006

Some more segmented atriums

Segmenting each atrium takes less than 10 minutes. Here is a segmented atrium showing all four PV drainages.


The points around the left atrium are the local maximum points, and the red background is the EDT map of the original image. With a little training, it is fairly easy to segment the PV drainages using this method. In the final segmentation step, the user picks local maximum points, and the voxels which are part of the basic components centered by these local maximum points are revealed. The user must be trained beforehand on which local maximum points to pick. It is a fairly trivial task to pick local maximum points on the PV drainages, since the local maximum points in the PV drainages lie roughly along a straight line.


Saturday, October 21, 2006

More segmented left-atriums

I have been experimenting to find out how my implementation performs in segmenting the atrium. Here is one of the cases I have been looking at very closely. I especially like this case since the original image from which the left atrium needs to be extracted is quite 'mesh'-ed within the MRI, with the atrium concealed completely inside. Here's the original image:


This is almost the entire thoracic region, segmented from an image which was obtained by subtracting a pre-angiogram MRI from a post-angiogram MRI (in order to remove the bone structures).

The atrium, after computing the local maxima and basic components, surprisingly yielded very few orphans (voxels which don't belong to a basic component). This is, however, not the case with some other segmented images/MRI. I do not quite understand yet what feature of a Euclidean distance transformed image causes these orphans to occur. However, a very preliminary speculation has led me to think that it might be because of the absence of local maximum points in the vicinity of image edges. Usually, when searching for a voxel's basic component close to an edge of the image, the path gets led into a neighborhood of voxels with 0 EDT values. Once it enters such a neighborhood, the search terminates, since the path has exited the EDT-transformed image and it is impossible to search any further. There could be ways of searching further, perhaps by querying the original image itself. I might be looking at these possibilities soon.

Here is a segmented atrium I obtained by running the implementation on the original image above. The good thing about this segmented atrium is that it reveals all the pulmonary vein drainages, in addition to showing how the drainages branch out.



Thursday, October 19, 2006

Breakthrough

After spending an entire week optimizing and debugging code, I am finally able to segment the left atrium semi-automatically. Semi-automatic in the sense that it requires user interaction: the user first needs to crop a region-grown segmented MRI, approximately locating where the atrium lies. This step is not essential to the algorithm; however, it significantly speeds up the EDT (Euclidean distance transform) computation. In the next step, the local maximum points are computed. These turned out to be arranged in a very interesting fashion on the atrium. I will discuss this in further detail in future posts.

After computing the local maxima, the basic component of each voxel in the segmented MRI is computed. A voxel's basic component is its closest reachable local maximum, where the path must be one of increasing EDT values. I have noticed that some voxels don't belong to a basic component. I call these voxels orphans. They usually occur when, during a basic component path search, the search path runs into a neighborhood of voxels with zero EDT values. In that case, there is no single local maximum and hence no basic component. These may cause discontinuities within an atrium or its drainage vessels.

Here is the original image from which the atrium was segmented. One can hardly see the atrium, as it is surrounded by a dense mesh of blood vessels and the aorta.


And here's the segmented atrium:


In the final segmentation step, the user clicks on local maxima which approximately lie inside the atrium. This reveals all voxels which belong to basic components centered by the selected local maxima. This system can't be navigated by someone who hasn't seen an atrium before.

The segmentation technique is quite powerful, and it can prove very useful in extracting structures that are embedded within a dense mesh of blood-vessel-like structures, like the atrium.

Programming note:

void func(A a)
{
  for ( .. )
    p = new someClass(a, b, c);
}

Doing the above doesn't clear up the memory held by the someClass objects on exiting the function func. The reason: only the pointer p goes out of scope; the objects allocated with new stay on the heap. Each new needs a matching delete (or the pointers must be kept so the objects can be freed later).

Wednesday, October 11, 2006

Cutting the blood pool at narrowings

Matthias John's paper discusses how the atrium can be segmented by cutting the blood pool at narrowings. I have been trying to identify the narrowings at which I would ideally like to cut the blood pool. Here are two images which show the left atrium and the narrowings at which the atrium needs to be cut in order to segment it.




After implementing a system that can identify the local maxima of the EDT-transformed image, it has been possible to perform a preliminary segmentation of the left atrium.

There are 220 local maxima, and some of these lie within the vicinity of the left atrium. These local maxima play a crucial role in segmentation. My theory (and possibly M. John's theory) is that voxels which lie within the atrium will be part of basic components centered only by those local maxima.

I have run several test cases to check this and all of them strongly suggest that this is the case.



Monday, October 09, 2006

Left atrium segmentation and local maxima

I have lately been looking at the paper 'Automatic Left Atrium Segmentation by Cutting the Blood Pool at Narrowings' by Matthias John et al. My attempt to implement the method is still under way. The 'apparent' simplicity of their method is the main reason behind its adoption. Although not entirely trivial, their method is far less complicated than other attempts which have been made to segment the right atrium and ventricles.

The trick is to first determine the Euclidean distance transform (simply put: for each voxel v, the EDT value is the distance to the closest unmarked voxel, where an unmarked voxel is one with a scalar value of 0). Here is an EDT color-coded map of the left atrium and surrounding heart structures and vessels. The red regions indicate high EDT, with gradually decreasing EDT (indicated by the other colors) as you move away from the center of the atrium (and other structures).


The image is, however, poorly presented on red planes, which clash with the red used as the high-EDT indicator.

After the EDT is determined, the next step is to locate the local maxima within the image. This is fairly trivial and not greatly processor intensive. For each voxel, a 14- or 26-neighborhood is examined to determine if the voxel has the largest EDT in its neighborhood. This is only occasionally the case, since there is almost always a neighbor with a larger EDT. Using a 26-neighborhood, my system yields around 220 local maxima. If you use a 10x10x10 neighborhood, you only end up with 8 local maxima. The local maxima usually occur at the barycenter of the structures, indicating the deepest points in the structure.

Once the local maxima have been determined, the subsequent step is to assign every voxel to a basic component. A basic component is a subset of the image. I like to think of it as the k-th partition of the image, where the image can be divided into n partitions, each partition being called a basic component. Each voxel ends up in one of the n partitions. The partitions are divided according to the local maxima, with each partition having a local maximum at its barycenter (or simply its center). A voxel belongs to a partition iff, by following its gradient, we end up at that partition's center (a local max). We follow the voxel's steepest gradient (i.e. the neighbor with the largest EDT value).

I have been trying to implement this and visualize how a voxel can be assigned to a basic component partition. The image below shows some local maxima and a path marked by a caterpillar-like object. The path has actually been traced by several spherical points, giving it the caterpillar-like look.


A voxel is looking for its basic component, and this is traced by the (caterpillar-like) path, which is trying to reach the local max right above it (marked in the figure by 'a local max'). However, it gets into a never-ending cyclic loop. It is stuck between two voxels that each have the largest value in the other's neighborhood. This can easily be avoided if we keep track of which voxels we have visited.


Wednesday, September 27, 2006

Determining local maxima and saddle points of an EDT

I am still studying this paper (left atrium segmentation by cutting the blood pool at narrowings) and looking to see if some of the techniques can be adopted and implemented. The blood pool is cut at narrowings, which are saddle points. After studying the atrium for some time now, it seems safe to assume that the atrium is connected to the extra-cardiac structures (such as the aorta) by narrow blood vessels. These are the narrowings we are after. Our segmentation step should be able to identify them. However, saddle points can only be calculated once the local maxima are determined.

Computing saddle points is not trivial. In the mathematical world, saddle points in 3D are points where a function is convex along one direction and concave along another. For a continuous function, this can be determined by computing the Hessian matrix (the matrix of second partial derivatives) and checking whether the eigenvalues of the Hessian have both positive and negative signs. A good summary is given here.

However, in our 3D graphics world, the saddle point can be determined using an iterative algorithm which examines a neighborhood of voxels (26- or 14-neighborhoods). As the paper states, a voxel v is a saddle point if in its neighborhood one can determine two voxels v1, v2 which have scalar values larger than that of v, and these voxels are separated from one another by a ring of voxels with smaller values than v1 and v2.

I was also able to finish designing a Euclidean distance transform (EDT) tool which computes the EDT for each voxel, given a roughly segmented atrium. I am visualizing and analyzing the EDT results to see where the local maxima and saddle points actually occur, and whether they are of any real significance in segmenting the atrium.

Here's an example of a z-slice I extracted from an EDT I computed over a segmented dataset. What you see in the image below is a certain z-slice of the segmented MRI showing the EDT values. I have roughly pointed out where these values occur. Notice how the large EDT values indicate local maxima (major heart structures), while the narrow blood vessels connecting these structures have low EDT values, indicating saddle points.


Some non-PhD stuff: Firefox memory leak and Google analytics

I love Firefox. It allows tabbing and seems quite lightweight compared to IE. But sometimes Firefox can be a pain. On my computer, it sometimes starts taking up a lot of memory for some reason. This is a known memory leak issue, and there is a fix for it. Look here: you can assign a certain amount of allocated cache memory for Firefox.

Another interesting piece of software I came across is Google Analytics. It provides useful visitor information for your website: for example, whether a visitor is new or returning, the total number of visitors on a certain day, and even the geographical location of your visitors. It displays all this information in a nice way (pie charts, line graphs, a world-map display). Try it out. If you are a Google fan, another product to try is Google Calendar. It's like any electronic calendar with a special feature: for certain countries, you can set it so that a reminder is sent to your cell phone via an SMS. These reminders are sent out right before an event in your calendar. You can set when these reminders should be sent, e.g. 15 minutes before an event is about to take place.

Tuesday, September 19, 2006

Euclidean distance transform tool

Recently I have been preoccupied with developing a Euclidean distance transform (EDT) tool. Although ITK provides functions to do this, there is a need to be able to perturb the transformation parameters. A recent MICCAI 2005 paper on left atrium segmentation is my motivation for experimenting with this function. The authors have published an intuitive algorithm to segment the left atrium by cutting the blood pool at narrowings. The narrowings, I am hoping, can be detected after performing the EDT. There is, however, still a great deal of user interactivity required at this stage for segmentation.

Sunday, September 17, 2006

Bounding boxes in manual segmentation

From my experience with manual segmentation, for a left atrium which has 4 PV branches, we need to define a minimum of 4 bounding boxes, with a seed point in each bounding box. The region-growing segmentations at each of these bounding boxes are then merged together to give the final segmentation result.


Above is a rough hand-drawn illustration of the bounding boxes together with the regions they segment.

Manual segmentation results

I was trying to manually segment a left atrium using the segmentation tool I have developed (C++/VTK/FLTK/ITK). The results are rather crude, and the segmentation is not that great. Here is a segmentation I obtained by segmenting parts of the atrium separately and stitching them together (performing a union of segmentations):


Each of the pulmonary trunks (4 in total for this atrium) has been segmented separately. Some of the trunks are over-segmented. It is sometimes very hard to tell where the trunks finish, which leads to drawing a bounding box (for region growing) that is bigger than necessary.

Here's a glimpse of a part of the tool I use:


The segmented result is superimposed on the 3D volumetric view of the MRI scan, which uses movable orthogonal slices. This is useful for locating deficiencies in the segmentation.

Saturday, September 16, 2006

So what does it mean to manually segment?

From most of the works I have read on medical image segmentation, researchers first aim to manually segment training data sets interactively. These segmented data sets are then possibly fed into a machine learning process whereby a model is produced. This model is then capable of recognizing unseen instances of its shape, fitting itself to the new instance. Fitting here means segmentation, since the model has fitted itself to, and thereby segmented, an instance.

This all sounds OK, but the manual interactive segmentation part leaves us thinking: is it really that easy to manually segment all those training sets by hand? The reason we are trying so hard to create ways of segmenting automatically is that manual segmentation is so strenuous and time consuming. Although it depends on the sort of segmentation tool you are using and your experience in segmenting that particular structure, this is definitely not trivial.

I would be wrong to say that manual segmentation is always difficult. For some structures it can be relatively easy, and in some cases quite trivial. For the left atrium, it's a nightmare. The main reason: there are lots of other blood vessels and arteries which run over (for example) the PV drainages of the left atrium. The diagram below illustrates this:

This is a poorly hand-drawn MS Paint illustration. The main point I want to bring out is the red vessels which run over the black-bordered left atrium and its PV drainages. If one were to use a segmentation tool which enabled the user to pick a seed point (the starting point of the region-growing segmentation process) and define a bounding box that limits the region-growing process, it would be impossible to segment out the left atrium in one go. A careful and interactive segmentation is required, possibly taking a great deal of time, depending on experience and the segmentation tool's capabilities.

A technique which I tend to follow is to first start out with the core of the atrium. Once the core is segmented, I move out to the PV drainages, segmenting each drainage one at a time. This also requires me to define bounding boxes for each of these drainages, and sometimes also to change the threshold levels. This is because the blood intensity at the drainages tends to be a lot lower (35-150) than at the core of the atrium (typically 60-150). Getting a properly segmented atrium takes on average 10-15 steps of stitching (union of segmented results) and removing unwanted regions.

After a visit to the cardiologist, I learned that this is exactly what cardiologists do these days. They strenuously join pieces of the left atrium from a 3D rendering of an MRI scan. There seems to be an urgent need for an automatic segmentation tool for the left atrium.


Monday, September 11, 2006

Removing extra-cardiac structures

Segmentation of the left atrium will usually yield connected extra-cardiac structures. These could be parts of the ventricle and other nearby vessels (such as the aorta). Here is an image of what an initial segmentation would produce:


The structure crossed in red is the left atrium. The red-outlined structure is presumably the aorta, passing right in front of the left atrium, which was segmented along with the left atrium in the process of segmenting some of the atrial pulmonary vein drainage endings.

The cardiac structures attached to the left atrium were observed to be attached at points where the blood pool narrows down and opens up into the structure. However, attachments were also observed at points which are not narrowed blood pools. The image below shows an attachment which is narrowed:



And the image below shows an attachment which is not narrowed.



This makes the removal of such post-segmentation artifacts non-trivial, although there is still a possibility of removing most of these by detecting blood-pool narrowings (M. John et al., MICCAI 2005).

Saturday, September 09, 2006

Differing left atrial morphologies

The left atrium, like many organs of the body, comes in different sizes and shapes, exhibiting quite a wide range of differing morphologies. After reading through a medical journal paper by J. Sra et al., I started to realize how complicated left atrial segmentation can get due to its varying morphology. Sra's paper looks at 3D reconstructions of left atriums, and below is an image of three varying morphologies:

The images on top are the 3D renderings of the left atriums, and the images below are the corresponding endocardial views. If we look at the left and right pulmonary veins, we notice that there is a great degree of variation. The left-most atrial image is what I would call the standard atrial configuration. The middle atrial image is what surprised me the most, and something I have never come across. The right atrial image is also unusual, since it has a right middle pulmonary vein protruding out of the right panel.
The paper says very little about segmentation, and reads more like a review of registration techniques and of how both segmentation and registration can benefit the medical community, especially in the treatment of cardiac arrhythmias. Fully automated segmentation of the left atrium is still far from becoming a reality, and most segmentation techniques I have come across so far require a great deal of user interaction.

Monday, September 04, 2006

Segmentation is not straightforward

Today I realized that it's not at all easy to segment the left atrium, or indeed any cardiac structure. I asked my PhD colleague, who is working on tongue segmentation, if it was easy to segment the tongue. From his experience, the tongue is no easier to segment.

I attempted to draw, using MS Paint, an illustration of why it is difficult. The top two reasons are these:

  • The left atrium is connected to other cardiac structures by blood vessels. The segmentation algorithm needs to be able to differentiate between these.
  • Sometimes, blood vessels (not connected to the LA) pass over or under the left atrium. In an attempt to define a region-of-interest box, one might try to avoid a vessel which, let's say, passes on top of an LA vein. This would cause the box to define a region which is not sufficient, cropping off the remaining part of the LA vein which would have emerged in further slices.
Here's my illustration:


There may be other segmentation methods (apart from region growing) which may be worth trying. There are deformable models such as active contours or snakes, which ITK-SNAP uses to segment its MRIs. However, all methods will require a great deal of user interaction, which is why I am doing my PhD.

Friday, September 01, 2006

Left atrium segmentation

It is quite a tedious process to segment the left atrium from an MRI interactively, using a recursive region-growing segmentation algorithm. Small changes in threshold levels cause major changes in the segmented volume. The segmentation is performed within a region of interest (a user-defined cuboid). This ROI is the volume in which the user, by looking at the MRI, expects the left atrium to lie.

I do not yet quite understand whether we can assume that the blood has filled the entire left atrium in the post-angiogram. I suspect, from initial segmentation results, that there could be parts of the left atrium the blood hasn't completely reached. However, this is entirely an image acquisition issue. But a burning question: when is a post-angiogram taken? I can definitely confirm that I have seen post-angiograms where the radiocontrast agent hasn't yet reached the left atrium. But I wonder, why is that the case? Shouldn't post-angiograms be taken once the radiocontrast agent has circulated completely through the patient's blood?

Following is what would be, my dream segmentation of the left atrium:



As you can see, the number of PV drainage branches coming out of this atrium is what makes our lives difficult, and thus gives people like us the opportunity to do a PhD.



Thursday, August 31, 2006

Left/Right atriums and ventricles in MRI

After doing a thorough Google search, I found it difficult to locate an MRI showing the left and right atriums together with the ventricles. I am posting some MRI slices obtained by subtracting pre- from post-angiograms to remove bone structures. What you see in these MRIs is blood travelling through the different vessels and heart structures. 'Post-angiogram' means after the blood has been injected with gadolinium (a radiocontrast agent), and the bright regions in the MRI indicate the blood travelling through the different structures. A pre-angiogram of the thorax region, however, shows hardly anything apart from the bone structures.


Above is an MRI showing the heart in a long-axis view. But beware! What you see is only the blood travelling through the different regions of the heart, with the radiocontrast agent in the blood making it appear bright in the MRI.


Above is a better picture of the left atrium together with the PV drainages. There are actually more drainages to this atrium, and these appear in other slices. The left atrium is situated right next to the right atrium and extends to behind it.


Thursday, August 24, 2006

Some VTK tips and PhD progress

It's been a while since I have written to this blog. I have been fiddling around with VTK. It's quite a steep learning curve when you are teaching VTK to yourself. Here are a few things I have to say about VTK:

1) It's amazing that the visualization library is open source.

2) You will hate VTK from the day you start learning it, till the day you master it. It's the most wonderful thing when you have mastered it.

3) It's a nightmare if you can't debug your VTK programs, so make sure your debugger works with VTK.

4) Make sure you know how to look through the VTK documentation. The documentation has been generated by Doxygen, so make sure you know how to find functions, how to list them alphabetically, etc. It really helps if you know how to quickly search through documentation. Generally, I look through 'VTK Class members' (see here); class members are functions, of course. There is an alphabetical index at the top of this page, so if you are searching for 'foo', click on f and search through the page.

5) The best way to solve your VTK problems is by going through the VTK forums. Their search utility is not the greatest, but you can search intelligently once you know how their search engine behaves. Here's the VTK forum.

6) ... and the bottom line: if you are not an avid C++ programmer, you will be surprised at how small things can cause segmentation faults within a VTK program (make sure you have called your constructors, etc.). Have patience; once you know how VTK behaves, you will enjoy using the library.

7) .. and hey .. don't forget to get a dedicated graphics card for your PC (not one which comes with the motherboard). It really helps with your renderings!

My PhD progress is quite good. These days I am trying to create an interactive segmentation tool using VTK + Imperial College's ITK + FLTK. I had trouble trying to embed VTK windows into FLTK GUIs, but was eventually able to do it. There are third-party libraries out there such as the vtkFltk library (just 2 files), which is the easiest to use.

Apart from this I am able to view the MRI volume using two orthogonal planes (which slice through the volume), here are orthogonal views of a brain MRI:


And here's the hypothalamus segmented by allowing the user to specify a region of interest:


These were segmented using a region-growing approach with a simple upper/lower threshold technique. This requires a great deal of trial and error to choose the correct upper and lower threshold limits.




Friday, May 26, 2006

Finally some segmentation. Flattening 3D volumes to a 1D array

All this time I was thinking my region-growing procedure was running out of stack space. However, after some debugging I discovered that I was doing something wrong in the way I was transforming my 3D volume into a 1D array. Some time ago, when I was taking a course on advanced graphics visualizations, I realized that you could store a 3D volume in a 1D array (and essentially the same goes for a 2D array). So if your 3D volume has dimensions X, Y and Z, a 1D array can be used to store the scalar value at any position (x,y,z). As you would have already imagined, the 1D array runs from index 0 to (X*Y*Z - 1). All you need is a mapping function that maps coordinates (x,y,z) to an index i in your 1D array. In your application, you might also require the inverse of this, i.e. from i to (x,y,z). The equations are quite straightforward:

Mapping function from (x,y,z) to array index i:
index i = (z * X * Y) + (y * X) + x
(where X, Y and Z are the volume dimensions, so that 0 <= x < X, 0 <= y < Y and 0 <= z < Z)
and mapping from index i back to (x,y,z), using integer division:
z = i / (X * Y)
y = (i - z * X * Y) / X
x = i - (z * X * Y) - (y * X)
I was going wrong with these equations. When I finally had them corrected, to ensure they were right I wrote a verification program to test that the equations were invertible (i.e. f(g(i)) = i when f = g^-1).
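To make this concrete, here is a small sketch of the two mappings and the round-trip check (the dimensions and function names here are illustrative, not my actual code):

```cpp
#include <cassert>

// Illustrative volume dimensions (X, Y, Z are assumptions, not the real data).
const int X = 4, Y = 3, Z = 5;

// Forward mapping: i = (z * X * Y) + (y * X) + x
int toIndex(int x, int y, int z) { return z * X * Y + y * X + x; }

// Inverse mapping; relies on integer division truncating the remainder.
void toCoords(int i, int &x, int &y, int &z) {
    z = i / (X * Y);
    y = (i - z * X * Y) / X;
    x = i - z * X * Y - y * X;
}

// The verification idea mentioned above: f(g(i)) = i for every valid index.
bool roundTripOk() {
    for (int i = 0; i < X * Y * Z; ++i) {
        int x, y, z;
        toCoords(i, x, y, z);
        if (toIndex(x, y, z) != i) return false;
    }
    return true;
}
```

The same round-trip loop is essentially what my verification program does.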
Both the recursive and non-recursive region-growing procedures now work fine. I tested them on an artificial brain MRI image. I will be posting more results on the segmentation; here are some preliminary results.
The procedure accepts both local and global thresholds. Local thresholds are imposed on the direct neighbors of a voxel, and these thresholds vary from one voxel neighborhood to another. For example, I am using a threshold whereby every voxel in the region is allowed to include a neighboring voxel into the region if it is within a certain intensity range, +/- 5 for instance.
It now appears that if we try to pinpoint a small vessel and segment it using a small global threshold value, the entire volume gets segmented. However, the segmented volume comes out much "darker" than the original volume in the Maximum Intensity Projection (MIP) rendering.













The image on the left is a MIP rendering of the volume segmented by using a seed point close to the atrium of the heart and threshold values of 60 and 100. The image on the right is a MIP rendering of the volume segmented using the same seed point, but with thresholds of 0 and 1000. The image on the right is essentially the entire volume, technically with nothing segmented.

Saturday, May 20, 2006

Installing ITK at home

I have been trying to get ITK (Imperial College's ITK, NOT the Insight Toolkit) running at home using Visual .NET 2003. Apparently ITK has problems compiling in .NET 2005. The errors which show up are correctable, and it becomes evident that .NET 2005 has improved the cl compiler. There are some errors to do with assigning a const char* to a char*, which .NET 2003 does not complain about, although it should.

Getting down to the important things to remember when installing ITK: one has to make sure that VTK and FLTK are compiled using the same compiler, or else it might throw some errors. FLTK also needs to be compiled with the FLTK_USE_ZLIB, FLTK_USE_JPEG and FLTK_USE_PNG options set to off. This assumes that our system does not have the ZLIB, PNG and JPEG libraries and that we make FLTK build them for us.

It is also important to compile ITK in Release mode. There are different compile modes in Visual .NET 2003 (i.e. Debug, Release, etc.); the IDE usually defaults to Debug. Setting the .NET 2003 compiler to Release mode makes the compilation process a lot quicker. Technically we should be able to compile it in Debug mode too, but I haven't tested it.

If you are going to be using the VTK volume rendering libraries, strangely the vtkVolumeRendering library (the vtkVolumeRendering.lib file in the vtkVolumeRendering folder) is not added as an additional library by default in the Visual .NET 2003 linker. One can easily do this manually by going to Project properties -> Linker -> Input.

One last word: watch out for the errors, ignore the warnings. Some of these errors can actually be fixed if looked into more closely.


Thursday, May 18, 2006

More on Region growing for vessel segmentation

The problem with using a trivial region growing algorithm which uses a one- or two-fold threshold as its growing criterion is that it is bound to "explode" in the case of heart vessels. What usually follows this explosion is a "stack overflow" exception, which results from the large amount of stack space taken up by the recursive region growing process. I have been experimenting with a recursive region-growing algorithm, and it runs out of stack when built with the Visual .NET C++ compiler despite increasing the stack size to several hundred megabytes (using the /STACK option of the linker, or the /F option of the compiler). What actually happens is that the region explodes into the atrium of the heart, where it ultimately runs into a stack exception. However, the algorithm segments correctly without a stack exception if the lower and upper thresholds are set to point to high intensities (scalar values of 80-150).

Grey-value intensities in contrast-enhanced MR angiography are directly proportional to the amount of blood flow in a region. The atrium of the heart presents high intensities, the reason for which is quite obvious. However, the venous drainages also present high voxel intensities in certain regions. I presume this is related to the high blood pressure in these areas due to the narrowing of the vessels towards the drainages. Although we would expect the entire drainage to be of high intensity, this is not the case: close to the endings, we come across low intensities. This variation in intensity poses a challenging problem when using thresholds, i.e. if we were to select a threshold for region-growing segmentation of the venous endings, we would have no choice but to pick a low threshold value (in order to capture the minute vein endings) and a high threshold in order to capture the parts of the vein endings where the blood flow is high. Since the atrium of the heart has a high intensity value, this causes the algorithm to explode into the atrium, causing a stack exception at some point. The stack exception can equally be attributed to the low threshold value, since this threshold causes the algorithm to leave the atrium and flow further into the numerous vessels which branch out of it.

Following is an illustration of a venous drainage as seen through contrast-enhanced MR angiography. This is only a slice through the entire volume captured in the MRI.

After meeting with Prof. Daniel today, he suggested that I further investigate the possibility of increasing the stack space, to see if the recursive algorithm can be made to work. The best possible result would be to get the entire MRI segmented using low-high threshold values. I also mentioned to him how people have been performing brain vessel segmentation using an atlas-based region growing approach, as described in "Region growing segmentation of brain vessels: an atlas-based approach" by N. Passat et al. Brain vessel segmentation seems to be different from heart vessel segmentation; in essence, brain vessels don't branch out of an "atrium"-like structure.

Prof. Daniel suggested that I use a non-recursive approach to region growing, whereby we keep the growing region close to a spherical shape at any instant. A recursive process tends to grow the region out in one direction at a time. Making the region grow out spherically could allow us to develop better cost functions, which could for example utilise the compactness (surface area / volume) of the growing region.
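A minimal sketch of such a non-recursive approach, using a breadth-first queue in place of the call stack (the Volume struct, the flattened layout and the thresholds are illustrative assumptions, not my actual code):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Illustrative volume type: flattened scalars with index = z*X*Y + y*X + x.
struct Volume {
    int X, Y, Z;
    std::vector<int> data;
    int at(int x, int y, int z) const { return data[z * X * Y + y * X + x]; }
};

// Breadth-first region growing: a queue replaces the recursion, so the region
// grows outward in "shells" rather than depth-first along one direction, and
// no stack overflow can occur regardless of region size.
std::vector<bool> growRegion(const Volume &v, int sx, int sy, int sz,
                             int lo, int hi) {
    std::vector<bool> inRegion(v.data.size(), false);
    std::queue<int> frontier;
    int seed = sz * v.X * v.Y + sy * v.X + sx;
    inRegion[seed] = true;
    frontier.push(seed);
    const int dx[] = {1,-1,0,0,0,0}, dy[] = {0,0,1,-1,0,0}, dz[] = {0,0,0,0,1,-1};
    while (!frontier.empty()) {
        int i = frontier.front(); frontier.pop();
        int z = i / (v.X * v.Y);
        int y = (i - z * v.X * v.Y) / v.X;
        int x = i - z * v.X * v.Y - y * v.X;
        for (int n = 0; n < 6; ++n) {  // the 6 face neighbors of the voxel
            int nx = x + dx[n], ny = y + dy[n], nz = z + dz[n];
            if (nx < 0 || nx >= v.X || ny < 0 || ny >= v.Y || nz < 0 || nz >= v.Z)
                continue;
            int j = nz * v.X * v.Y + ny * v.X + nx;
            if (!inRegion[j] && v.at(nx, ny, nz) >= lo && v.at(nx, ny, nz) <= hi) {
                inRegion[j] = true;
                frontier.push(j);
            }
        }
    }
    return inRegion;
}
```

Because the queue processes voxels shell by shell, one could also pause after each shell and evaluate a cost function such as the compactness measure mentioned above.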

The 9-month transfer report is due soon.

Tuesday, May 16, 2006

Experimenting with Region Growing methods

I have slacked off a bit due to Tasmia's arrival in London and my exams. I haven't been working for the past 4 days (which include the 2 days of the weekend). I was able to install Visual Studio 2005 at home, which means I will be able to work on my experiments from home. My Google interview is soon, so I will have to prepare for that as well.
I will be reading "Region growing based techniques for volume visualization" by R. Huang et al. They have used simple statistics for selecting the region-growing criteria. I might get better results than the ones I have below (using simple thresholding).


I have used a recursive algorithm which uses a two-part threshold. The recursion is on the 6 neighbors of a voxel (2 in each direction: 2 in x, 2 in y and 2 in z). The image above is a MIP rendering of the volume. The segmented region is shown in dark green.

Saturday, May 13, 2006

Final exams and preparation week

I will soon be posting on the 3D segmentation I was able to do, to a certain extent, using a recursive region growing algorithm. The criterion for region growing is simple: I am using an upper and a lower threshold. This is known to produce "holes" in the MIP rendering of the segmented region. It became quite clear later on why such local discontinuities occur in the rendering: since we are doing a Maximum Intensity Projection, the maximum intensity function around a neighborhood of the segmented region is not guaranteed to be continuous or approximately continuous.

Using a recursive region growing algorithm with a criterion like the one I am using at the moment (thresholds) is just the beginning. It is popular to use other criteria, such as Fisher's criterion. Most of these really boil down to some sort of statistical measure. I first want to try the simple idea of a voxel joining a region if its intensity is within some n standard deviations of the region's mean.
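As a sketch of that standard-deviation idea (all names here are hypothetical, and this is not tied to any particular library), the region's running mean and variance can be maintained incrementally, here via Welford's method:

```cpp
#include <cassert>
#include <cmath>

// Running statistics of the region grown so far. A voxel is accepted if its
// intensity lies within n standard deviations of the region's current mean.
struct RegionStats {
    double mean = 0.0, m2 = 0.0;  // m2 accumulates squared deviations
    int count = 0;

    // Welford's online update: numerically stable running mean/variance.
    void add(double v) {
        ++count;
        double d = v - mean;
        mean += d / count;
        m2 += d * (v - mean);
    }

    double stddev() const {
        return count > 1 ? std::sqrt(m2 / (count - 1)) : 0.0;
    }

    // The inclusion criterion: |v - mean| <= n * stddev.
    bool accepts(double v, double n) const {
        return std::fabs(v - mean) <= n * stddev();
    }
};
```

The advantage over a fixed threshold pair is that the acceptance band adapts as the region grows, which might help avoid the "explosion" into the atrium.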

Apart from research work, I have been busy preparing for finals for the Machine vision and Advanced graphics and Visualization courses I took this year. Imperial exams are very thought provoking, much like the ones I "reckon" I used to get at U. Toronto.

Thursday, April 27, 2006

Region growing segmentation


Region growing segmentation seems like a relatively easy technique for segmenting the pulmonary venous drainages. I will be trying to implement Seeded Region Growing to start extracting the drainage patterns. However, I anticipate that region growing might not work perfectly with veins: the problem with region growing is that it tends to explode and somehow find its way out of the region we are interested in. To get a gist of how difficult segmentation might get, have a look at the MRI image above. I wish every day that MRI image acquisition was clearer.
In the near future, I will also be looking at other image segmentation techniques used in medical imaging. There is a good piece of literature out there that surveys the different techniques that have popularly been used in segmenting medical images. Here's the paper.

Wednesday, April 19, 2006

VTK Error

I have been stuck for about a couple of weeks trying to get rid of this error
vtkVolumeRayCastMapper Cannot volume render data of type short, only unsigned char
or unsigned short.
that I have been getting when trying to MIP-render a GIPL image. I was finally able to resolve the issue with Prof. Daniel's help. I have looked around the web for a solution, and none of the solutions I found helped. In fact, people working with ITK (the Insight Toolkit) commonly face this problem when trying to render an ITK image.

The solution is simple, if only you know the VTK classes inside out. Anyone could guess what needs to be done here: we simply need to make sure that the scalar values in our VTK data set (vtkStructuredPoints, for example) are non-negative. If you have negative values, you want to shift them into the non-negative range. The way you do this is by first determining the minimum and maximum of your scalar values and then offsetting everything so the minimum becomes 0. So if -5 is the minimum, you set it to zero and add 5 to all the other scalars. Once you have non-negative values, all you have to do is pass the VTK data through a small pipeline. A pipeline is an important concept which I learned about today, and most VTK textbooks explain it well: the output of one filter becomes the input of another, using common VTK functions such as SetInput and GetOutput. Going back to our discussion, once we have non-negative values, we need to cast, and make sure that we can safely cast. If you have shifted your scalars properly, it should be a safe cast.

vtkImageCast *imageCast = vtkImageCast::New();
imageCast->SetInput(vtkImage);
imageCast->SetOutputScalarTypeToUnsignedShort();
....... // somewhere down the line ...
volumeMapper->SetInput(imageCast->GetOutput());

As you can see, a vtkImageCast object casts the scalar values of the vtkStructuredPoints object (vtkImage) to unsigned short, and GetOutput then feeds the result to the volume mapper.
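The shifting step described above can be sketched without VTK; here a plain std::vector stands in for the data set's scalars (names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Shift scalars so the minimum becomes zero, making a subsequent cast to an
// unsigned type safe. E.g. if the minimum is -5, add 5 to every scalar.
void shiftToNonNegative(std::vector<int> &scalars) {
    if (scalars.empty()) return;
    int lo = *std::min_element(scalars.begin(), scalars.end());
    if (lo < 0)
        for (int &s : scalars) s -= lo;  // subtracting a negative adds |lo|
}
```

Only after this shift is the unsigned short cast in the pipeline above lossless for the negative part of the range.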


Saturday, April 15, 2006

Moving homes and other stuff

I have been spending most parts of this entire week either at Glu or moving out to my new apartment. Looking for a place to stay in London is harder than solving an optimization problem with well-defined constraints. My interview at Google is very soon and I am very excited about it. One thing I have noted about Google that is at least different from Microsoft: Google hires a lot of "academically" bright students, and I mean students from the good schools with the good grades. I have been reading a couple of profiles and quite a handful of them even have PhDs. I think Microsoft puts much more emphasis on programming quality and ability than on anything else.

I have been working on my MRA visualization. Somewhere in VTK I am having problems trying to render an ITK image. Apparently some common VTK rendering functions (such as ray casting and MIP) can only render volumes with voxel scalar values of type unsigned short/char. The ITK images which I have are of type signed int. It is not clear to me why the VTK designers would want the format to be unsigned short/char; unsigned makes sense, but why short and not int?

At the same time, I am also working on a coursework which was due ages ago. Here is a rendering of the skull using back-to-front (BTF) compositing. The rendering is very basic, since I have only used a single threshold and no lighting model. I have also limited the colors to black and white, as I wanted to get an X-ray-like image. The original volume is from a CT scan though. So here you go: your CT to X-ray converter.


Friday, April 07, 2006

ITK + VTK Compiled

After a long haul, VTK and ITK finally compiled. The problem was that VTK was not installed properly on my machine. The thing with open source software is that the installers (if any) are not that great. Sometimes, as happened with me, the installer puts in a version of VTK that is not fully compatible with your OS. The best thing is always to download the source and compile it with your own compiler. That's what I did with VTK, when I realized that it was not working with ITK. There were also problems with Project (ITK), and things needed to be updated. I also had to install FLTK in order to install rview; rview apparently doesn't work without FLTK. There are also other little things to be careful about: for example, turn the Advanced options on when CMake-ing, and make sure you have put the GL/gl.h and glut32.lib files under your compiler's lib folders (such as vc7/lib). However, as a piece of advice, the best thing to do is to read the errors carefully when compiling ITK, and to stay calm when more than 100 errors show up (after which VC .NET stops compiling further :(( )


Wednesday, April 05, 2006

A thing called CygWin

CygWin is a blessing for the Windows operating system. Basically it makes your Windows system act like a UNIX OS. It provides some basic UNIX functionality together with programming development tools like GCC, etc. You can run xemacs/emacs on Cygwin as well. So, if you are an emacs person and run Windows on your PC (a rare combo), you should probably get CygWin.

CygWin also allows X11 forwarding through its CygWin/X package. So basically, you could connect to a remote linux/Unix server and get the graphical output on your WinOS. And fundamentally, Cygwin consists of a library that implements the POSIX system call API in terms of Win32 system calls.

I have been trying to set up CygWin/X. Today I got stuck trying to configure X11 forwarding and the display settings, although I had done it some time before. Basically you need to call the startwin.sh script, as described here. After that you can ssh to your remote host from the X window (the window that appears after you run startwin.sh).

Tomorrow I get to meet Prof. Daniel. My ITK build is now completely messed up: I cleaned up the build, and now re-building gives me errors which I never got before. I have written some basic visualization code. I also need to invest some time in getting my Adv. viz. coursework code to work.

Tuesday, April 04, 2006

For most parts of the weekend, I have been spending my time at Glu. It seems like it's been a while since I have taken time off, and Sunday was finally a holiday for me. I am working on my Advanced graphics coursework, which requires me to render a brain CT scan using fundamental rendering algorithms. The coursework looks complicated and I am right now trying to run it on Windows. The ddd debugger is horrible. I was stuck trying to figure out what an error meant, which popped up every time I ran ddd on the executable. To make a program debuggable, you must compile it with gcc's -g switch. Ah well, ddd has a poor array analyzer. I remember back in my MSc days, when I used Visual C++ 98 to examine arrays, and it was so much simpler.

The coursework requires OpenGL, and to my surprise, the download links to SGI's OpenGL do not work! I recently learned that the Windows OS ships with OpenGL 1.1. I have no clue where the exes and dlls are located.

My PhD work this week is currently at a standstill. I am still trying to get my visualization code done. On Monday I translated some of the VTK Python/TCL code to C++ for visualizing using some basic rendering function. I can't remember which rendering I used, but thinking about it right now, what could be simpler than MIP rendering? :) Strangely, at this point I cleaned my ITK build, and it is not compiling anymore. There are scores of errors, and perhaps I need to go to Raj or Prof. Daniel.



Wednesday, March 29, 2006

Stuck with VTK

It's becoming a nightmare trying to compile programs which use VTK+ITK. Today I spent the entire day trying to learn how to use VTK for visualizing a dataset. After our meeting I learnt that ITKImage has functions that can convert a GIPL image to a VTK structured point set (I think that's what they are called). There is also an .exe called "convert" that converts GIPL to VTK format.

The next two days, I will be spending mostly at Glu.

Meeting with Prof. Daniel

It was a short meeting that we had today. We talked about how important it was for me to get the visualization done as quickly as possible. From there on we are planning to segment the images somehow first using some fundamental segmenting algorithms like Region growing. At some point we intend to be able to automatically get data for creating a Statistical shape model for the pulmonary drainage veins.
I also discussed my meeting with Dr. Tim Cootes at Oxford. After showing him the different drainage patterns, Cootes suggested that it would probably be a good idea to use more than just one shape model to describe a single pattern. However, I also explained to him how difficult it was to acquire images for patterns which are rare.
After discussing the rarity of the patterns, Prof. Daniel suggested that it would probably be a good idea if we could create models for the rare patterns with whatever data we have, and then state that the model has some limitations. He also said that we could find out what proportion (e.g. 95%) of the data the model explains (I am not clear on this).

non PhD: How do you export highlighted syntax source files

Someone was asking me how you export a syntax-highlighted source file to a Word document. This is something I had never thought of in all the years I have programmed and created reports. Anyway, there is a solution: one of the editors out there has an export function that enables you to export its highlighted content to various formats such as PDF, RTF (works in Word), XML, etc.
The editor's called SciTE and here's the link to their page.

Tuesday, March 28, 2006

Subtraction works

I was finally able to subtract a post-angio image from a pre-angio image. The images are MRIs showing the pulmonary vessels. Subtraction, although a very trivial procedure, took me some time to figure out since I was new to the IDE.

The subtracted image reveals the blood vessels. Daniel has suggested that I perform a MIP (Maximum Intensity Projection) rendering on the vessels to analyze the structure. MIP can be easily implemented using the VTK library. I am doing some background reading on basic computer graphics concepts such as volume/surface rendering. I have got hold of some papers and a book on the VTK library.
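For intuition, MIP itself is simple: each pixel of the projected image is the maximum scalar along the corresponding ray through the volume. A sketch of a projection along the z axis, assuming a flattened layout index = z*X*Y + y*X + x (this is illustrative only, not the VTK implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Maximum Intensity Projection along z: output pixel (x, y) is the maximum
// scalar encountered along the z-ray through the volume. X, Y, Z are the
// volume dimensions; vol is the flattened scalar array.
std::vector<int> mipAlongZ(const std::vector<int> &vol, int X, int Y, int Z) {
    std::vector<int> image(X * Y, 0);
    for (int z = 0; z < Z; ++z)
        for (int y = 0; y < Y; ++y)
            for (int x = 0; x < X; ++x)
                image[y * X + x] = std::max(image[y * X + x],
                                            vol[z * X * Y + y * X + x]);
    return image;
}
```

VTK performs the same idea via ray casting with a maximize blend, handling arbitrary view directions and interpolation.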

Installing Daniel's ITK

I have been trying since weekend (25th March) to set up Daniel's ITK. It has really been difficult. Thanks to Raj. He helped me a lot to set up ITK on Visual .NET. Sometime I need to sit down and write all the installation procedures.

It's simple: first you need to check out the code (which VIP calls Project) from CVS. I have been using Tortoise CVS on Windows. You then need CMake to generate the build files; this step is important, as CMake produces a build configured for your compiler. Next, it's just a matter of opening up the solution workspace in Visual .NET and building the solution.

What got me the most was how we are supposed to deploy code that we write using ITK. For example, if we write code which uses the ITK library, our file (.cc) needs to sit under project/applications. CMake needs to be run once again, and the solution needs to be re-built. It's a nuisance doing this over and over again, but thankfully the Visual .NET IDE only recompiles the files which recently changed. Look for the option which builds only the project and not the entire solution.

Oxford Uni. for Spring school

I spent a week (Mar 19- 24) at Oxford attending lectures given by some prominent researchers in the field of medical imaging. It was a great learning experience, and I enjoyed my time greatly. For any one interested in the Oxford Spring school, here's the link to their webpage.

The entire conference costs around 500 pounds, and the accommodation is included. We were given accommodation at St. Anne's College. It is a good idea to take your laptop with you; I survived the whole week without a computer.

I met some people there who are also new researchers in the field of medical imaging, notably Arif Asif Qazi at the IT University of Denmark, among others. It was funny that I met a few people from Imperial College for the first time there. Ioannis (neurologist) and Mustafa Anjari (medic) are both working at the Hammersmith Hospital, which is also a part of Imperial College.