Taking and processing photomicrographs — part 4: fixing anisotropic illumination.

As I mentioned before, anisotropic illumination occurs when the bulb does not make the background uniformly white (or whatever color the background is supposed to be).  With light microscopes, the classic pattern is a center that is brighter than the edges.  In addition, dust in the light path can cause grey blobs.  For the rest of these posts, I’ll call the image of the tissue you took the “target image.”  After you’ve processed the image, the result becomes the new “target image” for the next step.

The order in which you do things *does* matter, at least sometimes.  I’m going to give you the order in which I do things.

Subtract darkfield image:

The first thing I do is get rid of hot spots, if I took a darkfield image.  It’s simple: I just subtract the dark image from the target image.  Image arithmetic (adding, subtracting, multiplying, and dividing images) is a common task and is included in all sorts of image processing software.  Good open source packages for this include ImageJ and Fiji, GNU Octave with the “image” package, Scilab with the “Image Processing and Computer Vision Toolbox”, and numerous libraries if you want to code it yourself.  These all run on Linux, but I believe they all come in flavors for Windows and Macs as well.

I find ImageJ/Fiji very convenient.  Open the two images, choose “Image Calculator” under the “Process” menu, choose “Subtract”, and you’re done. ImageJ also lets you put commands in macros and scripts so you can process a large number of images automatically.
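If you would rather script the subtraction outside of ImageJ, it’s a one-liner with the vips command-line tools I’ll come back to at the end of this post.  Just a sketch, and the file names here are placeholders:

# subtract the darkfield from the target image; vips promotes the
# pixel format, so negative differences aren't wrapped around
vips subtract target.jpg dark.jpg diff.v

# clip back to ordinary 8-bit integers for viewing
vips cast diff.v corrected.jpg uchar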

But, mostly, I don’t bother with darkfield images.  If you do, you need to do this before you do the more important thing — correcting for the anisotropic illumination in the brightfield image.

Divide by the brightfield image:

The easy solution is to *divide* your regular image by the brightfield image you took.  Again, it’s a simple image arithmetic function.  Instead of choosing “subtract” in ImageJ, choose “divide.”  Easy peasy.

Sort of.  Here is where it matters which image processing package you use.  The reason is that there are assumptions built into how the image is regenerated once you divide it.  Think of it this way.  Remember that each pixel is (usually) defined by three values: red, green, and blue.  There are other conventions (e.g. LUV, HSV, Lab), but we can ignore them for now.  For now, it’s RGB.  Most of the time, these red, green, and blue values are encoded as integers (whole numbers) between 0 (dark) and 255 (bright).  Thus, white is r255,g255,b255, where r is red, g is green, and b is blue.  Here’s a screenshot of a color picker in GIMP showing a bright red with r=174.3, g=30.4, b=30.4 (GIMP computes at higher precision internally, which is why it shows fractional values).

Dealing with integers like this is fine if you are adding or subtracting, but what about dividing?  Let’s say my original pixel is 100,120,110 and my brightfield pixel is 100,100,100.  The result of the division will be 1, 1.2, 1.1, which is very dark.  Worse, since you can’t represent real numbers (e.g. 1.2) as integers, if you did all this in integer arithmetic the result would be 1,1,1.
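You can see the truncation with nothing more than shell arithmetic, which is integer-only:

echo $((120 / 100))   # prints 1, not 1.2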

So, in order to do the division, you really have to:

  1. Get the original two images.
  2. Convert the pixel values from integers to real (floating point) numbers.
  3. Do the division.
  4. Scale everything back to between 0 and 255.
  5. Convert back to integers.
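Here’s a sketch of those five steps using the vips command-line tools I’ll describe later in this post (the file names are placeholders again):

# steps 1 and 2: read both images and cast the pixels to floating point
vips cast target.jpg target.v float
vips cast bright.jpg bright.v float

# step 3: the division itself
vips divide target.v bright.v quotient.v

# steps 4 and 5: "vips scale" finds the minimum and maximum, rescales
# everything to 0-255, and writes 8-bit integers in one pass
vips scale quotient.v corrected.jpg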

There are more ways to do this than you’d think, and more decisions to make than is comfortable.  For instance, do you process the red, green, and blue channels completely separately and then combine them, or do you combine them early and treat them all the same?  Let’s say that the red channel’s values drop from a range of 10–200 down to 1–6 after the division, and the blue channel’s from 20–210 down to 1–5.  Now take a pixel whose values after division are r=1.5, b=2.1.  If I scale each channel to 0–255 independently, that b of 2.1 becomes about 70; if I scale both channels together, using the overall range of 1–6, it becomes about 56.  Plus, what happens when you have a zero or very dark value in the brightfield image?  You can’t divide by zero, so your program has to decide how to handle that.
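One simple workaround for the divide-by-zero problem (my own habit, not the only option) is to add a small offset to the brightfield image before dividing, so that no pixel is exactly zero.  With the same vips tools, that’s one extra line:

# "vips linear" computes out = in * a + b; with a = 1 and b = 1 it
# just adds 1 to every pixel, so no value is exactly zero
vips linear bright.v bright-safe.v 1 1
vips divide target.v bright-safe.v quotient.v

The offset biases the result slightly, but against values up to 255 the error is tiny.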

You can get those dark spots on the brightfield image from dust or specks in the light path.  For some programs they can be a problem.  In that case, you will have to manually edit the brightfield image to remove them.  The easiest way I’ve found to do this is to use the “clone” function in a graphics program and cover them up with a patch of a nearby clean space that has similar illumination.  It’s a standard tool in commercial programs like Photoshop.  Again, there are multiple free tools for Linux.  I use GIMP, the GNU Image Manipulation Program.

For instance, ImageJ (or its relative, Fiji) does image division by first breaking the image into separate colors, doing the division, and recombining.  Here’s an image I took of a tumor:

Here’s the brightfield image:

Note all the dust in the light path. Yuck.

Here’s the result of the division:

The problem is that nearly all of the dynamic range is eaten up by the result of dividing by a very small number: at a dust speck where the brightfield value is, say, 2, the quotient is enormous compared to the rest of the image, and once everything is rescaled to 0–255, everything else is squashed down toward black.  Note the bright spots in the lower left and left margin (arrows):

Those dots are nice and bright, but everything else is scaled down.  If you’re tied to ImageJ, you can deal with this in part by manually editing out the dark spots and then adjusting the white balance.

Here’s the brightfield image with the darkest spots (though not all of the spots) manually removed using a clone tool in GIMP:

Now here’s the result of the division:

It’s much flatter and doesn’t have the white spikes.  The color balance is still off; I’ll discuss that later, but here it is fixed a little.  Note that it’s still a bit blue.  It turns out that most pathologists like it that way, probably because we all use blue filters:

Or… you can use a different piece of software.

There is a very nice library of image processing tools called the “vips” library (https://www.libvips.org/).  Learning to use vips is a bit more of a challenge because it doesn’t have a GUI.  On Linux, you use it through the shell.  In Windows, I assume there is some terminal-based way to do it, though I don’t use Windows so I don’t know.

Here’s a shell script for doing the division.  Most of the code is involved in casting the images from their native format (jpeg) into the format used by the library (.v) and back:

#!/bin/sh

# bring in the names of the files, two input and one output
first=$1
second=$2
out=$3

# change the input images into .v format, cast to floating point
vips cast "$first" first.v float
vips cast "$second" second.v float

# do the division
vips divide first.v second.v out.v

# make the output image a .tif file
# it turns out that vips does better moving from
# .v to a floating point tif file and *then* to
# integer jpg format
vips cast out.v out.tif float

# then jpg
convert out.tif "$out.jpg"

# delete all the intermediate files
rm first.v second.v out.v out.tif
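To use it, save the script as something like divide.sh (the name is arbitrary) and run it with the target image, the brightfield image, and an output name; note that the last step needs ImageMagick’s convert:

sh divide.sh target.jpg bright.jpg corrected

which leaves the corrected image in corrected.jpg.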

Here’s the output from this division, using the very same brightfield image:

And here it is with a simple histogram stretch, using the “Curves” tool in GIMP:

The white balance isn’t perfect, but everything has a much more uniform background, and sprucing it up a little is trivial.  It’s a lot faster than using ImageJ.

I’ve also gotten some good results with OpenCV’s image division, though I can’t run it at the moment because of a Fedora configuration issue I don’t want to deal with right now.
