If we look at image processing from a mathematical perspective, all digital images are just large arrays of numbers. Across different fields of study, image processing applications (although initially developed for very specific needs) often rely on similar routines based on common algorithms. Why am I writing all this?
Because this spring I was working on my “Galaxy Photometry” project with SAO.
The main goal of this project was to master astronomical image processing techniques. Students were given a sample of multi-band raw images of galaxies (M51, M63, NGC 4258, and NGC 4725) from the McDonald Observatory and asked to reduce the raw images first, and then combine them into 3-colour images of the selected galaxies using the software of their choice.
That was my problem – I needed imaging software for astrophotography. Although I can call myself an expert in Adobe Photoshop, I am still an amateur astrophotographer, and for this project I had only 6 weeks to select the right software and master its use. And here came an insight! Why not use image processing software I already know how to use? Why not use MIPAV? I did some additional research and found interesting papers by Jennifer L. West and Ian D. Cameron (2006) and by Borkin et al. (2006), in which the authors explored the capabilities of ImageJ and other medical imaging software (including the 3D Slicer and OsiriX medical imaging tools) in astronomy. That strengthened my point.
MIPAV (which stands for Medical Image Processing, Analysis, and Visualization) is an open-source, Java-based, flexible image-processing package. While it was intended for medical images, it also supports various non-medical imaging formats, including raw and FITS (it also reads FITS headers). MIPAV also has many tools for creating 3D images from 2D slices and for 3D image processing and visualization.
I found MIPAV's algorithms more mathematically transparent than those used in commercial packages such as Adobe Photoshop or MaxIm DL. I consider this transparency a big advantage, because I like to know exactly what the software is doing at each moment and why.
The image processing routine included cropping, scaling, alignment, noise reduction, bad pixel elimination, and, finally, creating a single multi-colour image for each galaxy. My full image processing diary is published on the MIPAV wiki.
MIPAV has built-in image math functionality that allows adding, subtracting, multiplying, and dividing images, and even has bulk functions with which one can manipulate multiple images at once. I took full advantage of this when cropping overscan regions and averaging bias frames.
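The bias and overscan steps can be sketched in NumPy. The array shapes, noise levels, and the 8-pixel overscan width below are made up for illustration; MIPAV performs the equivalent operations through its image math menus:

```python
import numpy as np

# Hypothetical stand-ins for raw frames; in practice these come from FITS files.
rng = np.random.default_rng(0)
bias_frames = [rng.normal(100, 5, size=(64, 80)) for _ in range(10)]
science = rng.normal(500, 20, size=(64, 80))

# Average the bias frames into a master bias (MIPAV's bulk image math
# can apply the same operation across a whole set of open images).
master_bias = np.mean(bias_frames, axis=0)

# Subtract the master bias, then crop away a hypothetical 8-pixel-wide
# overscan strip on the right edge of the detector.
calibrated = (science - master_bias)[:, :-8]
print(calibrated.shape)  # (64, 72)
```

Averaging many bias frames beats using a single one because the read noise shrinks with the square root of the number of frames.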
MIPAV offers several tools (called algorithms) to co-align and register images. I used the simplest one – the Landmark Least Squares algorithm – to 1) co-align the science frames for each filter (R, B, and V) and 2) co-align the calibrated science images from the different filters. The algorithm registers an image to a reference image using at least 3 corresponding points in both images. This is similar to the star alignment technique used in MaxIm DL, but one doesn't need to use stars – any 3 noticeable points (including galaxy centres and/or dust specks) will do.
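The core of a landmark least-squares registration is a small linear fit. Here is a minimal sketch, assuming an affine transform and three made-up landmark pairs (I am not reproducing MIPAV's exact implementation, only the idea):

```python
import numpy as np

# Three (or more) corresponding landmarks picked in the moving image and
# in the reference image -- star centroids, the galaxy nucleus, dust
# specks, etc. These coordinates are invented for illustration.
moving = np.array([[10.0, 12.0], [40.0, 15.0], [22.0, 48.0]])
reference = np.array([[12.5, 14.0], [42.5, 17.5], [24.0, 50.5]])

# Design matrix for an affine fit: one [x, y, 1] row per landmark.
A = np.hstack([moving, np.ones((len(moving), 1))])

# Least-squares solve for the 3x2 affine parameters mapping moving -> reference.
params, *_ = np.linalg.lstsq(A, reference, rcond=None)

# With exactly 3 non-collinear points the affine fit is exact,
# so the residuals at the landmarks are essentially zero.
residual = A @ params - reference
print(np.abs(residual).max())
```

With more than three landmarks the system is overdetermined and the same `lstsq` call minimizes the total squared misalignment, which is why extra points make the registration more robust.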
MIPAV offers a variety of noise reduction filters, including Gaussian, Median, Mean, and Unsharp Mask, to name a few. It also provides tools for extracting fine detail. I chose to apply two MIPAV filters – Unsharp Mask first and Nonlinear Noise Reduction after – to clean up the images. The latter algorithm uses nonlinear filtering to reduce noise in an image while preserving the underlying structure as well as edges and corners. Of course, one needs to play with the parameters to achieve the best results.
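The unsharp-mask idea itself is simple: subtract a blurred copy of the image and add the scaled difference back. A minimal sketch with SciPy, using a synthetic point source and made-up `sigma` and `amount` values (the same kind of parameters one tunes in MIPAV):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# A synthetic "image": a single sharp point source, softened slightly.
img = np.zeros((51, 51))
img[25, 25] = 100.0
img = gaussian_filter(img, sigma=1.0)

# Unsharp mask: subtract a blurred copy to isolate fine structure,
# then add the scaled difference back. `amount` controls the strength.
blurred = gaussian_filter(img, sigma=3.0)
amount = 1.5
sharpened = img + amount * (img - blurred)

# The point source gains contrast relative to its surroundings.
print(sharpened[25, 25] > img[25, 25])
```

Too large an `amount` amplifies noise along with detail, which is one reason to follow sharpening with a noise reduction pass, as in my routine above.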
To reveal fine detail I used the Coherence-Enhancing Diffusion filter, which has proved useful in medical imaging for filtering relatively thin, linear structures such as blood vessels, elongated cells, and muscle fibres. I found it equally good for revealing spiral arms.
Basic brightness/contrast and min/max level adjustments are available in MIPAV, including the ability to make separate colour adjustments (R, V, and B).
A whole set of lookup tables (LUTs) is designed specifically to customize how images are displayed without changing the actual pixel values. This functionality is analogous to stretching in MaxIm DL, but more mathematically transparent.
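The key point – that a display stretch remaps screen intensities while leaving the stored data untouched – can be sketched in NumPy. The toy image and the 1st/99th percentile clip points below are my own illustrative choices, not MIPAV defaults:

```python
import numpy as np

# A toy image with a narrow dynamic range of interest.
rng = np.random.default_rng(1)
img = rng.normal(1000.0, 50.0, size=(32, 32))

# A display LUT maps stored pixel values to screen intensities; the
# underlying data never change. Here: a linear min/max stretch clipped
# at the 1st/99th percentiles, analogous to stretching in MaxIm DL.
lo, hi = np.percentile(img, [1, 99])
display = np.clip((img - lo) / (hi - lo), 0.0, 1.0)

print(display.min(), display.max())  # stretched copy spans the display range
print(img.mean())                    # original pixel values are untouched
```

Because the stretch is applied only at display time, the photometric information in the pixels stays intact – exactly what one wants when the same frames are also used for measurement.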
I found it very helpful that MIPAV keeps a history of all the actions (filters and transformations) performed on an image, including the specific parameters that were set for each action. Every time an image is modified and saved in MIPAV (regardless of format), the history of actions and their parameters appears in the corresponding .xmp file (XML). That makes keeping an image processing diary even easier.
I was able to record a script and automate steps 1 to 6 of my image processing routine, which saved me a lot of time. There is also another option – one can write a custom plug-in that works as a small application. Since MIPAV is open source, there are already many plug-ins available.
A more detailed account of my image processing actions can be found on the MIPAV wiki.
I did not use MIPAV for colour stacking – for some reason it did not work very well. Instead, I saved the monochromatic R, B, and V images as FITS files, used FITS Liberator to convert them to 16-bit TIFF files, and then transferred the TIFFs to Adobe Photoshop CS6 to create the final “pretty picture”. I did not have to use any of Photoshop's capabilities to improve the image, because it had already been processed; I only used Photoshop for colour stacking.
Both astronomy and medicine often combine multi-dimensional and multi-modal imaging data into a single image. In medicine these are MRI, DTI, X-ray, and CT scans; in astronomy these are gamma-ray, X-ray, optical, IR, and radio data. Tools from one field can be adapted for use in the other – we just need to look around.
Another possibility is to use medical imaging software for the visualization of 3D multi-modal astronomical data (Borkin et al. 2006; Covington, McCreedy et al. 2010). 3D visualization tools are already well developed in many medical imaging applications: biologists need to study cells in 3D and generate 3D confocal microscopy data sets, virologists need to generate 3D reconstructions of viruses, radiologists need to study and quantify tumours in 3D from MRI and CT scans, neuroscientists detect regional metabolic brain activity from PET and functional MRI scans, and so on. All these tasks require using and manipulating 2D and 3D images in various modalities. I strongly believe that many of these applications can be adapted for astronomy, because all digital images are just large arrays of numbers.
- Jennifer L. West & Ian D. Cameron (2006). Using the medical image processing package, ImageJ, for astronomy. J. Roy. Astron. Soc. Canada, 100, 242-248. arXiv: astro-ph/0611686v1
- Gonzalez, R. C. & Woods, R. E. (2002). Digital Image Processing, 2nd ed. Prentice Hall, Upper Saddle River, New Jersey.
- Imaging tools in astronomy and medicine (2008)
- Michelle Borkin, Alyssa Goodman, Michael Halle, & Douglas A. (2006). Application of Medical Imaging Software to 3D Visualization of Astronomical Data. http://arxiv.org/abs/astro-ph/0611400
- Covington, K., McCreedy, E. S., Chen, M., Carass, A., Aucoin, N., & Landman, B. A. (2010). Interfaces and Integration of Medical Image Analysis Frameworks: Challenges and Opportunities. Annual ORNL Biomedical Science and Engineering Center Conference, 2010, 1-4. PMID: 21151892
- Visualization in MIPAV, by Alexandra Bokinsky, PhD and Ruida Cheng