Category Archives: Image Processing

Getting Crabby

One of my favourite domains for image analysis is timelapse imaging. The combination of X, Y (and possibly Z, if you want to get cheeky) and time makes for rich analytical possibilities.

Despite starting a new job in which the time domain is largely absent, I’ve been moonlighting in the evenings doing some outreach work helping with a problem that first came up on the forum.

It’s time to do some tracking, but let’s avoid getting too crabby.


Correcting the record

Unless your imaging facility is in a clean room (and you never touch it), we all, from time to time, end up with unsightly splotches on our transmitted-light images. The best fix is to clean the microscope, but sometimes you just have to do what you can with what you've got.

Thankfully, there's a fairly easy way to correct them post-acquisition. Let's flat-field correct!
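As a rough sketch of the idea (the full recipe is in the post itself; the function name and normalisation choice here are just illustrative), flat-field correction divides the raw image by a reference image of the uneven illumination, scaled so the result keeps roughly the original brightness:

```python
import numpy as np

def flat_field_correct(raw, flat, dark=None):
    """Correct uneven illumination by dividing by a flat-field reference.

    raw  : the acquired image
    flat : an image of an empty (featureless) field under the same illumination
    dark : optional camera-offset image, subtracted from both
    """
    raw = raw.astype(float)
    flat = flat.astype(float)
    if dark is not None:
        raw = raw - dark
        flat = flat - dark
    # Multiply by the flat's mean so the corrected image stays in a
    # similar intensity range to the input.
    return raw * flat.mean() / flat
```

The splotches and vignetting appear in the flat-field reference as well as the raw image, so the division cancels them out.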


In the Bin

A lot of work we do at the CCI uses scanning confocal microscopes, which have the advantage that the operator can pick the number of pixels in X and Y that will make up the final image.

For camera-based systems this is a less simple endeavour, as the pixel array of the CCD chip is fixed. For this reason, we may want to downsample, or bin, our images. In this post we'll cover a bit of theory and the details of how (and why) to bin your images.
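The core operation is simple enough to sketch with NumPy (a sum-binning illustration; the function name and defaults are my own, not from the post):

```python
import numpy as np

def bin_image(img, factor=2):
    """Sum-bin a 2D image by an integer factor in X and Y.

    Each output pixel is the sum of a factor-by-factor block of input
    pixels, which trades resolution for signal per pixel.
    """
    h, w = img.shape
    # Crop so both dimensions are exact multiples of the binning factor.
    h, w = h - h % factor, w - w % factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.sum(axis=(1, 3))
```

Replacing `sum` with `mean` gives average binning instead, which keeps the intensity scale of the original image.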


Tools for Open Access Research

As it's Open Access Week, I've decided to write a post about Open Access in the context of software, file formats and imaging data.


Making it up (Part 1)

Whenever you're testing a new analysis protocol or playing around with some software, it's always handy to have some sample data to mess with. But what if you don't yet have the data, need more of it, or need something more specific? In this post, we're going to delve into the world of synthetic data by making a sample tracking data set.
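As a taste of what "making it up" can look like (an illustrative sketch, not the data set built in the post; all names and parameters here are my own), a random walk is about the simplest synthetic track generator:

```python
import numpy as np

def make_tracks(n_tracks=5, n_steps=50, step_sigma=2.0, field=512, seed=0):
    """Generate random-walk tracks as rows of (track_id, frame, x, y).

    Each track starts at a random position in a field x field area and
    takes Gaussian steps of width step_sigma per frame.
    """
    rng = np.random.default_rng(seed)
    rows = []
    for t in range(n_tracks):
        start = rng.uniform(0, field, size=2)
        steps = rng.normal(0, step_sigma, size=(n_steps, 2))
        xy = start + np.cumsum(steps, axis=0)  # cumulative sum = a random walk
        for f, (x, y) in enumerate(xy):
            rows.append((t, f, x, y))
    return np.array(rows)
```

Feeding a table like this into your tracking or plotting pipeline lets you check the analysis against data whose ground truth you control.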


Divide and conquer

One of the more annoying things about fluorescence imaging is that it's a bit like trying to describe our location in the universe. There's no absolute point of reference, the values are rather arbitrary, and you rely heavily on relative measures (like being 1 AU from our local star).

This post will demonstrate some of the problems with quantifying basic fluorescent images and use a case study to show how ratiometric imaging (among other things) can be used to solve them.
