In the toolbox of the image analyst, being able to correlate objects in time is a very useful skill. It opens the door to examining dynamic changes in a system, be they changes in intensity, shape, spatial localisation or just about anything else. In this post, we’ll cover the basic theory of object tracking and show you how to track with open source tools.
The great thing about tracking is that the underlying principles are the same whatever the biological (or not) system you’re studying. This means that tracking algae moving in a petri dish, bacteria swimming in 3 dimensions or cells migrating along a substrate are all approached in a similar way. It’s all about features and links.
We’ll start with the theory and then show you how it’s done in the slightly more real world.
The Main Feature
The first step in any tracking adventure is identification of features. A feature is any object that can be identified by its intensity above the local background. It’s important to remember that even things smaller than the resolution limit of the microscope (thus acting as an infinitely small point emitter) will project a much larger emission profile. As an example, below is a single quantum dot.
Despite the quantum dot being about 10x smaller than the resolution limit of the microscope, it appears on our camera with a diameter of about 12 pixels (at this magnification ~770nm).
The neat thing about this is that with enough pixels we can use curve fitting to calculate a sub-pixel position. Here’s a line scan of the image above.
Each one of the 24 pixels has an associated intensity (note that the data points all fall on the vertical grid lines). If we fit a curve to the pixel values, we can find the position of the peak without being limited to whole pixels. In this case it’s about 11.63.
The example above is only using one spatial dimension but of course the same idea can be applied to a second spatial dimension to provide a sub-pixel position in both X and Y.
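The curve fitting itself is straightforward. Below is a minimal sketch in Python using SciPy, with a made-up, noiseless 24-pixel line scan (the 11.63 centre is taken from the example above); a real localisation routine would fit a 2D Gaussian to noisy data, but the principle is the same:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, centre, sigma, offset):
    """1D Gaussian peak sitting on a constant background."""
    return amplitude * np.exp(-((x - centre) ** 2) / (2 * sigma ** 2)) + offset

# Hypothetical 24-pixel line scan across a diffraction-limited spot
x = np.arange(24)
intensities = gaussian(x, amplitude=100.0, centre=11.63, sigma=2.5, offset=10.0)

# Fit the pixel values; initial guesses taken from the data itself
p0 = [intensities.max() - intensities.min(),   # amplitude
      x[np.argmax(intensities)],               # centre (nearest whole pixel)
      2.0,                                     # width
      intensities.min()]                       # background
params, _ = curve_fit(gaussian, x, intensities, p0=p0)
print(f"Fitted sub-pixel centre: {params[1]:.2f}")
```

Even though the brightest pixel is at position 12, the fit recovers the peak at 11.63, well below the pixel sampling limit.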
About Time
If you’re interested in tracking then of course you’re interested in a series of images in time. Using the principles above, you can find features in every frame. What remains is to link feature A in frame 1 to feature A in frame 2. Worry not, there are some great implementations out there so you don’t have to re-invent the wheel.

Most implementations pose linking as an assignment problem: a cost matrix of candidate links between frames (commonly based on distance) is built, then reduced until the most efficient global matching solution is found.
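The core of this idea can be sketched in a few lines with SciPy’s Hungarian solver, using made-up coordinates. This is only an illustration; real trackers (such as TrackMate’s LAP framework) also add costs for features appearing, disappearing, merging and splitting:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Hypothetical sub-pixel (x, y) feature positions in two consecutive frames
frame1 = np.array([[10.2, 5.1], [30.7, 40.3], [55.0, 12.8]])
frame2 = np.array([[31.1, 41.0], [11.0, 5.5], [54.2, 13.5]])

# Cost matrix: squared distance between every frame-1/frame-2 pair
cost = cdist(frame1, frame2, metric="sqeuclidean")

# Reduce the matrix: find the assignment that minimises the total cost
rows, cols = linear_sum_assignment(cost)
for i, j in zip(rows, cols):
    print(f"feature {i} in frame 1 -> feature {j} in frame 2")
```

Note that the solver minimises the *total* cost over all links at once, so a single ambiguous feature can’t steal its neighbour’s best match.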
When we’re done, what you will end up with is a trajectory map. Usually this is provided in the form of a list of XYZ positions of each feature in each trajectory by timepoint.

Here is the transition between track 1 and track 2. Track 1 finishes at timepoint 37.
Some software calculates extra parameters, but almost everything you need can be derived from the coordinates and a bit of maths.
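To give a flavour of that “bit of maths”, here is a sketch that computes instantaneous speed and a simple straightness measure from a made-up trajectory table of the kind described above (one row per timepoint, with t, x and y columns; units are whatever your calibration says):

```python
import numpy as np

# Hypothetical trajectory: one row per timepoint, columns t, x, y
track = np.array([
    [0, 1.0, 2.0],
    [1, 1.6, 2.8],
    [2, 2.2, 3.6],
    [3, 2.8, 4.4],
])

t, xy = track[:, 0], track[:, 1:]

# Frame-to-frame displacement vectors and instantaneous speeds
steps = np.diff(xy, axis=0)
step_lengths = np.linalg.norm(steps, axis=1)
speeds = step_lengths / np.diff(t)

# Net displacement over total path length: 1 means dead straight
net = np.linalg.norm(xy[-1] - xy[0])
straightness = net / step_lengths.sum()

print(f"mean speed: {speeds.mean():.2f} per frame, straightness: {straightness:.2f}")
```

Mean-squared displacement, turning angles and the like all fall out of the same coordinate table in much the same way.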
Not asking the impossible
There are (perhaps) surprisingly few parameters that you need to provide to most tracking software. The two most common are Feature size (roughly how large are the features) and Inter-frame distance (in the course of one frame, how far do the objects move).
Possibly the most important thing to remember is that the distance the particles move frame-to-frame should always be smaller than the distance between particles within a frame. If this is not the case, you’ll get linkage errors.
You can’t change the movement of the particles but what you can do is increase the acquisition frame rate (reducing the distance particles move frame-to-frame) or increase magnification (effectively increasing the number of pixels between objects).
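If you want to sanity-check this before committing to a long acquisition, a quick calculation on a single segmented frame will do it. The positions and step estimate below are made up for illustration:

```python
import numpy as np
from scipy.spatial.distance import cdist

# Hypothetical feature positions in one frame (pixels)
positions = np.array([[12.0, 8.0], [45.0, 50.0], [80.0, 15.0], [20.0, 70.0]])

# Nearest-neighbour distance for each feature (mask the zero self-distances)
d = cdist(positions, positions)
np.fill_diagonal(d, np.inf)
nn = d.min(axis=1)

typical_step = 6.0  # estimated frame-to-frame movement, pixels
print(f"minimum spacing: {nn.min():.1f} px, typical step: {typical_step} px")
# Linking is comfortable when the step is well below the minimum spacing
print("OK" if typical_step < nn.min() / 2 else "risk of linkage errors")
```

If the typical step approaches the minimum spacing, it’s time to speed up the camera or crank the magnification.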
Getting on Track
There are lots of options out there, including paid, open and closed source solutions. We’re going to be using TrackMate, which comes bundled with Fiji. The documentation is really good so I’m not going to go through the details, just cover some of the main points.
Features
The first window asks for the calibration parameters; if you’ve already set these for the image, they will come pre-populated.
What makes this plugin nice and user-friendly is that you can move forward and backward through the analysis steps using the buttons. Hit Next to go onto the next step.
Next comes feature selection. Select an estimated feature diameter and hit Preview to get a rough idea. Apart from a huge number of background features, the size looks about right.
Now we need to set the threshold so that only bona fide features are included. Play with this until you no longer have background spots. Here, I’ve selected 5, but YMMV. Once you’re happy, go ahead and hit Next.
Once the processing is done, you will have the opportunity to filter the results on a whole bunch of parameters (location, intensity, the ever-mysterious ‘quality’). In this example, I’ve filtered on quality to remove a couple of aberrant spots that made it through the first round of thresholding.
Again, it’s worth checking a few timepoints to make sure everything is OK. Hit next to move onto tracking. The “Simple LAP tracker” will do everything we need here, but for more complex tracking (including merge and split events) you will need to use the regular LAP tracker or one of the other algorithms. Next!
At the tracking step we now need to provide the last of the values: the Maximum linking distance and the gap-closing options. The latter is useful if you have noisy data, weak features or spots that are likely to move in and out of focus. Gap closing looks ahead to see if a feature that vanishes reappears in a later, non-subsequent frame.
In this case, we’re going to turn off gap closing by setting the Max Frame Gap to zero.
Hit Next and get the tracks. Again you have the option to filter the tracks based on loads of different parameters. A useful one here is to set a minimum duration; used well, this can remove junk and some linkage errors.

Great! Now we have features and have linked them together. The last three tabs are really about what you want to do with the data. I’m just going to highlight the options I use:
1) Display Options
To do any sort of complex analysis you will need to get at the numbers (remember the XYZ position by timepoint table above). If you hit the ‘Analysis’ button you’ll be presented with three tables. One with Spot data, one with Linkage data and one with Track data. This post is already a monster so we’ll deal with data analysis in a subsequent post.
2) Plot Features
In the second tab, you can pick from the spots, links and tracks and plot a number of parameters.
3) Misc Actions
The third tab contains lots of miscellaneous options for exporting overlays and visualisations, various plots as well as passing off data to other programs.
It’s important to have a play with these options and get to grips with what you can do with TrackMate. Also once you’ve run through a couple of times, you’ll find that other tracking software basically do the same thing in various ways. Talking of which…
Some Honourable mentions
MOSAIC: For the longest time, I used the Mosaic ToolSuite from Ivo F. Sbalzarini’s group at the MPI-CBG, Dresden. This is available from their website as an ImageJ plugin. It’s a great tool but slightly less user-friendly than TrackMate, and there are fewer visualisation options. That said, it won the 2014 tracking ‘competition’ and plays very well with ImageJ scripting. I still keep this one in my back pocket.
IMARIS: It’s not often that I talk about commercial software on this blog but Bitplane Imaris deserves a mention. Imaris is primarily a 3D/4D rendering platform but it really excels at feature segmentation and tracking, especially in 3D. It has a ‘wizard’ interface similar to TrackMate and lots of visualisation options. For big (>500GB) or complex (high dimensionality) data, this is still my go-to. There is obviously a substantial price tag associated but if you have access to the software, make use of it.