One of my favourite domains for Image Analysis is timelapse imaging. The combination of X, Y (possibly Z if you want to get cheeky) and time makes for rich analytical possibilities.
Despite starting a new job in which the time domain is largely absent, I’ve been moonlighting in the evenings doing some outreach work helping with a problem that first came up on the image.sc forum.
It’s time to do some tracking, but let’s avoid getting too crabby.
You’ll be surprised to hear (I’m sure), given the terrible crab puns, that today’s post involves tracking crabs. This is part of an ongoing high-school science project looking at how hermit crabs react to novel objects in their environment. My help was requested to assist in tracking the crabs, although as usual, the analysis feeds back into the experimental design.
If you don’t like … what you see here … get the frame out
Apologies to Extreme. The starting material was an AVI, taken presumably from a webcam or something similar. A hermit crab is placed into a plastic container with a bit of water and recorded for 5 minutes. Note the (very smart) inclusion of a piece of 1 cm grid paper under the dish. We’ll come to that later.
There are a couple of useful bits of information, first the video Codec (MS MPEG-4) and second, the frame rate (25 fps). While we’re here, also notice the frame dimensions: 1920×1080.
It’s important to remember that for tracking, it’s a good idea to work with the lowest spatial and temporal resolution that you need to maintain precision. Realistically that means we should downsample the image at least 2x in X and Y and also significantly decrease the frame rate of the movie (both of these will also reduce the file size, increasing processing speed).
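To see why this matters, here’s a quick back-of-envelope calculation (in Python) of the data reduction, using the numbers from this movie: 1920×1080 at 25 fps for 5 minutes, reduced to 960×540 at 2 fps.

```python
# Rough data-volume comparison: original movie vs downsampled frames.
orig_pixels = 1920 * 1080 * 25 * 5 * 60  # total pixels in the original movie
new_pixels = 960 * 540 * 2 * 5 * 60      # total pixels after 2x downsampling at 2 fps

reduction = orig_pixels / new_pixels
print(reduction)  # 50.0 -> 4x from spatial downsampling, 12.5x from the frame rate
```

Fifty times less data to read, process and store, with no real loss of tracking precision for an animal moving this slowly.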
One of the most useful pieces of software I know for dealing with video and audio is FFMPEG. Almost any program (like VLC) that processes video usually has FFMPEG on the backend. It’s incredibly powerful for transcoding video, extracting video or audio and a multitude of other things.
I have the Windows version of FFMPEG installed (which is no mean feat but so worth the effort), so from my command line, I ran the following command in the folder where the movie resides:
ffmpeg -i "movie1.avi" -r 2 -vf scale=iw/2:ih/2 frames\frames_%04d.bmp
Let’s deconstruct this. We’re running the ffmpeg command using movie1.avi as our input (-i "movie1.avi"), outputting at a frame rate of 2 frames per second (-r 2), and using a video filter (-vf) to scale the images to 50% height and width (scale=iw/2:ih/2). Finally we’re outputting the files into a folder called frames using a file mask (frames\frames_%04d.bmp). The output is worth another mention, as in the middle of the output file name we’re using the formatting string %04d. This means: take the string “frames_”, then a number zero-padded to 4 digits, then add the string “.bmp” to the end.
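If you’ve not met printf-style formatting before, this little Python sketch shows exactly what %04d does to the frame numbers:

```python
# %04d pads the number with leading zeros to a width of 4 digits,
# which keeps the exported frames in the correct alphabetical order.
for i in (1, 23, 622):
    print("frames_%04d.bmp" % i)
# frames_0001.bmp
# frames_0023.bmp
# frames_0622.bmp
```

The zero padding matters: without it, frames_10.bmp would sort before frames_2.bmp when you import the sequence.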
What we end up with is something like this (assuming you made the folder beforehand):
FFMPEG can export just about any format (png, jpg &c), however, I chose a bitmap because (being uncompressed) it’s super quick to write out and read into Fiji … which neatly brings us to the next step.
QUICK ASIDE: There are a bunch of freeware programs that will extract frames from a movie. You might even be able to do it in VLC but I was using the tools that I know and love.
If you don’t like … what you see here … get the frames in
That’s the last one, I promise. The reason that we’re going to this trouble is that Fiji (even with Bio-Formats) can’t readily handle these sorts of video formats (at time of writing, the plugin from the FFMPEG update site throws an error when I try to import these files).
As we have the bitmap files, we can just import them as an image sequence with [File > Import > Image Sequence]. Run the command and select one of the files in the frames folder. You’ll see something like this:
You’ll notice that the frame size is now 960×540 and that we have 622 images, down from the original frame size of 1920×1080 and ~7500 frames (25 fps for 5 minutes). You’ll also notice that you can scale the images at this step, which is useful if the software you used earlier does not allow for image scaling.
Let’s take a look at what we have:
The info bar at the top of the image is (as always) a wealth of useful info. From a quick glance we can see how many frames we have, the X&Y dimensions, the fact that the image is not calibrated and the current colour mode (RGB).
I really like TrackMate for this type of problem, but there are a few things we need to take care of before starting it up (NOTE: the following order matters!):
1) TrackMate cannot use data in RGB colour mode as input
TrackMate uses single greyscale images for tracking, so we will need to convert the RGB image into an RGB stack (i.e. three greyscale images, one each for Red, Green and Blue)
Do this by running [Image > Type > RGB Stack]. Note the inclusion of a “Channels” bar underneath the image:
Also worth noting that the image itself looks the same (i.e. not greyscale) because we’re displaying the image in Composite colour mode.
2) The built in blob detectors recognise bright blobs on dark backgrounds
TrackMate is amazingly extensible, so if you have the skill and interest you can write new modules for feature detection or linkage. The built-in feature detectors, however, recognise bright blobs on dark backgrounds (which is mostly what you want in microscopy). Looking at our crab, however, this is not going to work at all.
Thankfully, fixing this problem is as easy as inverting the image with [Edit > Invert]. It looks a bit weird but otherwise works like a charm.
3) The image is not calibrated
Strictly speaking calibration is not necessary, but any quantitative outputs will be in pixels per frame. As we only want the best for our crabs we’ll take the extra step to calibrate the image and get measurements in cm per second.
Space first, so find a frame in your movie where you can see something of which you know the physical size (this is where the grid comes in super handy but you can always use a ruler in shot or the width of a piece of paper).
Pick the line selection tool and draw along an object of known length. In this case, I’m using 15 squares on the grid:
Now run [Analyze > Set Scale]. Leave the “distance in pixels” box alone and fill in the known distance (in this case 15). Add in the unit of length (cm in this case) and hit OK. Your image will now be spatially calibrated. Check the infobar to make sure it looks about right (this is the whole image width and height).
Now for time. Open up [Image > Properties] and set the Frame Interval. This will depend upon the rate at which you exported frames in the very first step. If you had exported the entire video, this would be “0.04 sec” (i.e. 1/25, because the original movie was 25 fps). In this example, however, we exported at 2 fps, which is an interval of “0.5 sec” (1 divided by 2).
While you’re here you may notice that when converting to “RGB Stack” earlier, the dimensions were mixed up and the image currently has 622 z-slices and only one timepoint. You should correct this here by switching these numbers over. TrackMate is smart enough to spot this and correct it for you but at time of writing doing this will mess with your calibration.
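The calibration arithmetic behind both of these steps is simple enough to sketch out. The pixel length of the drawn line below is a made-up example value, not measured from the actual movie:

```python
# Spatial calibration: known physical length divided by measured pixel length.
line_length_px = 300.0   # hypothetical length of the line selection in pixels
known_length_cm = 15.0   # 15 squares of 1 cm grid paper
cm_per_pixel = known_length_cm / line_length_px
print(cm_per_pixel)      # 0.05 cm per pixel

# Temporal calibration: frame interval is the reciprocal of the export rate.
export_fps = 2.0
frame_interval_s = 1.0 / export_fps
print(frame_interval_s)  # 0.5 seconds between frames
```

Fiji does the spatial division for you in [Analyze > Set Scale], but it’s worth knowing what’s happening under the hood.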
Track that crab!
With the preprocessing done, you can proceed with the tracking. Run [Plugins > Tracking > TrackMate]. If you didn’t correct the dimensions, you’ll see a notice asking you if you want to switch them (see note above).
I’ve covered TrackMate before and also have a play-along TrackMate tutorial from my Image Analysis With Fiji workshop slides, so I’m not going to go into a huge amount of detail here and will just try to pick out points of interest.
Calibration and Cropping
The first step in TrackMate is to check the calibration info. For this sort of purpose, it might also be worth restricting tracking to frames in which your target is present. See below: we’re starting tracking at frame 15 (remember that TrackMate indexes frames from zero while Fiji counts from 1), after the crab has been added.
Hit next and select the default spot detector, hit next again to get to the spot detection parameters.
There are three important parameters you will see:
Segment in Channel: Select which of the red (1), green (2) or blue (3) channels you would like to use for spot detection. With a black crab shell, all of the colours will be represented evenly, so any will work.
Estimated Blob Diameter: As you might expect, this is roughly how big your objects are. What it will actually do is set the kernel size for the Gaussian fit. Try to make it just bigger than your object of interest.
Threshold: How good does the fit (see above) need to be for a detection to be included as a spot? If you leave the value at zero and hit Preview, you will see everything that is detected.
The aim is to increase the threshold until the object of interest is detected and as many of the background spots as possible are removed. Note that it may not be possible to remove everything, but we can correct that later.
When picking thresholds, I usually start with 0.1 and, depending upon the image, will either go up (0.5, 1, 5, 10) or down (0.05, 0.001) as needed. It all depends on your kernel size and how noisy the image is. Once you’re happy, hit Next.
Skip the initial thresholding and proceed until you get to “Set Filters on Spots”. We’ll talk about how to make the experiment as easy to analyse as possible but at this point, let’s assume that you have a bunch of extra spots that you don’t want (see above).
Every experiment is different but here are some filters you might want to add to make your linkage easier (remember the aim is to remove as many of the non-object features as possible without removing the object feature):
- Maximum Intensity: If you have very bright background objects detected you can exclude those with a super high intensity
- Mean Intensity: While a single filter can’t act as a bandpass, there’s nothing stopping you adding the same filter twice to make one if your object has consistent intensity throughout.
- Position: You can exclude spots based on position (great to remove things outside the experimental dish), but if you’re filtering here, consider cropping your image during preprocessing so they don’t get detected in the first place.
In this example, I used two filters to construct a bandpass maximum intensity filter to remove most (but not all) of the unwanted features.
Proceed until you get to the choice of trackers and hit next to pick the default (Simple LAP Tracker). I’m not going to go into great detail here about the linkage steps, but the following diagram explains two of the variables:
As a rule of thumb here, make the linking max distance at least as big as the maximum distance that the object moves in one frame. Set the gap closing distance to the same and the max frame gap to 2. This last value allows the object to disappear for up to two frames and still be linked correctly.
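If you’re not sure how far your object moves per frame, you can estimate it from a few rough positions read off the image. The coordinates below are invented for illustration, not taken from the crab movie:

```python
import math

# Hypothetical (x, y) positions of the object in consecutive frames.
positions = [(10.0, 10.0), (13.0, 14.0), (20.0, 14.0), (22.0, 16.0)]

# Distance moved between each pair of consecutive frames.
steps = [math.dist(a, b) for a, b in zip(positions, positions[1:])]
print(max(steps))  # 7.0 -> use at least this as the linking max distance
```

Set the linking max distance a little above that maximum step and the linker should never lose the object between frames.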
Once done, you should have something looking like this:
Here, each track is labelled a different colour along a rainbow spectrum. This is useful as it allows you to see whether your object track is broken into multiple tracks (it’s not in the case above). If this is the case, you may have to go back and adjust your linkage (or even your feature detection) settings. Don’t worry about those background features on the side, we’re about to filter those away.
The trick is to look for a metric that separates your track from the unwanted tracks. A good one to think about first is Displacement.
By adding a displacement filter you can see the histogram of all the displacement values from all the tracks in the movie. It’s worth noting that near the bottom, you will see how many tracks you have (138) and how many the current filters are keeping (1). The aim is to pick a cutoff that separates your one track from the rest, and here it’s pretty easy to see where that is (anywhere in that void would work equally well).
Hit next to apply the filter, and you’re basically done. Click next until you can’t click any further.
Post processing and outputs
Even though we’ve filtered those pesky tracks, the features are still displayed on the image (see the purple circles on the right hand side). To remove features that are not part of a track, pull down the box and select “Trim non-visible data”. Before you hit execute, it will look like this:
And after is below. Note that the unwanted features have been removed.
Nearly there. Most people want two things out of a tracking experiment, a nice movie and some data.
Let’s start with the movie, but first let’s decide how to display our lovely track. What you’re after is the track display options so hit the spanner/wrench button at the bottom of the dialog and you should get the following displayed:
The track display modes are not immediately obvious so below is an example of each. Note that you also have the option to “Limit Frame Depth” which will (in some cases) only draw on a limited part of the track (the examples below use 20 frames).
Entire Track: Will simply overlay the whole track over every frame of the movie:
Local Tracks Backward: Will draw the track behind the object as the movie plays (without the “Limit” option this will display the whole track back to frame 1).
Local Tracks Forward: The opposite. Will draw tracks that have yet to come (again, without “Limit” this will draw the track up until the end of the movie).
Local Tracks: Both of the above, will show before and after (in this case limiting frame depth gives both 20 frames forward and 20 frames back).
One further permutation is the choice of whether to fade the tracks (normal) or to draw the FAST version with no track fading. I actually prefer the latter, as I find it clearer (examples above use fast).
Finally, to actually save the movie, you need to hit the spanner/wrench again to go back to the end of the TrackMate wizard and select the first action called “Capture Overlay”. This will create a “burned in” version of the track respecting the display settings you’ve set earlier.
Before you do this, remember to re-invert the image by running [Edit > Invert] in Fiji (you don’t have to close the TrackMate window to invert the image).
Save the output as a TIFF for later and if you want to play or share it, select the new movie and run [File > Save As > AVI]. JPEG compression plays on most systems so is a fairly good choice (although quality will suffer). Next, figure out what frame rate to use for the movie. The movie below was tracked at 2 fps, so playback at 20 fps is 10x speed.
Show me the data!
The second output, once you have a nice movie, is the tracking data. You can find it on the Display options dialog (where we found the track display options before) by hitting the “Analysis” button (see below right). This will open three tables of data, one each for Track Data, Link Data and Spot Data.
To understand better what you’re getting in each spreadsheet, it’s worth looking at an example with a single simple track. Here the TRACKS output will have one row (there’s only one track), giving information about the whole track (that it contains 4 spots, the total displacement and the average speed). The LINKS output will have three rows and deals with the connections between spots in the track: each row lists the source and target spots, as well as the step displacement for that link and the step velocity (i.e. the distance moved divided by the frame interval). Lastly, the SPOTS output will contain 4 rows (one per spot), with position information for each spot as well as track ID numbers, so you always know to which track the spot is allocated.
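You can reproduce the LINKS arithmetic by hand for a simple 4-spot track like this one. The positions and frame interval below are made-up numbers, not taken from the crab data:

```python
import math

spots = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (3.0, 1.0)]  # (x, y) per frame, in cm
frame_interval = 0.5  # seconds between frames

# One link per consecutive pair of spots: 4 spots -> 3 links.
displacements = [math.dist(a, b) for a, b in zip(spots, spots[1:])]
velocities = [d / frame_interval for d in displacements]

print(displacements)  # step displacement per link: [1.0, 1.0, 2.0]
print(velocities)     # step velocity in cm/s: [2.0, 2.0, 4.0]
```

This is exactly why the LINKS table always has one fewer row than the SPOTS table for a given track.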
So what can I do with the outputs?
Basically… anything! Check out the LINKS output and you can plot the step velocity against time:
In the ongoing project, we’re looking at distance to an object, so by calculating the Euclidean distance between each feature (in the SPOTS output) and an arbitrary point in the image (such as a foreign object: see below) you can plot the proximity over time.
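That distance calculation is a one-liner with NumPy. The coordinates and object position below are invented example values standing in for the SPOTS output:

```python
import numpy as np

xs = np.array([2.0, 3.0, 5.0, 6.0])  # POSITION_X per frame (cm), hypothetical
ys = np.array([1.0, 1.0, 4.0, 8.0])  # POSITION_Y per frame (cm), hypothetical
obj = (6.0, 8.0)                     # position of the novel object in the dish

# Euclidean distance from the crab to the object at each frame.
dist = np.hypot(xs - obj[0], ys - obj[1])
print(dist)  # proximity over time; the last frame reaches the object (0.0)
```

Plot dist against frame number times the frame interval and you have proximity over time in real units.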
Or why not take the position data from the SPOTS output and create a heatmap from the location of the crab at each frame?
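A simple way to build that heatmap is to bin the positions into a grid with a 2D histogram. Again, the positions here are made-up, and the bin count and range would be matched to your dish:

```python
import numpy as np

xs = np.array([1.2, 1.3, 4.8, 4.9, 5.0])  # POSITION_X per frame (cm), hypothetical
ys = np.array([2.1, 2.2, 2.0, 6.5, 6.6])  # POSITION_Y per frame (cm), hypothetical

# Count how many frames the crab spent in each cell of a 4x4 grid
# covering a hypothetical 8x8 cm dish.
heat, xedges, yedges = np.histogram2d(xs, ys, bins=4, range=[[0, 8], [0, 8]])
print(heat)  # occupancy counts; display with matplotlib's imshow for a heatmap
```

Cells where the crab lingers accumulate high counts, so the heatmap directly shows preferred locations in the dish.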
The possibilities are endless as long as you can come up with interesting questions.
A note on acquisition
As is often the case, doing a little iterative design on your acquisition can significantly help in analysis and tracking.
In one of the original movies, a couple of things made the analysis harder. For one, the background in the movie was inconsistent. Even if you crop the movie to the inside of the dish, the dark strip on the right-hand side makes crab detection a pain and means there are more filtering steps involved.
The simple solution to this is to get a bigger piece of paper or a smaller dish to provide a consistent background.
To make the point, I decided to build my own acquisition rig and try to capture some minibeasts from the garden (#MostFunDad).
Below is my rig, built using a cell phone, a cell-phone mount and some construction toys:
This works pretty well for your basic arachnid acquisition:
The point is not to show off my awesome maker skills, but rather that if you’re an educator interested in doing something like this, it’s pretty trivial and cheap to set up an acquisition system and start asking some interesting questions.