Making Movies

Arguably one of the best things about doing research with microscopes is the power of the image for getting your point across. Whether it’s for public outreach, a presentation at a conference or just showing off your work at group meeting, a picture says… well, you know the rest.

But what if you’re looking at a temporal phenomenon? Sure, you could always use a montage, but there’s nothing like a moving picture to wow your audience and get pulses racing. In this post we’ll look at a few different ways of turning your multidimensional data into movies.

Bring your own popcorn!

Film School 101

We’ll start off with the basics of making a movie, which we’ll come back to throughout the post. For this, all we need is a dataset with two spatial dimensions (X & Y) plus time (although you could apply the same idea to a z-stack if you want to show progression through the depth).

When you open your dataset in Fiji, assuming that your time series is recognised as such, you should have a play button on the control bar:

[Image: 2015-09-movies_01]

If you right-click on this button you can control the speed at which the movie is played back. When we export our movie we will need to provide the frame rate (FPS), so change this and hit OK to play the movie until you’re happy with the playback speed.

To make a movie, all you need to do is run [ File > Save As > AVI… ]. Select the compression method and the Frame Rate (which will be loaded from the current playback speed) and hit OK to save (you’ll be asked where to save it). Easy!
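Fiji’s dialog is the easiest route, but the same export can also be scripted outside Fiji. Here’s a minimal sketch, assuming the third-party imageio package (and its ffmpeg backend) is installed; the filename, frame size and frame data are all made up for illustration:

```python
import numpy as np

# Hypothetical 8-bit time series: 10 frames of 64x64 pixels.
frames = [(np.random.rand(64, 64) * 255).astype(np.uint8) for _ in range(10)]

try:
    import imageio
    # fps plays the same role as the Frame Rate in Fiji's AVI dialog.
    imageio.mimsave("movie.avi", frames, fps=7)
except Exception:
    # imageio/ffmpeg not available; fall back to Fiji's built-in export.
    pass
```

The frame rate you pass here is the equivalent of the playback speed you tuned with the play button above.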

For most applications I use JPEG compression which, while technically lossy, makes the movie much smaller and in many cases doesn’t noticeably reduce the quality.

[Image: 2015-09-movies_02]

Just about any operating system should be able to play a JPEG-compressed AVI, and in my experience this gives the fewest problems when embedded into a presentation.

Changing the Channel

The movie will always be created using the current visualisation settings. This includes the currently-selected channel and any adjustments such as brightness and contrast.

We’ve covered the different colour modes in a different post so I’m not going to explain them here. If you want to create a composite (also known as an overlay), pick a multi-channel dataset and run [ Image > Color > Make Composite ]. This will overlay your channels.


Images shown in Colour mode (left and middle) and composite overlay mode (right).

NOTE: In colour mode, the Channels slider (the top one above) is used to change between channels. In composite mode, however, it’s used to select the channel for any subsequent manipulations (brightness adjustments or changing the lookup table, for example). The selected channel is indicated by a 1-pixel coloured border around the image and also by the colour of the info text (green channel selected below):


After you’re happy with the channels (and brightness &c), you can follow the instructions above to turn the dataset into a movie.

How do you solve a problem like z-depth? (or Maria … it works for both)

Just as channels can be overlaid to enable visualisation of more data, so too can z-slices be projected. As we’ve not really covered z-projection before, let’s take a moment to make sure we’re all on the same page.

Projection is about reducing the actual dimensionality while retaining pertinent information. Take the following example of a three-slice z-stack:

[Image: 2015-09-movies_07]

Here slices 1 and 2 have bright pixels while slice 3 has none. For each pixel through the stack, a projection performs some arithmetic and outputs the result to a single image. For example, if we use a maximum intensity projection, this is what we would get:


A summed projection (in this case) would look the same (because there are no overlapping signals), however a mean intensity projection would end up with a lower intensity (remember, the mean is calculated from the pixel stack and thus includes the blank pixels from z=3).
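The projection arithmetic is easy to sketch outside Fiji too. Here’s a minimal NumPy illustration of the three-slice example above (the pixel positions and intensity value of 200 are made up for illustration):

```python
import numpy as np

# Hypothetical 3-slice stack, axes ordered (z, y, x).
stack = np.zeros((3, 4, 4), dtype=np.uint16)
stack[0, 1, 1] = 200   # bright pixel in slice 1
stack[1, 2, 2] = 200   # bright pixel in slice 2 (different position)
# slice 3 left blank

max_proj  = stack.max(axis=0)    # maximum intensity projection
sum_proj  = stack.sum(axis=0)    # summed projection
mean_proj = stack.mean(axis=0)   # mean intensity projection

# With no overlapping signals, max and sum agree at the bright pixels...
assert max_proj[1, 1] == sum_proj[1, 1] == 200
# ...but the mean is diluted by the blank slice (200 / 3, roughly 67)
assert abs(mean_proj[2, 2] - 200 / 3) < 1e-9
```

If the two bright pixels had been at the same (y, x) position, the sum would have doubled while the max stayed at 200, which is exactly why sum projections preserve intensity information for quantification.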


Often, if you want to show features from all slices, sum or maximum projections are your best bet (although this will vary depending on your experiment). Certainly if you’re trying to quantify the images, sum is probably your best choice (so no intensity information is lost).

You’ve probably figured out by this point that once you’ve projected your stack, you can make a movie in the same way as above, and you’d be right (although I was lying about being able to solve a problem like Maria).

Getting the third degree

Any of the projection methods mentioned above might be useful, but they do discard the spatial information from the z-axis. For the truly impressive movies, why not include it all? For this we need to render our z-stack into a 3D volume. Luckily, we already have just the tool for the job. Open a stack with z-depth and run [ Plugins > 3D Viewer ].

[Image: 2015-09-movies_05]

In the dialog, you can usually just accept the defaults. Click OK and wait a few seconds for the data to render. Blobs in space!

[Image: 2015-09-movies_06]

If the dataset has a time component, you can use the play button at the bottom of the window to play the rendered images as a movie.

The [ File > Save As > AVI… ] method doesn’t work from the 3D Viewer window, but fortunately there is also a record button at the bottom of the window which will render each timepoint and provide you with a stack of images. From there, you can save an AVI as before. You can find out more about controlling the 3D Viewer on the plugin website.


