In the Bin

A lot of work we do at the CCI uses scanning confocal microscopes, which have the advantage that the operator can pick the number of pixels in X and Y that will make up the final image.

For camera-based systems this is a less simple endeavour as the array of the CCD chip is fixed. For this reason, we may want to downsample or bin our images. In this post we’ll cover a bit of theory and details on how (and why) to bin your images.

A pixel isn’t a little square (except when it is)

There is a long running argument that “A pixel isn’t a little square” which confuses people no end when they zoom into their images and see this:

What is meant by this argument is that each pixel represents a sampling point, not a square region. Regardless, in a digital image, a pixel is demonstrably a little square (even if it is an incorrect representation of the true spatial sampling of the image).

OK, rant done. Let’s get back to it.

There are two major (and I’m sure many minor) reasons why you may want to bin your image during or after acquisition. Here are the biggies:

You’re taking up too much space!

Binning is a process whereby you can reduce the number of pixels in your image via some mathematical process. For example, a 2x average bin will do the following:

We’re reducing our pixels from four to one and doing so by giving the resultant pixel the mean value of the four original ones (A,B,C,D). On an image, this will look like this:
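That averaging step can be sketched in a few lines of NumPy (an illustrative sketch, not Fiji's actual implementation; it assumes the image dimensions divide evenly by the factor):

```python
import numpy as np

def bin_average(img, factor=2):
    """Average-bin a 2D image by an integer factor.

    Groups the pixels into factor x factor blocks and replaces
    each block with its mean value.
    """
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Four pixels A, B, C, D become one pixel holding their mean.
block = np.array([[10.0, 20.0],
                  [30.0, 40.0]])
print(bin_average(block))  # [[25.]]
```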

Note on the info bar that we go from 512×512 (left) to 256×256 pixels (right). So the linear dimensions reduce by 2× and the total number of pixels by 2² (four times).

But why would you want to do this?

I’ve explained before how the size of an image can be calculated by multiplying through the dimensions (X, Y, Z, channels, positions, timepoints) and then by the bit depth. So for example, a 2 channel, 512×512, 16 bit image will be approximately 1 MB in size (512 × 512 × 2 × 16 ≈ 8.4 million bits ≈ 1 MB).
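That back-of-the-envelope calculation, as code (a rough estimate; real files also carry metadata, so they come out slightly larger):

```python
# Rough raw-data size: X * Y * channels * (bit depth / 8) bytes.
# Multiply by Z, positions and timepoints too, where applicable.
x, y, channels, bit_depth = 512, 512, 2, 16

size_bits = x * y * channels * bit_depth
size_bytes = size_bits // 8

print(size_bits)           # 8388608 bits (~8.4 million)
print(size_bytes / 2**20)  # 1.0 (i.e. about 1 MB)
```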

If you’re massively oversampling, looking at qualitative data or are going on to do something like tracking, often you can bin your data and reduce your file size (in this example, by 4 times!).

As an example, in the feature identification step of tracking, consider the object below in its original pixel count (top) and the same object binned 4×. Looking at the line profiles, there is plenty of data with which to perform a curve fit and do sub-pixel localisation. What you will lose is positional precision.
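The sub-pixel localisation idea can be sketched with a simple three-point parabola fit to the peak of a line profile (the profile values below are made up for illustration, and trackers often fit a Gaussian instead, but the principle is the same):

```python
import numpy as np

# A hypothetical binned line profile across a bright spot.
profile = np.array([2.0, 8.0, 10.0, 8.0, 2.0])

# Fit a parabola through the maximum and its two neighbours;
# the vertex gives a sub-pixel estimate of the peak position.
i = int(np.argmax(profile))
left, centre, right = profile[i - 1], profile[i], profile[i + 1]
offset = 0.5 * (left - right) / (left - 2 * centre + right)

print(i + offset)  # 2.0 -- the profile is symmetric, so the peak sits on pixel 2
```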

Remember, if you’re interested in high resolution or intensity quantitation this is probably not for you.

I need more light over here!

If you’re working with dim samples, on cameras with high pixel numbers you may wish to trade up pixel number for intensity. In this case, we’re summing the intensity of the 4 pixels to make the resultant one. Let’s see how that looks:

Oh no! What happened? Well, our original image was 8 bit, so when we added up the four pixels, any sum over 255 (the 8 bit maximum) was clipped and the image became saturated.

This simple example highlights a problem with any operation on an image that results in non-standard output (you may remember that we came across a similar problem when calculating ratio of channels).

The trick here is to pre-convert your images to an image type that can cope with (much!) bigger values. You can do this by running [ Image > Type > 32 bit ] then performing the summed bin operation again.
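In NumPy terms, a minimal sketch of why the wider type matters (the `bin_sum` helper here is illustrative, not Fiji's code):

```python
import numpy as np

def bin_sum(img, factor=2):
    """Sum-bin a 2D image by an integer factor."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

block = np.array([[200, 180],
                  [150, 100]], dtype=np.uint8)

# Convert to a wider type first (analogous to [ Image > Type > 32 bit ]):
# the block sum is 630, which an 8 bit pixel (maximum 255) cannot hold.
summed = bin_sum(block.astype(np.float32))
print(summed)  # [[630.]]
```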

Now you can see that the image looks the same, but that the x-axis scale of the histogram has expanded to accommodate the sum of multiple 8 bit pixels (note above that the white point is 255 on the left vs 1017 on the right; also note that the right-hand image is displayed at 200%).

To get a truly fair comparison however, we need to set the white and black points to be equal in both images by running [ Image > Adjust > Brightness & Contrast ]. In the following example, the black point is set to zero and the white point to 1020.

Now we can see that the summed pixels are (unsurprisingly) brighter than the original.

That’s a quick intro to the theory, but let’s see how to actually do this in practice.

Binning your data (…in a good way)

A quick aside: We’ll be focussing on Post-Acquisition, but binning can often (and more usefully) be done at the point of Acquisition.

Take the microManager interface for example:

Note that some software doesn’t give you the option of bin method, so you may not get the expected result if you’re trying to increase signal. Always read the manual.

Let’s get back to Post-Acquisition.

To bin in Fiji, there are a couple of ways you can approach the process. The most obvious is by opening your image and running [ Image > Transform > Bin… ]. Select the binning factor (in most of the examples above, we’ve been using 2) and the Bin Method (above we used Average or Sum but other options are available).

Of course, there’s no reason you can’t use a value higher than 2; just remember that your linear pixel dimensions (e.g. 512×512) reduce by this factor, while your total number of pixels reduces by the factor squared.
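To make that scaling concrete (simple arithmetic, using a factor of 4 as an example):

```python
factor = 4
width = height = 512

# Linear dimensions shrink by the factor...
binned_w, binned_h = width // factor, height // factor
print(binned_w, binned_h)  # 128 128

# ...but the total pixel count shrinks by the factor squared.
reduction = (width * height) // (binned_w * binned_h)
print(reduction)  # 16, i.e. factor ** 2
```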


On the Edge

Every example I’ve given so far has dealt with images of nicely binnable dimensions, but what happens if your image doesn’t divide evenly by your bin factor? The short answer (at least in Fiji) is that the image is first cropped, then binned. We can confirm this with a quick test.

If you run [ File > New > Image ] you can create a test image with an odd number of pixels. In this case, I’ve made a 5×5 image using the ‘Ramp’ fill, which (in this case) produces an image with column intensities (left to right) of {0, 51, 102, 153, 204}. If we now perform an average bin with factor 2, this is what we get:

First, we notice that the resulting image is 2×2 pixels. Secondly, the intensities are (left to right) {26, 128}, which are the averages of columns 1 & 2 and columns 3 & 4 respectively from our original image.
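A quick NumPy reproduction of that crop-then-bin behaviour (an illustrative sketch; the crop keeps the top-left region, which matches the column averages we saw above):

```python
import numpy as np

# A 5x5 8 bit ramp, as [ File > New > Image ] with 'Ramp' fill produces:
# every row is {0, 51, 102, 153, 204} from left to right.
ramp = np.tile(np.array([0, 51, 102, 153, 204], dtype=np.uint8), (5, 1))

factor = 2
h, w = (d // factor * factor for d in ramp.shape)  # crop 5x5 down to 4x4
cropped = ramp[:h, :w].astype(np.float32)

binned = cropped.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
print(binned[0])  # [ 25.5 127.5] -- Fiji rounds these to 26 and 128
```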

2 thoughts on “In the Bin”

  1. Dan White

    See the Fiji plugin and its Fiji wiki page for how to bin images without destroying the image data. The trick is to smooth out the high spatial frequencies you should be discarding before you bin (aka downsample).

    For on-camera binning, CCD cameras usually sum the binned pixels, so image intensities get higher and the signal-to-photon-noise ratio increases, at the expense of resolution. You also get lower camera read noise, as the binned pixels are read out in one go with a single chunk of read noise. Not so on sCMOS cameras, usually: binning is done at a later firmware stage and takes the average of the binned values. There is still the same read noise per pixel, but read noise is much lower anyway, so it’s all good! You still get the signal-to-photon-noise improvement of taking the mean value of several pixels.



