In the list of things that everyone who uses a microscope should know, this has to be near the top, and yet I find a surprising number of people are either never taught it or don’t fully grasp the idea of Bit Depth. In this post and Part 2, we will deal with (almost) everything you need to know about bit depth, dynamic range and image histograms.
Why does it matter? My pictures look nice!
It’s all about context. Let’s look at an example image:
Let’s say that the intensity value in the cell at the tip of the arrowhead is 2000. Is that a bright signal? It’s impossible to say without knowing a bit more about our image.
At some point during acquisition, photons (quanta of light) are converted into electrons on a Charge-Coupled Device (for most wide-field systems) or a PMT (for most confocal systems). The electrons (as a voltage) are an analog signal, which is then converted to a digital one (the image) by the imaginatively named “Analog to Digital Converter” (all of this usually happens within the camera/PMT itself).
The A/D converter controls the dynamic range of the resulting image. The dynamic range is the number of levels between absolute black and absolute white. Examples are always helpful and you know what they say about pictures and words so…
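To make the A/D conversion step concrete, here is a minimal sketch of how a normalised analog value might be mapped onto one of a fixed number of digital levels. The function name and the 0.0–1.0 normalisation are illustrative assumptions, not how any particular camera works internally:

```python
def quantize(signal, bit_depth):
    """Map a normalised analog value (0.0-1.0) onto a digital level
    (illustrative sketch of what an A/D converter does)."""
    levels = 2 ** bit_depth
    # Clamp the input, then scale it across the available levels
    clamped = min(max(signal, 0.0), 1.0)
    return round(clamped * (levels - 1))

print(quantize(0.6, 1))    # 1 bit: only levels 0 and 1 exist -> 1
print(quantize(0.6, 8))    # 8 bit: one of 256 levels -> 153
print(quantize(0.6, 12))   # 12 bit: one of 4096 levels -> 2457
```

The same analog signal lands on a much coarser grid at low bit depths, which is exactly the loss of detail visible in the example images below.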
Understanding the importance of Bit Depth
Hopefully what you can appreciate is that with more levels, you get more detail (for example, look at the shadows at the side of the building). The number of levels in an image is typically expressed as a bit depth, which is the number of binary digits (AKA “bits”) that you require to record the intensity information.
Remembering that one binary digit can express one of two values (0 or 1), the images above are examples of 1bit, 3bit and 8bit images. The link between the bit depth and the number of levels can be expressed as:

number of levels = 2^(bit depth)
The common bit depths encountered in imaging are 8, 12 and 16, which give 256, 4096 and 65536 levels respectively.
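You can verify those numbers directly; the relationship is just a power of two:

```python
# Number of grey levels for each common bit depth
for bits in (1, 3, 8, 12, 16):
    print(f"{bits}bit -> {2 ** bits} levels")
```

This prints 2, 8, 256, 4096 and 65536 levels respectively, matching the figures above.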
Back to the image
Looking back at the image we can now make a judgement. If the image is 12bit (remember, that’s 4096 levels) then we’re about halfway up the scale, which is pretty good. If, however, the image is 16bit (65536 levels), then there is not very much signal there at all.
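A quick calculation makes the point. The intensity value of 2000 is taken from the example above; how bright it is depends entirely on the bit depth:

```python
value = 2000  # the intensity at the arrowhead in the example image
for bits in (12, 16):
    top = 2 ** bits - 1  # maximum possible level at this bit depth
    print(f"{bits}bit: {value}/{top} = {value / top:.1%} of full scale")
```

The same pixel value is roughly 49% of full scale in a 12bit image but only about 3% in a 16bit one.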
All of this should become even clearer when we start to look at image histograms in Part 2.
So why use higher bit depths?
For most qualitative applications, 8bit images are fine. If, however, you’re trying to quantify your images, it’s wise to go higher. Remember the shadows? The more grey levels you have, the finer-grained your measurements can be. This is particularly important when looking at small changes in noisy data.
The only real downside of routinely imaging at higher bit depths is that the image takes more information to describe, so the file size will be larger.
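Roughly how much larger? A sketch, assuming a hypothetical 1024×1024 frame and noting that file formats store whole bytes per pixel (so 12bit data usually lives in a 16bit container):

```python
width, height = 1024, 1024  # hypothetical camera frame size
for bits in (8, 12, 16):
    # Round bits up to whole bytes: 12bit data occupies 2 bytes per pixel
    bytes_per_pixel = (bits + 7) // 8
    size_mb = width * height * bytes_per_pixel / 1024 ** 2
    print(f"{bits}bit: {size_mb:.0f} MB per frame")
```

So a 16bit (or 12bit) frame is typically twice the size of the same frame at 8bit, which adds up quickly over a time-lapse or z-stack.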
For more on image bit depths and intensity, check out Part 2.