Go West! (…ern)

The Western blot is a staple of many research labs. Proteins are separated based on their size, then labelled and identified using antibodies. Instead of using fluorescently labelled antibodies (although this can be done), most WBs use chemiluminescence to detect the amount of protein present. In this post we’ll look at the best way to acquire and analyse the humble Western blot.

A quick thanks to Rosalie Richards (of the Sée Lab) for supplying the lovely western blot used as an example in this post.

Back is black (…but not too black)

I’m not going to go into the details of acquisition (this is PostAcquisition after all), so let’s assume that you have an image of your blot that has both your protein of interest and loading control in the same blot and that the image is greyscale (my example image is 16-bit, so values range from 0 to 65535). It will probably look something like this (the band at the top is very faint).

[Image: 2015-08-Western_01]

There are two checks that you should do before starting to analyse your blots. Firstly, it’s important to make sure that your background is actually dark. You can easily check this by hovering your cursor over the image in a bright area and a dark area.

Fiji will give you a readout of the intensity value in the Status bar. Background should be less than signal!

[Image: 2015-08-Western_02]

Some cameras will invert both the intensity data and the lookup table, so bands have low values and the background has high values (but they still look bright and dark respectively). To correct this, use a combination of [Edit > Invert], which will change the pixel values, and [Image > Lookup Tables > Invert LUT], which will change the way the values are displayed.
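For the curious, here’s roughly what the pixel-value half of that fix does, as a tiny numpy sketch (Fiji’s exact behaviour for 16-bit images can depend on the display range, so treat this as an illustration rather than a re-implementation):

```python
import numpy as np

# A toy 2x2 "image" standing in for a 16-bit blot where the camera has
# inverted the data: bands are low values, background is high.
img = np.array([[200, 60000], [150, 59000]], dtype=np.uint16)

# [Edit > Invert] flips each pixel value; assuming inversion over the full
# 16-bit range, that's simply 65535 minus the value.
inverted = np.iinfo(np.uint16).max - img

print(inverted)  # bands are now high values, background low
```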

Check two is to make sure that you don’t have an oversaturated or underexposed image. A quick way to do this is to use the HiLo lookup table [Image > Lookup Tables > HiLo], which labels pure black values (i.e. zero) in blue and pure white values (i.e. 255, 4095 and 65535 in 8-, 12- and 16-bit respectively) in red.

[Image: 2015-08-Western_03]

To help with understanding this, below is a lookup table for an 8-bit greyscale image. Each square has an intensity value from 0 (black) to 255 (white). On the left is the greyscale LUT, on the right is HiLo:

[Image: 2015-08-Western_04]

NOTE: The lookup table will represent the currently selected range in [Image > Adjust > Brightness & Contrast]. Make sure that this is set to display the full range of your image.

An alternative (which is maybe quicker and simpler) is simply to measure [Analyze > Measure] the whole image with Min and Max selected in [Analyze > Set Measurements]. As long as your Min is greater than pure black (a value of 0) and your Max is less than pure white (255, 4095 and 65535 in 8-, 12- and 16-bit respectively), you’re set.
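If you’d rather script this check outside Fiji, here’s a minimal Python sketch of the same min/max test. The filename and the use of tifffile are placeholders, not part of the original workflow:

```python
import numpy as np
import tifffile  # assumed reader; anything that returns a numpy array will do

# Hypothetical filename - substitute your own exported blot image.
img = tifffile.imread("western_blot.tif")

# Pure black / pure white for the image's bit depth (note: a 12-bit camera
# image saved in a 16-bit file still reports 65535 here; use 4095 manually).
bit_min, bit_max = 0, np.iinfo(img.dtype).max

print("min:", img.min(), "max:", img.max())

if img.min() <= bit_min:
    print("Warning: pure black pixels present - possible underexposure/clipping.")
if img.max() >= bit_max:
    print("Warning: pure white pixels present - possible saturation.")
```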

Making Measurements

We will take one measurement for each band plus a background. Make sure that you’re set to take measurements of Area, Mean Gray Value and Integrated Density ([Analyze > Set Measurements]).

Using the rectangle tool on the Fiji toolbar, draw a region of interest (ROI) to use for background. This should be away from the bands and should avoid any smears or bright spots (unless these are representative of the background). Hit ‘m’ to measure the background.

Draw a region encompassing your first band. It’s important to select all of the signal from the band but none of the signal from adjacent bands! See the examples below:

[Image: 2015-08-Western_05]

Measure the ROI and repeat for your bands and loading controls.

Scrunching Numbers

At this point you can copy your numbers out to your favourite spreadsheet application (like LibreOffice and OpenOffice) to do some basic maths.

[Image: 2015-08-Western_06]

The first step is to calculate the background-subtracted integrated density (BSID) for each band. We do this by calculating the total background for the band (the number of pixels multiplied by the mean background intensity) and subtracting this from the total integrated intensity.

[Image: 2015-08-Western_07]
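If it helps, here is the same calculation written out as a tiny Python function (the numbers in the example are made up, just to show the shape of the sum):

```python
def bsid(integrated_density, area, mean_background):
    """Background-subtracted integrated density for one band ROI.

    integrated_density : the band ROI's integrated density (IntDen)
    area               : the band ROI's area, in pixels
    mean_background    : mean intensity of the separate background ROI
    """
    return integrated_density - area * mean_background

# Made-up example: a 400-pixel band ROI with IntDen 52000 and a mean background of 20.
print(bsid(52000, 400, 20))  # 52000 - 400 * 20 = 44000
```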

ASIDE: Background-Subtracted Integrated Density (BSID)

To help understand this point (because it’s an important one, and is the reason why our ROIs need not be the same size), below is an example. Imagine the following ROI encompassing a band (the light grey bit):

[Image: 2015-08-Western_08]

The ROI has an area of 50 pixels and an integrated density of 1220 (18 pixels with an intensity of 50 and 32 pixels with an intensity of 10). We have separately measured the mean background intensity to be 10.

By calculating the BSID, we’re removing the background component from each pixel (it’s important to remember that even the “signal” pixels will have a noise component). This works out to be 1220 (the integrated intensity) minus the area (50 pixels) multiplied by the mean background (10), which gives an answer of 720. Schematically the ROI now looks like this:

[Image: 2015-08-Western_09]

When considering the BSID, note that it doesn’t matter how big the ROI is, because any extra background pixels will add zero to the integrated intensity. Clear? Right, back to the scrunching.

Normalising

To calculate the relative abundance, the BSID for each band should be divided by the BSID of its loading control band.

Finally, to make the data more readable, these values can be normalised to the first band (or whichever is the appropriate experimental control) by dividing each relative abundance by the first value (which will itself become 1; see the left-most bar in the graph below).
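As a sketch with made-up BSID values for three lanes (band and loading control), the two normalisation steps look like this:

```python
import numpy as np

# Hypothetical BSID values for three lanes.
band_bsid    = np.array([720.0, 1500.0, 400.0])   # protein of interest
loading_bsid = np.array([900.0,  950.0, 880.0])   # loading control

# Step 1: correct for loading differences.
relative_abundance = band_bsid / loading_bsid

# Step 2: normalise to the first (control) lane, which becomes 1.
normalised = relative_abundance / relative_abundance[0]

print(normalised)  # e.g. [1.0, 1.97..., 0.57...]
```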

[Image: 2015-08-Western_10]

Now just repeat your experiment a few times. The nice thing about normalising your data like this is that you’re doing away with absolute intensities and reporting relative values, allowing you to compare results between experiments even if the absolute intensities change.

Epilogue: The wrong way to quantify Western blots

One of the big mistakes in the quantitation of Western blots is to develop blots onto film, scan them, and then try to quantify the scans. This is quite a bad idea, and here’s why:

A digital camera records light in a roughly linear fashion. That is to say, if you double the number of photons incident on the detector, the brightness of a pixel should be roughly twice as much. That means that within an image, if you find an area (like a band) that is twice as bright as another you can say that there should be roughly twice as much protein. This system can be described as having a linear transfer function (or more specifically a gamma of 1).

The problem is that film and scanners both have non-linear transfer functions, meaning that the relationship between bright and dark shades in an image is not linear (at least at the low and high ends). With this in mind, without knowing the gamma function (and correcting for it), the quantitation will be inaccurate.
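A toy example makes the problem concrete. Suppose two bands genuinely differ by a factor of two, but the film-plus-scanner chain behaves (purely for the sake of illustration) like a simple power law with an unknown gamma:

```python
# Two bands whose true light output differs by exactly 2x.
true_signal = [0.2, 0.4]

# Hypothetical non-linear transfer function; real film response curves are
# more complicated (roughly S-shaped in log exposure), but the effect is the same.
gamma = 0.5
recorded = [s ** gamma for s in true_signal]

print(recorded[1] / recorded[0])  # ~1.41, not the true 2.0
```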

The second point is that it is very hard to maintain (or even detect in some cases) saturation in an image that has been developed on film then scanned. Depending on the settings it’s possible to have a digital image that is technically not saturated but represents a saturated film.

TL;DR: quantitation of film? Double trouble. Just say no.
