Sampling, and more specifically sampling frequency, is a really important and much-misunderstood concept in many fields of research. As we’ll see in this analogy-ridden post, it’s important to understand sampling in both time and space.

### The obligatory bus stop analogy

As promised, we’ll start with a simple analogy. Let us assume that you want to find out how often a bus stops at your local bus stop. To find this out, we arrive at our bus stop and see if there’s a bus there (it’s clearly a slow work week).

We start by checking every twenty minutes (at time 0, 20, 40 and 60). Our sampling frequency is therefore three per hour. What luck! Every time we visit the bus stop there’s a bus there.

This is not actually as helpful as you might think. The logical conclusion here (and bear with me on this one) is that there is *always* a bus at the bus stop. What we need to do now is **increase our sampling frequency**. So now we go back to the bus stop every 10 minutes and find that there is *no bus* at 10, 30 and 50 minutes (see below). Now this is more helpful: because it is clear that there is not always a bus at the stop, we can conclude that the buses have a frequency of *at least* three per hour.

Let’s take this to the extreme. Because we really have nothing to do, we go back to the bus stop every 5 minutes and see if a bus is present.

Disappointingly, there are no extra buses at the newly sampled times. Despite the higher sampling frequency, the bus frequency is still three per hour.

If you have followed along so far, then you now know the basic theory of sampling. The first situation is an example of **UNDER-SAMPLING**: we are not sampling often enough to tell the signal (a bus) from the background (no bus).

The second situation is what is known as **NYQUIST SAMPLING**. Here we’re sampling at twice the frequency (six per hour) of our signal (three per hour). This is the absolute minimum sampling frequency that you need to see both bus and no bus.

The third example is that of **OVER-SAMPLING**. We’re collecting more data but it doesn’t improve our accuracy (there are still only three buses per hour).

Importantly, we can only draw conclusions based on the maximum sampling frequency that we use. There could, for example, be buses coming at 7 minutes past the hour, but we wouldn’t see them because we’re not looking every minute.
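The three scenarios above can be sketched in a few lines of Python. This is a toy model, not real bus data: the schedule (three buses per hour, on the hour and every 20 minutes) and the check intervals are the ones from the analogy.

```python
# Toy model of the bus-stop analogy (all numbers are minutes past the hour).
BUS_TIMES = {0, 20, 40}  # three buses per hour

def sample(interval_min):
    """Check the stop every `interval_min` minutes over one hour."""
    return {t: (t in BUS_TIMES) for t in range(0, 60, interval_min)}

for interval in (20, 10, 5):  # under-, Nyquist- and over-sampling
    observations = sample(interval)
    buses_seen = sum(observations.values())
    print(f"every {interval:>2} min: bus on {buses_seen} of {len(observations)} checks")
```

Note how the 20-minute checks see a bus every single time (hence the wrong "always a bus" conclusion), while the 10- and 5-minute checks both see exactly three buses: the extra samples beyond Nyquist add checks, not buses.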

### Sampling in Space

The concept of Nyquist sampling applies to many different situations in 1-dimensional signal processing, but also to spatial applications. Let us consider a widefield microscope. We know that the resolution of the microscope (according to the Abbe diffraction limit) is equal to:

d = λ / (2NA)

Where d is the distance between objects (i.e. the resolution), λ is the emission wavelength and NA is the numerical aperture of our objective.

In our imaginary example, we’re studying something with an emission maximum of 625 nm using a 100x objective lens (which has an NA of 1.45). With this we know that the resolution of our optical system is 216 nm (using the equation above). This means that to satisfy the Nyquist criterion, we need to sample twice within this distance (that is, every 108 nm).
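As a quick check of the arithmetic, here is a minimal sketch using the numbers from the example (625 nm emission, NA 1.45) and the Abbe limit d = λ / (2NA):

```python
# Abbe diffraction limit for the example in the text.
wavelength_nm = 625        # emission maximum
numerical_aperture = 1.45  # NA of the 100x objective

d_nm = wavelength_nm / (2 * numerical_aperture)  # d = lambda / (2 * NA)
nyquist_spacing_nm = d_nm / 2                    # sample twice per resolvable distance

print(f"resolution d = {d_nm:.0f} nm")                   # ~216 nm
print(f"Nyquist spacing = {nyquist_spacing_nm:.0f} nm")  # ~108 nm
```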

To calculate our sampling frequency, we need to look at the camera. The physical pixel size of the camera (in this example) is 16 microns, so we know that the image pixel size is 160nm (this is calculated by dividing the physical pixel size by the total magnification in the system).

That means that we’re under-sampling our image (given that 216/160 < 2 ). That’s not terrible but it’s not great, and it means that two objects at our theoretical resolution limit would look like one object (below left):
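The same check in code, using the camera numbers from the text (16 µm physical pixels, 100x total magnification, 216 nm resolution):

```python
# Image pixel size and sampling factor for the example in the text.
physical_pixel_um = 16  # camera pixel size, microns
magnification = 100     # total magnification

image_pixel_nm = physical_pixel_um * 1000 / magnification  # 160 nm
sampling_factor = 216 / image_pixel_nm                     # resolution / pixel size

print(f"image pixel = {image_pixel_nm:.0f} nm, sampling factor = {sampling_factor:.2f}")
# 1.35 < 2, so the image is under-sampled
```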

We can improve the situation by increasing the magnification. This does nothing to our resolution (note its absence in the Abbe equation) but it does make the effective pixel size smaller (it actually makes the image bigger but it has the same effect – see above, right).

Using our numerical example, by switching in a 1.6x magnifying lens (in addition to our 100x objective), the image pixel size is now 100 nm, which means that we’ve (just barely) met the Nyquist criterion with a sampling factor of 2.16.
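Re-running the arithmetic with the extra 1.6x lens in place shows how the sampling factor tips over the Nyquist threshold:

```python
# Adding a 1.6x magnifying lens to the 100x objective (numbers from the text).
physical_pixel_um = 16
total_magnification = 100 * 1.6  # objective plus extra magnifying lens

image_pixel_nm = physical_pixel_um * 1000 / total_magnification  # 100 nm
sampling_factor = 216 / image_pixel_nm                           # 2.16, just over 2

print(f"image pixel = {image_pixel_nm:.0f} nm, sampling factor = {sampling_factor:.2f}")
```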

To add further complication, according to the Rayleigh criterion, two objects are considered resolved if the peak of one object’s Airy disk (the pattern a point emitter forms on a detector) is no closer than the first minimum of the other object’s Airy disk (see below):

In practice this means that a sampling factor of about 2.3 is more useful than aiming for exactly Nyquist sampling.
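Working backwards from that rule of thumb, we can ask what pixel size and total magnification the example system would need. This is my own arithmetic, not a figure from the text, and it keeps the 216 nm resolution and 16 µm camera pixels from the example:

```python
# Working backwards from a target sampling factor of 2.3.
d_nm = 216              # Abbe resolution from the example
target_factor = 2.3     # rule-of-thumb sampling factor
physical_pixel_um = 16  # camera pixel size, microns

max_pixel_nm = d_nm / target_factor                       # ~94 nm image pixels
min_total_mag = physical_pixel_um * 1000 / max_pixel_nm   # ~170x total magnification

print(f"need image pixels <= {max_pixel_nm:.0f} nm, "
      f"i.e. >= {min_total_mag:.0f}x total magnification")
```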
