
Primers on Microscopy for Biologists – Resolution

Posted on 15 July 2020

Formal definitions of resolution refer to imaginary objects such as infinitely small sources of light. I will avoid those and instead try to provide a pragmatic explanation.

Practically, the spatial resolution is the size of the smallest structure that can be distinguished in the light coming from a specimen.

All sensors and components of the microscope have their own resolution. Therefore, the word ‘resolution’ is colloquially used to mean a variety of things and it is important to be aware of the differences. Typically, the optical resolution is the limiting resolution of a microscope because other things such as sensors are chosen such that they do not make the resolution worse.

Optical resolution

This is the resolution of the components in the optical path of the microscope – the objectives, mirrors, laser scanner etc. Of these, the objective is usually the limiting factor.

The resolution (R) of the objective depends on the refractive index of the immersion medium, the numerical aperture (NA) and the wavelength of the light going through it (λ). There is a physical limit to how much resolution one can get in an optical system, called ‘Abbe’s diffraction limit’. We won’t get into the theory here; it is enough to know that it is a limit one cannot cross.

In general:

    Abbe's diffraction limit: R = 0.5*λ/NA
    Realistically, for an oil or water objective: R = 0.61*λ/NA

Let’s take a common example where one is using a 488 nm laser with a 1.4 NA oil objective. The resolution of this setup is:

    (0.61 * 488 / 1.4) nm ≈ 213 nm

This means anything smaller than 213 nm will simply look like a single blob of light of 213 nm. Even if there are ten different 20 nm structures emitting light within that distance, they will appear as a single blob.
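If you would rather let the computer do this arithmetic, here is a minimal Python sketch of the same calculation. The function name is my own; only the 0.61*λ/NA formula comes from the discussion above.

    # A minimal sketch of the calculation above; the function name is mine,
    # only the 0.61*λ/NA formula comes from the text.
    def rayleigh_resolution_nm(wavelength_nm, numerical_aperture):
        """Smallest resolvable distance for an oil/water objective."""
        return 0.61 * wavelength_nm / numerical_aperture

    print(rayleigh_resolution_nm(488, 1.4))  # ~212.6, rounded to 213 nm above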

Some things to note here:

  1. The magnification (63x, 40x) is irrelevant to the resolution of the objective.
  2. This is the theoretical best resolution of the objective. Real life is not perfect, and in practice it is very hard to achieve.
  3. Resolution is given in length units, but it is an inverse relationship: a higher resolution means a smaller number, because a smaller structure can be distinguished.

Sensor resolution

The light coming from the objective falls on a sensor of some kind that measures it. Two broad types of sensor are used:

Single pixel detectors: Photomultiplier tubes (PMTs), avalanche photodiodes (APDs), GaAsP detectors etc. are examples of single pixel detectors. They have no concept of length and width; they only measure how much light falls on them, not where it comes from.

The position of the light source is determined by other means. In a confocal microscope, the laser illuminates only a tiny spot at any given time. The scanner moves the laser spot over the sample, line by line, until all of the sample has been covered. The size of the spot, and how far it is moved each time, define the scanner resolution. If the spot is too large, say 1000 nm, and contains two different 500 nm structures, the laser will illuminate both of them at the same time. It won’t matter that your objective can distinguish 213 nm structures: the light from both structures will fall on the single pixel detector at the same time, and it will record a single structure with twice the brightness. Suddenly, your resolution is much worse than what your objective is capable of.

Notice what has happened – in the case of a single pixel detector, the length and width of the sample are transformed into the time it takes to image the sample. The larger the sample you’re imaging, the longer it takes on a single pixel detector. How much resolution you get depends on how fast the detector can work. Using our earlier objective, if the laser moves 213 nm every second, then our detector must be able to take a recording every second. Fortunately, microscopes are built so that the detectors are as fast as the laser spot can move. The spot is also made as small as possible, so that it is much smaller than the objective resolution.
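To make this time/resolution trade-off concrete, here is a rough Python sketch. The dwell time is a made-up illustrative number, not a value from any particular microscope; notice that halving the step size would quadruple the frame time.

    # A rough sketch of the time/resolution trade-off for a point scanner.
    # The dwell time is an illustrative assumption.
    field_um = 50.0       # width of the scanned region
    step_nm = 213.0       # how far the spot moves between recordings
    dwell_s = 2e-6        # hypothetical recording time per spot position

    pixels_per_line = round(field_um * 1000 / step_nm)  # ~235
    total_pixels = pixels_per_line ** 2                 # the spot visits every position
    print(f"{total_pixels} positions, ~{total_pixels * dwell_s:.2f} s per frame")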


Array detectors: CCD cameras, CMOS cameras, fibre optic arrays etc. are examples of array detectors. These detectors have a sensor composed of many single pixel detectors laid out in a grid. Depending on which of the individual pixel detectors receives the oncoming light, the position of the light source can be determined. The number of single pixel detectors that make up the entire ‘sensor array’ determines what resolution you will have in images from these detectors. So let’s say we have a camera with a sensor of size 50 µm x 50 µm, composed of a 100 x 100 grid of single pixel detectors. This means the size of each single pixel detector is

    50 µm / 100 = 0.5 µm = 500 nm

If the light from the objective were allowed to simply fall on this sensor, the light from any 500 nm region of the specimen would fall on a single pixel of the array. It wouldn’t matter that your objective has a resolution of 213 nm: all structures smaller than 500 nm would land on a single pixel of our sensor array.
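In code, the same check looks like this – a sketch using the example numbers above:

    # The sensor-array arithmetic above, in code. The 50 µm sensor and the
    # 100 x 100 grid are the example values from the text.
    sensor_um = 50.0
    pixels_per_side = 100

    pixel_size_nm = sensor_um * 1000 / pixels_per_side  # 500 nm per pixel
    objective_resolution_nm = 213.0

    if pixel_size_nm > objective_resolution_nm:
        print(f"{pixel_size_nm:.0f} nm pixels, not the objective, limit the resolution")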

Fortunately, once again, microscopes are made to ensure there is a good match between the objective resolution and the size and number of pixels on the sensor array. This is commonly done by limiting the zoom and field-of-view such that it is not possible for the user to choose parameters that make the resolution worse than what the objective can provide.

Notice the advantage of array detectors – because they have more than one pixel, they can detect light simultaneously from multiple parts of the sample, which makes them much faster than PMTs and APDs. As long as the size of each pixel on the array is smaller than the objective resolution, there is no resolution loss here. Microscope manufacturers choose sensors such that this is the case.

Recording (or image) resolution

So far, we have looked at things that affect resolution, but these are decided when an objective or a microscope is built (or bought). The user cannot change any of them while operating the microscope.

There is, however, a final parameter in acquiring an image that the user can change – the resolution at which the information coming from the sensor is recorded. This is usually given in terms of the pixels of the image recorded in the software. Some typical values are 512 x 512, 1024 x 1024 and 2048 x 2048.

There is a chance for optimization here. Let’s say we are using our 1.4 NA oil objective with 488 nm light again, so the resolution is 213 nm, and that we are imaging a 50 µm x 50 µm sample. The sensor has been chosen such that it does not become a bottleneck.

If one records a 2048 x 2048 pixel image of a 50 µm x 50 µm sample, we are recording at a resolution of:

    50 µm / 2048 pixels ≈ 0.024 µm = 24 nm per pixel

This is quite silly – the objective through which the light is coming only has a resolution of 213 nm and cannot distinguish structures smaller than that. By recording 2048 x 2048 pixel images, we are dividing each 213 nm blob across roughly nine 24 nm pixels in each direction. They are still blobs… we are not seeing anything better. We are just unnecessarily using up storage and imaging time. If it is a fluorescence image you are taking, you are also shining a lot more light on the sample and might be bleaching it. This is called oversampling.

If we really wanted to use the best recording resolution, we would note that our objective has a resolution of 213 nm and so, for a 50 µm x 50 µm image, we should record at:

    50 µm / 213 nm ≈ 235, i.e. a 235 x 235 pixel image
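As a small reusable sketch (the function name is my own; it rounds up, since we cannot record a fraction of a pixel):

    # The same division, wrapped in a function.
    import math

    def pixels_for_field(field_um, resolution_nm):
        """Pixels per side so that one pixel matches the objective resolution."""
        return math.ceil(field_um * 1000 / resolution_nm)

    print(pixels_for_field(50, 213))  # 235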

It is also possible to go the other way – to get the best resolution we should record at 235 x 235 pixels, but we could instead set the software to record at 64 x 64 pixels. This is called undersampling. It simply means that the objective is giving us more information about the sample, but we are throwing some of it away by not recording it.

So it becomes clear that what image size to record at depends on the size of the sample you are imaging. There are sometimes good reasons to oversample or undersample an image – but one should be aware of what the effect on resolution is.

Reasons to oversample or undersample

This brings us to another point. Despite having said that oversampling is silly, there is one good reason to oversample by a specific amount if you are concerned about accurate images.

The formal theory behind all this, the Shannon-Nyquist sampling theorem, tells us that in order to faithfully reproduce structures at the maximum resolution allowed by the objective, one must record at twice that resolution.

Practically, this means that if you are recording a 50 µm x 50 µm sample (roughly one biological cell), you need at least 50 µm / (213 nm / 2) ≈ 470 pixels per side, so a recording resolution of 512 x 512 pixels is the natural choice. You can work up from there.
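Extending the earlier sketch with the Nyquist factor of two (rounding up to the next standard image size is my own convenience, not part of the theorem):

    # The earlier sketch with the Nyquist factor of two added.
    import math

    def nyquist_pixels(field_um, resolution_nm, factor=2):
        """Pixels per side when sampling at `factor` times the objective resolution."""
        return math.ceil(field_um * 1000 * factor / resolution_nm)

    print(nyquist_pixels(50, 213))  # 470 -> choose the next standard size, 512 x 512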

There are also a couple of good reasons to undersample. When you undersample, you lose detailed structures in the sample as they become blurred, but it also means that all the light from the sample gets pooled into fewer pixels in your recorded image. This makes the image brighter. Blurrier, but brighter. On cameras this is sometimes called pixel binning. If you are using a single pixel detector like a PMT, it also means the scan finishes faster, since there are fewer pixels to record and the laser can make larger jumps of the spot.
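For array detectors, the pooling can even be done after acquisition. Here is a small NumPy sketch of 2x2 binning on a synthetic image; the Poisson image is a made-up stand-in for a dim fluorescence recording.

    # A sketch of 2x2 pixel binning with NumPy: sum blocks of four pixels into
    # one, giving a smaller but roughly four times brighter image.
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.poisson(5, size=(512, 512)).astype(float)  # stand-in dim image

    binned = image.reshape(256, 2, 256, 2).sum(axis=(1, 3))  # 512x512 -> 256x256
    print(image.mean(), binned.mean())  # each binned pixel is ~4x brighter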

Take-home messages

1. Try to optimize parameters, instead of maximizing them.

Try to oversample by about two times the objective resolution. Any more oversampling than that is pointless: it is wasteful, takes much more time and shines much more light on the sample.
If you don’t much care about the detailed structures, but want faster images or have a dim sample, you can choose to undersample.

2. Change as little as possible between different days.

A stable, predictable microscopy workflow, even if inefficient, is better than a fluctuating, unpredictable one.

3. Do not trust your eyes – they are the least sensitive component of a microscopy experiment.

Your eye and brain were shaped by evolution to solve very different problems. Microscopy today has gone beyond the wavelengths, intensities and even characteristics of light that we can see.

Strike the balance based on what information you want, not on how the picture ‘looks’ to you.


doi: https://doi.org/10.1242/focalplane.2529

Categories: Discussions, How to, Education
