Ed Dozier

The History of MTF50 Resolution Measurement


I thought it might be fun to give you a little insight into how some really smart people figured out how to use computers and math to automatically measure lens resolution. Believe it or not, some of the techniques being used date back to the early 1800s!

It all began with a guy called Jean-Baptiste Joseph Fourier, who was born in 1768. Fourier started looking at how you could combine different “sine waves” to approximate virtually any curve with a repeating pattern. So, what’s a sine wave?

A sine wave, using “radians”

The picture above shows the simplest sine wave, which is a “trigonometric function”: a function that smoothly changes value as you travel around a circle, varying between positive one and negative one. You’d call this a “single cycle”, or a wave (it sort of looks like the cross section of a water wave). Imagine the hour hand on a clock (with a length of 1 inch) running backwards, and think of horizontal as zero height (3 o’clock and 9 o’clock). The tip of the hand is at “+1 inch” at 12 o’clock and at “-1 inch” at 6 o’clock. That’s the basic sine wave function. By the way, there’s a closely-related function called “cosine”. The cosine is basically the same, except that the wave is shifted by a quarter cycle (90 degrees) relative to the sine wave, which is called a “phase shift”.

Radians, by the way, are just another way of measuring rotation around a circle. “Pi” radians (3.14159) are the same as 180 degrees. Radians are used more in math and physics, because they’re a more “natural” unit of measure.

A sine wave with twice the ‘frequency’

Now I’m showing you a wave with twice as many oscillations as the first one, or twice the frequency. It varies between the same values (plus one to minus one), but twice as often. More waves within the same distance means a higher “frequency”. Taller waves are said to have a higher “amplitude”, or higher “intensity”.

Add the two sine waves together

Fourier noticed what a weird result you can get when you add together multiple waves (a series of sine waves). He discovered that he could construct a line shaped like almost anything he wanted if he added together enough sine waves, each with a different frequency. This “Fourier series” he invented (and announced in 1807) has since morphed into the “Fourier Transform”, and it’s used in many fields that one way or another relate to waves and frequency analysis. Fourier discovered that functions that rise up and fall down more steeply take higher-frequency sine waves to replicate.
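If you want to see this in action, here’s a minimal Python sketch (my own illustration, not anything from Fourier’s writings) that builds a square wave out of nothing but sine waves. The classic recipe uses only the odd harmonics, each scaled by 4/(pi times k); the more terms you add, the steeper the rise and fall become.

```python
import numpy as np

def square_wave_approx(x, n_terms):
    """Fourier-series square wave: odd harmonics k, amplitudes 4/(pi*k)."""
    total = np.zeros_like(x, dtype=float)
    for k in range(1, 2 * n_terms, 2):  # k = 1, 3, 5, ...
        total += (4 / (np.pi * k)) * np.sin(k * x)
    return total

# The true square wave equals 1.0 across its first half cycle. Watch the
# approximation close in as more (higher-frequency) sine waves are added:
x = np.array([np.pi / 2])  # the middle of the 'flat top'
for n in (1, 5, 50):
    print(f"{n:2d} sine waves: height = {square_wave_approx(x, n)[0]:.3f}")
```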

Think of the Fourier Transform as a technique to break down a function into its component frequencies.

There’s a deep connection between the way nature works and Fourier’s multi-frequency wave addition. White light, for instance, is comprised of a continuous spectrum of different frequencies of electromagnetic radiation. Brighter light doesn’t mean higher frequency; it means that the waves have higher amplitude, or intensity. This also means that you don’t have to worry about how bright the light is when you try to measure resolution.

Some smart guys (Cooley and Tukey) figured out how to write algorithms that implement Fourier transforms in a very fast way, so of course they called them “Fast Fourier Transforms”, or FFTs. These FFTs get used today in lens resolution analysis programs (and in many other places, too). It turns out that Carl Friedrich Gauss actually invented the FFT back in Fourier’s time, but it was lost to history until it was re-discovered in 1965.

Many disciplines in math, science, and even photography discovered how useful Fourier transforms could be. The transform converts the “spatial domain” (positional information) into the frequency domain. A companion discovery, the “inverse Fourier transform”, lets you go the other way, from the frequency domain back into the spatial domain.
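Here’s a quick, hypothetical numpy example of that round trip: build a signal out of two sine waves, let the FFT report which frequencies are present, then run the inverse FFT to recover the original signal.

```python
import numpy as np

# A signal built from two sine waves: 3 cycles and 8 cycles per window.
n = 256
t = np.arange(n) / n
signal = 1.0 * np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)

# Spatial domain -> frequency domain. Dividing by n/2 recovers amplitudes.
spectrum = np.fft.rfft(signal)
amplitudes = np.abs(spectrum) / (n / 2)
for f in np.flatnonzero(amplitudes > 0.1):
    print(f"found {f} cycles/window with amplitude {amplitudes[f]:.2f}")

# Frequency domain -> spatial domain (the inverse Fourier transform).
roundtrip = np.fft.irfft(spectrum)
print("max round-trip error:", np.abs(roundtrip - signal).max())
```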

The old ‘manual’ way to estimate resolution

Take a look at the resolution chart above. This is an example of how resolution was estimated before computers and modern measurement techniques. You would photograph the chart, and then try to figure out where the converging lines would turn to mush, and call that your lens resolution. On the plus side, this works as well for film as it does for digital cameras. On the negative side, you now have to control how far away you are when you photograph the chart, it’s slow and tedious to use, and you only get an idea of lens performance in a couple of places in the field of view.

One thing that’s made of “waves” is light. The job of a camera lens is to gather and re-direct light waves onto a camera sensor. A really good lens can efficiently react to variations in light, such as the edge of a black square against a white background. If you have a lens with ‘perfect’ resolution, then a photo of a black square against white won’t show any ‘gray zone’ between the black edge and the white background. Reality steps in and rears its ugly head, however, and your photo shows a small zone of gray between the white background and the black square.

Plot of light intensity between black square edge and white background

If you were to graph a plot of light intensity as you move from the white background onto a black square, you’d notice that good lenses have a plot that lowers quickly (spanning a small number of sensor pixels), whereas with poor lenses the plot would lower much more gradually.

If you plot the intensity while sweeping back and forth across this edge, the plot would look similar to the sine-wave patterns above, but with a steeper rise and fall than those low-frequency waves have. I mention the ‘back-and-forth’ because, as you’ll recall, the Fourier series only works with repeating patterns (à la waves).

Combined intensity plots with a flip in-between. Becomes a ‘repeating pattern’.

If you were to perform a Fourier analysis on this repeated rise-and-fall pattern of light cycles, you’d discover how many higher-frequency sine waves the series needs to approximate the original pattern. A good lens requires a higher-frequency set of sine waves than a poor lens to model its response; those frequencies are measured in “cycles per pixel”. It generally takes several camera pixels to contain an entire dark-to-light transition cycle of an ‘edge’ photo, so the number of cycles per pixel is a value that’s less than one. Lo and behold, you now have a way to evaluate resolution in “cycles per pixel”, thanks to Fourier.
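Here’s a toy numpy sketch of that idea (my own illustration; the tanh curve is just a convenient stand-in for a real edge profile): mirror an edge profile to make one period of a repeating pattern, take its FFT, and compare how much high-frequency content survives for a sharp edge versus a blurry one.

```python
import numpy as np

def mirrored_spectrum(edge_width_pixels, n=64):
    """Model an edge, mirror it into one period of a repeating wave,
    and return the FFT amplitude spectrum of that period."""
    x = np.arange(n) - n / 2
    # Smooth white-to-black step; a wider transition means a blurrier lens.
    edge = 0.5 * (1 - np.tanh(x / edge_width_pixels))
    period = np.concatenate([edge, edge[::-1]])  # the 'flip' makes it repeat
    return np.abs(np.fft.rfft(period))

sharp = mirrored_spectrum(edge_width_pixels=1.0)
blurry = mirrored_spectrum(edge_width_pixels=4.0)

# The sharp edge needs (and keeps) far more high-frequency sine waves:
print(f"high-frequency content, sharp edge : {sharp[9:30].sum():.2f}")
print(f"high-frequency content, blurry edge: {blurry[9:30].sum():.2f}")
```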

The real magic of using these Fourier Transforms is that you can perform the analysis given only a single edge.

As a side note, if your lens is out of focus, then the light-to-dark transitions are less steep. This would result in a lower resolution measurement. It’s very important to have your lens in sharp focus while testing it, or else you’ll get a wrong resolution measurement. Subject or camera motion can also mess up resolution measurements, but probably more in one direction than the other.

Once you know the “cycles per pixel” resolution and the dimension specifications of your camera sensor, you can easily convert this number into other measurement units, like “line pairs per picture height”.
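For example (with a made-up MTF50 reading of 0.25 cycles per pixel, and the 6000 x 4000 pixel, 3.92 micron-pitch sensor of the D7100 that appears in the plot below):

```python
# Hypothetical MTF50 reading, converted using the D7100's sensor specs.
mtf50_cycles_per_pixel = 0.25      # assumed measurement, for illustration
picture_height_pixels = 4000       # D7100 sensor rows
pixel_pitch_mm = 3.92 / 1000       # 3.92 micron pixels

# One cycle (one full dark-to-light transition) is one 'line pair'.
lp_per_picture_height = mtf50_cycles_per_pixel * picture_height_pixels
lp_per_mm = mtf50_cycles_per_pixel / pixel_pitch_mm

print(f"{lp_per_picture_height:.0f} line pairs per picture height")
print(f"{lp_per_mm:.1f} line pairs per millimeter")
```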

Now, imagine you photograph a series of lines, like a picket fence. A good lens/sensor combination would let you record a full transition from light to dark (a light “modulation”) on the edges of each picket. If the pickets get too close to each other, however, the light-to-dark transition doesn’t get to finish before the sensor sees the neighboring picket. If the transition only gets halfway to “dark” between closely-spaced pickets (50% contrast), we’ll call that the limit of the modulation we’re willing to tolerate. We call this the MTF50, the point where the “modulation transfer function” drops to 50 percent. The MTF50 can have units such as “cycles per pixel” or “line pairs per millimeter”, once the size of each pixel is known.
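Here’s a bare-bones numpy sketch of how a program might pull an MTF50 out of a single edge profile. It follows the common slanted-edge recipe (differentiate the edge profile, FFT the result, find the 50% contrast point); a real program like MTFMapper does considerably more, and the tanh edge here is a synthetic stand-in for real pixel data.

```python
import numpy as np

def mtf50_from_edge(edge_profile):
    """Estimate MTF50 (cycles/pixel) from a dark-to-light edge profile."""
    lsf = np.diff(edge_profile)        # the 'line spread function'
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                      # normalize: 100% contrast at zero frequency
    freqs = np.fft.rfftfreq(lsf.size)  # frequencies in cycles/pixel
    i = np.argmax(mtf < 0.5)           # first frequency bin below 50% contrast
    # Linearly interpolate between the two bins that bracket 50%:
    frac = (0.5 - mtf[i - 1]) / (mtf[i] - mtf[i - 1])
    return freqs[i - 1] + frac * (freqs[i] - freqs[i - 1])

# Synthetic blurred edge standing in for a row of real sensor pixels:
x = np.arange(64) - 32
edge = 0.5 * (1 + np.tanh(x / 2.0))
print(f"MTF50 = {mtf50_from_edge(edge):.3f} cycles per pixel")
```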

What if you want more accurate resolution measurements?

If you photograph a square (with vertical edges) against a white background, the best resolution measurement you can get is limited by the size of the pixels on your digital camera’s sensor. How can we measure with better precision than that? Enter the “Slanted Edge”. It turns out that you can put a slight tilt on those squares and then gather readings from a series of sensor rows that all cross the same edge. If you consider all of those readings from each row, you get a much better idea of the change in brightness across that edge (down to fractions of a pixel). As a matter of fact, the measurement resolution is a function of the sine of the tilt angle. For instance, the sine of 5 degrees is 0.08716, which is roughly 1/12. So if you tilt a square by 5 degrees, you get about 12X better resolution (about 1/12 of a pixel) in measuring the light variation across the edge.
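A quick numpy sketch shows why the tilt works (I use the tangent of the angle for the geometry, which is nearly identical to the sine at small angles): each successive sensor row crosses a 5-degree edge at a slightly different sub-pixel position, so a stack of rows ends up sampling the edge roughly 12 times per pixel.

```python
import numpy as np

angle = np.radians(5)
print(f"sin(5 degrees) = {np.sin(angle):.5f}, about 1/{1 / np.sin(angle):.1f}")

# Horizontal position where a 5-degree slanted edge crosses each row:
rows = np.arange(12)
edge_x = rows * np.tan(angle)  # edge drifts sideways by tan(5 deg) per row
sub_pixel = edge_x % 1.0       # fractional (sub-pixel) part of each crossing
print(np.round(sub_pixel, 2))  # ~12 distinct sample phases within one pixel
```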

That pesky ‘sine’ function is just showing up all over the place.

Slanted edges with “cycles per pixel” measurements

The shot above shows part of a resolution test chart that has resolution measurements drawn over each (slanted) edge in blue. Those measurements were drawn on the picture by the resolution measurement program I used, called MTFMapper, which is explained further at this link. The measurements shown are in units of “cycles per pixel”: how many light-dark transition cycles can be recorded per pixel (a value less than 1). More cycles per pixel means higher resolution.

Notice that the squares (trapezoids) are oriented such that their edges either point toward the center of the lens (sagittal) or are perpendicular to that direction (meridional or tangential). Lenses are typically better at resolving in one direction than the other, so it’s a good idea to measure in both directions. A really good lens would measure the same in either direction.

An MTF contrast plot using a D7100 camera with 3.92 micron pixels.

Nearly all camera lens manufacturers give you lens “MTF” data separated into meridional (tangential) and sagittal readings. This data is typically presented as “percent contrast” at a couple of different line pitches; these are what are known as “MTF contrast plots”. These plots are a bit different (and less informative) than the “MTF50 resolution plots” being discussed here. The plots are usually only shown at the lens’s widest aperture; the contrast gets better as a lens aperture gets stopped down (until diffraction sets in). I have more information on these MTF contrast plots at this link.

An MTF50 resolution plot, line pairs per millimeter units

Computer programs such as Imatest and MTFMapper use this “slanted edge” technology. These programs are far more efficient than the old method of photographing closely-spaced lines to estimate where the lines-per-millimeter turn to mush. You finally get comprehensive resolution information covering your entire camera sensor. The MTFMapper program, by the way, is free.

Conclusion

There’s a lot of technology that goes into modern programs that measure resolution via the “slanted edge” technique. It’s based upon knowledge that has been built up literally over centuries.

If you were to manually attempt to perform a “slanted edge” lens resolution analysis like what has been shown, it would take you ages (if you could do it at all). Modern computers and algorithms, combined with digital cameras, make it a snap.

I think that learning about innovations from scientists, engineers, and mathematicians of the last few hundred years is a humbling experience.
