- Ultimate Landscapes and Moon: Nikon Z8 Pixel-shift and AutoStakkert
I thought that Nikon's Z8 pixel-shift shooting would be the golden ticket for sharp landscapes, but I was disappointed. Even with no wind, heat shimmer (an unstable atmosphere) ruins the merged pixel-shifted shots. Nikon's NX Studio is used to merge the pixel-shifted photographs (I usually combine 16 or 32 shots). For indoor work with a good lens, the resulting resolution is absolutely amazing (180 MP). NX Studio is 'dumb', though, when it comes to dealing with any subject movement between shots. I also found out that photographing the Moon doesn't work with pixel-shift shooting, even when the Z8 takes the photos at 9 frames per second. Both the atmosphere and the Earth's rotation spoil the results.

The software I'm going to discuss isn't limited to the Moon or the planets, although that's what it was designed for. It can also help with any distant terrestrial landscape shot, as long as your subject holds still. The key to sharpness is based on statistics. Most of the time, the details of your subject are in the same location, but with a shimmering atmosphere, they sometimes move a bit. If you take several shots of the same subject and look for details that are "usually" present in each of the photos, you can combine those shots into a single, sharper picture. If you look closely enough, you'll find that some shots are sharper than others. The software recognizes this, too, and can automatically select only the "best" shots it finds in a series (a 'stack').

The program I'm going to describe is called "AutoStakkert", version 4.0.1 for 64-bit Windows. I'm using it on Windows 11. It's available on other operating systems, too. This free program can be located here. The program's Dutch author is Emil Kraaikamp. Emil has kept making this program smarter over the years. I wrote an article about this program several years ago, before pixel-shifting was available, which you can look at here.
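The "usually present" statistics can be pictured with a toy median stack. This is only an illustration of the general idea (the pixel values below are made up, and AutoStakkert's actual algorithm is far more sophisticated): a detail that appears in most frames survives, while a one-frame shimmer artifact is rejected.

```python
from statistics import median

# Three noisy "frames" of the same 1-D subject. The true detail values
# are [10, 50, 10, 80, 10]; atmospheric shimmer perturbs a different
# pixel in each frame.
frames = [
    [10, 50, 10, 80, 10],
    [10, 50, 35, 80, 10],   # pixel 2 smeared by shimmer
    [10, 22, 10, 80, 10],   # pixel 1 smeared by shimmer
]

# The per-pixel median keeps the value that is "usually" present and
# rejects the one-frame outliers.
stacked = [median(column) for column in zip(*frames)]
print(stacked)  # [10, 50, 10, 80, 10]
```

Note how the shimmer-damaged values (35 and 22) vanish from the combined result, even though no single input frame was perfect.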
The program hasn't changed very much over the years, so the old tutorial is still mostly valid. This new article is only for the very pickiest of photographers, who really, really want the sharpest landscape or moon shots. The AutoStakkert program isn't for the faint of heart or the lazy. Bear in mind, though, that it can't cure a windy day; if leaves are blowing around, then stacking can't fix that. You can also forget about shots with moving water; it won't work for those, either.

I converted my raw photos into 16-bit TIF files to use the program, but it accepts a variety of image formats. It doesn't accept raw formats, though. There are many, many options available in this program, but I'll describe a couple of recipes that work for me. Keep in mind that the intended users of this program are astronomers, not photographers. I have had the best success when using at least 20 pictures in a stack. Since the Z8 pixel-shift feature can shoot up to 32 frames at a time, this is ideal. I've seen extreme examples where users have processed more than 10,000 shots in a stack (frames from a video) with this program! The more atmospheric shimmer there is, the more shots you'll need to counteract it. The Z8 lets you shoot a series of pixel-shift sequences, so you can easily go beyond the 32-shot limit.

Before I forget to mention it, AutoStakkert can output a 'sharpened' photo, but I don't like the result (totally over-sharpened, with haloes). I use the unsharpened output and post-process it with my favorite photo editor instead.

Finished result, after using AutoStakkert. 500mm PF

The cropped shot above, using the Nikkor 500mm PF f/5.6, looks more like it was taken through a telescope. This is a 13-shot stack, using some of the pixel-shift raw shots from the Z8 after converting them into compressed TIF format. No atmospheric distortion seen here! This shot has received no post-processing.

Same merged 16 shots using NX Studio to make an NEFX file: disaster!
The very slight orbital motion of the Moon ruined the pixel-shift merge, even with the shots taken at 9 frames per second (the pixel-shift 'interval' was set to zero seconds).

Using the AutoStakkert Program

Part 1: the Moon

Run the program "AutoStakkert.exe" as an Administrator (right-click on the file to do this). I believe the program author is from the Netherlands, hence the unusual program name. This program doesn't like raw format, so you'll need to convert your photos into any of a variety of image formats (I use 16-bit TIF with LZW compression).

AutoStakkert: Moon uses the "Planet" option, landscape uses "Surface"

For my moon shots, I don't bother to re-center the moon in the frame to counteract the Earth's rotation. The software takes care of that when you choose the "Planet (COG)" Image Stabilization option. You get the same effect as you would from using a motorized equatorial mount. If you're shooting distant landscapes, where your subject isn't moving, use the "Surface" Image Stabilization option instead. If you don't use a tripod for this, then you might as well stop reading the article at this point.

The screen shot above shows some of the settings for shooting the Moon. I'll show an example later that demonstrates some suggested 'landscape' settings. Click the "1) Open" button, and browse to the folder with your (TIF, JPG, etc.) shots to process. Use the "Control" or "Shift" keys to select the desired photos to process as a stack.

Selecting the pixel-shifted files to stack

Review the photos for alignment

Scroll to center your subject in the window before reviewing the shots. The moon, even at 500mm, isn't very large in the photographs. After clicking the "1) Open" button and selecting the 16-bit TIF photos, I click the "Play" button to see if the automatic rough alignment was successful.
This rough alignment counteracts the rotation of the Earth between shots, assuming you don't bother to realign the moon in your viewfinder. The "Play" button starts a slide show. Image-quality grading numbers get displayed next to the "F#" (frame number) in the upper-left corner of the photo-display dialog. You can also click in the "Frames" progress bar to manually step through the image stack. This lets you easily compare how sharp each shot is relative to the others. Click "Stop" to halt the slide show. If you have selected "Planet (COG)", the stack of photos should already be roughly aligned. If you set your camera's pixel-shift 'interval' to 0 seconds, alignment isn't much of an issue anyway.

Screen shot after photo stack analysis, before clicking "Place AP grid".

Click the "2) Analyse" button next. This performs an initial quality assessment of the selected pictures and decides which are the sharpest photos. It generates a plot of the shot quality as well. The program will place your shots in order of decreasing sharpness. The gray line in the plot is in the same order as the input photo file stack, and the green line is the sorted order of the frames. Click the "Frames" button to switch between sorted or original input frame order, and use the slider to move from frame to frame (or else type in the desired shot number). The "Frames" button turns green when this feature is available. If you place the mouse pointer over the slider area, the tool-tip text will indicate the active sorting order ("The frames are now sorted by quality").

"Frame" slider/input box to view stack images and their quality rating

Note the "F#" below the slider, such as "F#3 [15/16]", which indicates that the 3rd shot (file) in the stack is the fifteenth sharpest of the 16 photos. This example frame is in the "top 93.3%" of the entire stack, and has a quality rating of "Q 3.3%".
You generally want a photo quality rating of 50% or better in your final stack, so frame #3 shouldn't be included in the stacking. There are a zoom slider and horizontal/vertical sliders to magnify and shift the view of the selected photo in the stack.

This is an under-appreciated program feature. You might have hundreds of photos, and it would be a terrible chore to manually figure out which ones are the sharpest. This feature automatically finds and sorts them. You'll get an error ("!#@Anchor") if your shots aren't aligned well enough for analysis. You'd probably get this error if you shot the whole moon but selected "Surface" instead of "Planet (COG)", so that the moon was in a different location in each shot. I presume "!#@Anchor" is some form of Dutch swearing.

Alignment Point setting

If the analysis looks good (the graph should show a nice continuous plot with a gradual decrease in image quality across the sorted shots), you're ready to select the final alignment points. For quality 'planet' input images, select a "small" alignment point size (AP Size) of 24. For lesser-quality images, select a larger number. I have experienced alignment mistakes when using larger alignment point sizes. I'd suggest you use automatic alignment point creation, which will put many points on your image. Lots of points are needed for quality alignment of the shots in the stack. There's a manual placement option (the "Manual Draw" checkbox), although I haven't had good success with it. After analysis, there will be a red rectangle over your displayed photo. If you want to try placing manual alignment points, don't put any points outside of this rectangle, since details outside it aren't present in every shot.

Place the Alignment grid

Click the "Place AP grid" button next. This is the automatic way to get the alignment point grid added to your displayed photo. This is fast, easy, and lazy, which I'm all for.
It will put a grid of points over the entirety of your subject, but avoids the black background (if you're shooting moon shots). There's a "Clear" button under "Alignment Points", if you decide you're unhappy with your detail selections and want to start over. You can try changing the alignment point size, if you wish to experiment with that option.

I have a value of "80" (green box) for the "Frame percentage to stack" in the section labeled "Stack Options". This causes the program to use only the best 80% of the shots in the final processed shot, throwing out the worst (most blurred) shots. Use the "Quality Graph" and "Play" results to help you decide on the percentage of sharp shots you want to retain for the final stacking process. The "Normalize Stack" option enforces a consistent brightness level for each shot, and isn't typically needed unless you have a non-black sky behind your moon. The "Drizzle" option was originally developed for the Hubble telescope. It is intended to take under-sampled data and improve the resolution of the final image. This option doesn't seem to help my shots any, and it will really slow down the stack crunching if you select it.

I selected "TIF" for the output format of the final processed shot (under "Stack Options"), which in this case will be placed into a folder next to your input photos called "AS_P80". This folder name indicates it was created by AutoStakkert, using "80 Percent" of the input shots. I left the "Sharpened" checkbox unselected and the "Save in Folders" checkbox selected. I'm not a fan of the sharpened results from this program, but it can still be a useful evaluation tool, even if it's not good "art". You'll get an extra output file with "_conv" added to its name if you select "Sharpened".

Notice in the screen shot shown above that the program automatically added 1801 alignment points to the photo after clicking "Place AP grid", and added the text "1801 APs".
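The "Frame percentage to stack" selection can be sketched in a few lines. This is toy code with made-up quality scores, not AutoStakkert's internals: rank the frames by their Analyse-step quality rating, then keep only the top percentage.

```python
# Toy illustration of "Frame percentage to stack". Assume each frame
# already has a quality score from the Analyse step; the scores below
# are invented for the example.
quality = {1: 0.91, 2: 0.85, 3: 0.33, 4: 0.88, 5: 0.79}   # frame# -> Q

percentage = 80
n_keep = max(1, round(len(quality) * percentage / 100))

# Rank the frames from sharpest to softest, then keep the top 80%.
ranked = sorted(quality, key=quality.get, reverse=True)
selected = ranked[:n_keep]
print(selected)  # [1, 4, 2, 5] -- the blurry frame 3 is discarded
```

The soft frame (quality 0.33, like the "Q 3.3%" example earlier) never makes it into the final stack.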
When I have used fewer than 300 points, I have noticed occasional alignment errors in the final results. Now, click the "3) Stack" button. And wait. Then, wait some more. You'll get progress messages with little green check marks showing how much time each step took as it completes. Expect several minutes to elapse before the stacking is complete. The finished output files will be in TIF format if you matched my TIF output format selection. A fast computer is really handy here. Unfortunately, this program doesn't take advantage of a GPU to speed things up.

The resulting pictures include an unsharpened image and, if that option was selected, a sharpened image (with "_conv" at the end of the file name). As I mentioned, I don't like how this program does sharpening, so I post-process the unsharpened stacking result in another photo editor instead. The finished result (TIF) file has "_lapl4" and "_ap1801" as part of the file name, because in this example I used the "Laplace" delta, noise robust 4, and created 1801 alignment points. Note in the shot above that you can see green check marks with timing measurements. This section gets filled in as the program progresses. Finished results (TIF files here) go into the "AS_P80" folder, since 80 percent was selected for the "Frame percentage". If you had chosen 70 percent, you'd have an "AS_P70" folder instead. You'll find that the program is smart enough not only to shift your photos for accurate alignment, but also to apply rotation correction! Impressive. Like I said, this guy's an astronomer.

Single (unsharpened) shot example crop. NOT a stacked photo.

The picture above is the best single-shot photo I had to work with, and it has not been post-processed. It is actually missing some subtle details and also has some 'false' details, all due to (minor) atmospheric shimmer. It's pretty good as-is, but can still stand some improvement.
The un-cratered "mare" are particularly noisy and contain some misleading 'false' detail. You'll be doing yourself a favor if you take your photos with the Moon high in the sky, so that you aren't shooting through as much atmosphere.

AutoStakkert final processed shot detail, no sharpening.

The cropped shot above (magnified a few hundred percent) shows the result of using the best 80% of my stack of 16 original shots. It still needs post-processing for any brightness, contrast, or other alterations. If I had shot many more photos for the stack, the quality would improve even more. This crop is from the photograph at the top of this article.

If you compare the details between the "single shot" and the finished AutoStakkert stacked result, you can see several extra details that show up in the stacked picture. Note that the smooth surfaces are starting to show subtle shading, which is missing in any of the single shots. This program really does work. I'm certainly not an expert at using this program, but it's clear to me that stacking photos can absolutely increase the level of detail that moon (and general landscape) shots contain. It's almost like getting a better lens than you really have. You could, if you're inclined to do so, even shoot a movie of your subject (converted to AVI), and AutoStakkert can use that as input, too. But this article is about using the pixel-shift feature.

Part 2: Landscapes

If you photograph a distant subject, especially on a warm day, heat shimmer can be severe. Using the "Surface" option (instead of "Planet"), you can dramatically improve subject detail if you use a tripod and take at least a few dozen shots for stacking.

Distant landscape "Surface", with many alignment points

The screen shot above shows the selected options for processing a stack of distant (10 km, or about 6 miles!) landscape 'Surface' shots.
Unlike moon shots, you must keep your subject framed exactly the same shot-to-shot for "Surface" processing. If you look carefully, you'll notice that the auto-alignment grid shows 58,574 points (!). Notice that I set the "AP Size" to 48 instead of the 24 used with the Moon. After clicking "Place AP grid", it placed the alignment points all over the photo, except in the places that were really out of focus. Just as with moon shots, you can "Play" the stack of frames to evaluate sharpness and alignment. Try to stack only the frames that have a quality rating of 50% or better, and drop any frames that don't align well relative to their neighboring frames.

Mid-stacking progress screen, using 60% of 32 photos

Stacking has finished (16-shot example) with 58,574 alignment points

Stacking has finished (32-shot example) with 60,410 alignment points

My best single RAW shot in the stack, 100% magnification

Plenty of shimmering air turbulence here. The antenna structures are really distorted.

Antenna detail, single RAW (NEF) shot

Pixel-shifted NEFX merged 16 shots, NX Studio

The NX Studio merged picture looks a bit better than the raw shot in this case (many times it's actually worse), but details are fuzzy.

NEFX 'merged' shot detail

AutoStakkert from 16-shot pixel-shift TIF photos, 60% used

All of the details are a bit clearer than in the NEFX results. Using only 60% of 16 shots is about the minimum you should use with this program. More is better.

AutoStakkert 16-shot series detail

AutoStakkert from 32-shot pixel-shift TIF photos, 60% used

AutoStakkert 32-shot series detail

The more shots you use, the better the results from AutoStakkert. You can always shoot a series of pixel-shifted sequences if you want even sharper results. The sharpness differences aren't vast, but you do get better resolution using AutoStakkert, and the sharpness increases with more shots taken.
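The automatic alignment-point placement described above (points everywhere except featureless or out-of-focus areas) can be pictured with a toy sketch. This is a stand-in for what the "Place AP grid" button does, not Emil's actual algorithm, and the tiny "image" is made up: lay candidate points on a grid, and keep only those where the local contrast is high enough to align on.

```python
# Toy sketch of alignment-point placement: keep a grid point only where
# the 3x3 neighborhood has enough contrast to lock onto. Flat areas
# (black sky, badly out-of-focus regions) get no points.
image = [
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
    [10, 10, 90, 85, 10],
    [10, 10, 80, 95, 10],
    [10, 10, 10, 10, 10],
]

def local_contrast(img, r, c):
    """Max minus min in the 3x3 neighborhood around (r, c)."""
    vals = [img[i][j]
            for i in range(max(r - 1, 0), min(r + 2, len(img)))
            for j in range(max(c - 1, 0), min(c + 2, len(img[0])))]
    return max(vals) - min(vals)

threshold = 40  # skip featureless areas
points = [(r, c)
          for r in range(len(image))
          for c in range(len(image[0]))
          if local_contrast(image, r, c) > threshold]
print(len(points), "alignment points placed")
```

On a real 45 MP frame this kind of sweep is what produces the thousands of points (1801 for the moon, 58,574 for the landscape) seen in the screen shots.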
Conclusion

If you've got the time and motivation to get the very best out of your gear, then give this program a try. You might just find AutoStakkert becoming a welcome part of your tool kit. If you'd like to read more explanations of this software, here's a handy link. This program does a superior job of handling pixel-shifted shots compared to Nikon NX Studio, although it's definitely slower and much more difficult to use. Once again, photos and science make a perfect blend for your art. Thank you so much, Emil Kraaikamp!
- Pixel Shift Shooting Analysis of the Nikon Z8
The latest firmware (2.0) for the Nikon Z8 includes the ability to pixel-shift. You can supposedly get resolutions up to about 180 MP from its 45.7 MP sensor. Is this true? It's time to find out.

First of all, the final resolution in a photograph is a combination of the lens resolution and the camera sensor resolution. That means a crappy lens won't get you any more resolution on a high-resolution sensor than on a lower-resolution sensor. A high-resolution lens, however, will show higher resolution in the photographs when you switch to a higher-resolution sensor. I'm going to do some tests using my Nikkor 24-120mm f/4 S lens, which has pretty good resolution. I'm going to perform the tests at f/5.6, which is the peak-performance aperture for my lens.

Shots using pixel-shifting require you to use a tripod, since it takes the camera some time to shoot each individual shot in the pixel-shift sequence. Pixel-shifted shots are generally only useful for static subjects, such as landscapes or product shots. It's my understanding that the 'shift' amount is about a half-pixel, shifting toward each neighboring pixel. This shifting provides data about the neighboring pixel's color. When shooting more shots (8, 16, 32), the camera gathers additional 'noise' data that can get averaged into a better-quality result.

Now for some disappointing news: the Nikon pixel-shift feature doesn't produce a single high-resolution raw photograph. Instead, you must combine the series of photographs made while pixel-shifting using NX Studio (version 1.6.0). Most camera companies do this same sort of thing, forcing you to create the high-resolution shot using an editor. The most disappointing aspect is that NX Studio won't let you create a conventional raw output; it makes an 'NEFX' file, which you can only export as either JPEG or TIFF. You should of course select "16-bit TIFF" for export if you're interested in quality.
At least you can then use this TIFF file in your favorite editor, such as Lightroom, Capture One, or ON1.

Update: The newer Adobe DNG Converter (I'm using 16.1) DOES understand that an NEFX file is in fact a raw file, and can convert it into DNG!

Update 2: As of February 7, 2024, Capture One Pro 16.3.5 announced support for the NEFX file format for both the Nikon Z8 and the Nikon Zf. (I don't have this version to try it out.)

Update 3: Adobe DNG Converter 16.1 creates defective DNG files from NEFX files. Version 16.2, released 2-22-2024, fixes this problem.

I performed a resolution analysis of the pixel-shifted results file to find out just how good these TIFF files are. As you may know, TIFF files have some embedded sharpening applied to them, so you get bogus resolution numbers compared to raw-format photos. I came up with a procedure that lets me quote resolution measurements that are comparable with raw-format photographs, even though they're provided in TIFF format.

How to Use Pixel-Shift Shooting

So that I don't put the cart before the horse, a discussion of how to make the pixel-shifted shot is in order. To make using this feature easier, I started by assigning pixel-shift shooting to my "i" menu. If you don't want to do this, then you have to delve into the 'photo shooting' menu to use this feature instead.

Pixel-shift shooting is assigned to my Z8 "i" menu.

When you use the "i" menu, you can control some of its settings using the rear and then the front control dial.

The 'Pixel shift shooting' menu

Once the settings are configured to your liking, you activate the feature by setting the 'Pixel shift shooting mode'.

How many shots to combine

Select the 'Number of shots' to configure how many photographs will get combined into the final pixel-shifted file. The higher the number of shots you select, the more potential resolution you can get.
It will also, of course, take quite a bit longer to perform the entire pixel-shift operation when you pick a larger number. I tried the 16-shot option, and the resulting NEFX file was nearly 1 gigabyte!

The 'Number of shots' choices are 4, 8, 16, or 32

How long to delay before starting the shooting

How many seconds between each shot: 0 is okay and FAST

I measured 9 frames per second when the interval is set to zero. The screen is blacked out when shooting at this speed.

Select a single pixel-shifted shot sequence or multiple sequences

If you select a 'single photo', then the camera leaves pixel-shift shooting mode as soon as the 'number of shots' for the combined shot is finished.

How to combine the shot sequence

NX Studio version 1.6.0 "Pixel shift merge"

After collecting the pixel-shifted shots, it's time to merge them together using NX Studio. Hopefully there will be other editors in the future that can do this same operation, but merge into a raw format such as DNG. Begin by multi-selecting all of the shots in the pixel-shift sequence (4, 8, 16, or 32 shots). Next, click the "Pixel shift merge" feature as shown above.

Create your "NEFX" merged high-resolution photo

Browse to your photo collection, select the group of raw files representing the whole pixel-shifted photo, and then merge them together into an NEFX file. Note that NX Studio can generally figure out how the shots are grouped, so you can just click the checkbox on the groups and then start the merging. Of course, nobody except Nikon presently knows what an NEFX file is. Maybe Adobe will eventually know, so that it could make a DNG file from it. Update: Yes, Adobe now knows how to convert NEFX into DNG!

Convert your NEFX file into something useful

Once the NEFX file is created, you can export it as either JPEG or TIFF (8- or 16-bit).
It is of course possible to stick with NX Studio for further editing, but most photographers will prefer at this point to make a 16-bit TIFF file to edit in other editors. Update: Now that Adobe can convert the NEFX into DNG, you can bypass any editing with TIFF and simply import the DNG version into other editors, such as Lightroom, Capture One, and ON1.

Analyzing the Pixel-shifted Result

The purpose of this article is to find out just how good the final pixel-shifted file is. I used the MTFMapper program for this analysis. I photographed a large resolution target, using my Z8 with the 24-120mm f/4 S lens. I chose to shoot the target at f/5.6, zoomed to 61mm, for the test.

The pixel-shifted resolution result

Hold your horses. Before you go bragging about how your resolution has nearly doubled from 75 lp/mm to 137 lp/mm, a little reality check is in order. The plot above uses a 16-bit TIFF file (exported from the NEFX file). I always do resolution analysis using an unsharpened raw-format file (either DNG or NEF). Before we really know how good pixel-shifting is, we need to compare apples to apples. I took a raw shot out of the pixel-shifting series and did a resolution analysis on both the NEF raw file and a TIFF version of the same file. By knowing how the resolution numbers change going from NEF to TIFF, I can then know how good the NEFX file really is.

A TIFF file taken from the pixel-shift sequence

The TIFF version of one shot from the pixel-shift sequence has a peak resolution of 108.6 lp/mm.

A raw-format file taken from the pixel-shift sequence

Analyzing the same raw-format photograph (unsharpened) in the series gives a peak resolution of 72.6 lp/mm. This means that converting from NEF format into TIFF format changed the resolution from 72.6 to 108.6 lp/mm.
Since the TIFF-format resolution of the pixel-shifted NEFX file is 137.1 lp/mm, the same percentage change in resolution would mean that the real resolution would be 91.65 lp/mm if it were converted into a raw-format NEF (or DNG) file. Update: I will be re-analyzing my NEFX file results converted into 'DNG' raw format, now that I have installed the latest Adobe DNG Converter. If there are any resolution result changes, I'll add them here...

Another DNG shot, 8256 x 5504 pixels, 77.6 lp/mm peak

Pixel-shifted (16 shots) DNG shot, 16512 x 11008 pixels, 74 lp/mm peak

In the above pair of shots, I used the new Adobe DNG Converter on the NEF and the NEFX shots. The single-shot DNG version has a peak resolution of 77.6 lp/mm at 5504 pixels tall. The DNG version of the pixel-shifted 16-shot merge has a resolution of 74 lp/mm at 11008 pixels tall. So why in the world does the pixel-shifted shot seem to have slightly lower resolution? Because it has twice as many equivalent pixels in both the horizontal and vertical directions! Another way to express resolution is in units of line pairs per picture height (lp/ph), where you multiply the line pairs per millimeter by how many millimeters tall the sensor is. With pixel-shifting, you essentially double the number of millimeters in the sensor, so the Z8 sensor would change from 23.9x35.9 to 47.8x71.8 millimeters! This means the resolution changed from 1855 lp/ph to 3537 lp/ph. Definitely improved resolution! The change in resolution is actually about a 90 percent increase! This is the equivalent of an MTF50 of 148 lp/mm from a non-pixel-shifted sensor with the 23.9x35.9 dimensions.

Update 2: I saw unusual results using the DNG files made from the NEFX file via the Adobe DNG Converter. External editors only saw the middle section of the DNG, but were okay with the exported TIFF file. If this happens to you, then you'll need to stick with the exported TIFF file from the NX Studio application.
Update 3, 2-22-2024: Adobe has just put out version 16.2 of the Adobe DNG Converter. This fixes the problem of other editors seeing only the middle section of the merged pixel-shift DNG file. Now, other editors work correctly with the NEFX-to-DNG merged file!

Real-Life Example

So what does this mean in a real-life example? Check out the following shots (both observed in raw format inside the NX Studio editor). The first (regular raw NEF) shot was zoomed to 400%. The 16-shot NEFX merged shot was zoomed to 200%. You need the zoom difference between views because the pixel-shifted NEFX photo has twice as many pixels in both the vertical and horizontal directions.

Raw-format single shot at 400% zoom

NEFX shot at 200% zoom

This kind of result is golden for photographers doing product shots in a controlled environment. It really is like getting a new (medium-format) camera. The shots above were photographed in essentially 'deep shade', in order to see how the colors were handled. Notice that the pixel-shifted NEFX shot has vastly better color handling in the reddish-colored label details.

Summary

The pixel-shift feature, taking 8 shots and combining them into a single shot, resulted in a 26.2 percent increase in resolution. While this may seem underwhelming, it is in fact quite good. This is only looking at TIFF-format pictures. The EXIF data indicates that both the 4-shot and 8-shot sequences yield 8256x5504 pixels (45.4 MP). The 16- and 32-shot sequences both yield 16512x11008 pixels.

Update: After getting the new Adobe DNG Converter and doing an analysis using all-DNG raw photos, the resolution change using 16 combined shots was about a 90 percent increase! I didn't try the 32-shot pixel shifting yet (the file size would be truly gigantic).

Is pixel-shift shooting worth it? Heck yeah, as long as your subject is completely stable (which includes the air in front of your subject).
The resolution increase that pixel-shifting creates depends upon a few factors, including which lens you test and your choice of 4, 8, 16, or 32 shots being combined. For my 16-shot test, what's 16512 x 11008? It's 181,764,096 pixels, or about 181.8 MP. Beware that shooting landscapes when there is wind, moving water, or 'heat shimmer' will result in the pixel-shifted shot being worse than a single raw shot. Air turbulence plays havoc with the shot-merging software; the merging of the shots into an NEFX will not go well.

The smoothing of color noise may be a bigger factor than the resolution improvements. The Nikon Bayer sensor is also called 'RGGB', referring to the neighboring pixel color sequence. The pixel-shifting operation changes this into something more like Sigma's Foveon sensor, which stacks all of the color information under a single pixel. The 8-shot and 32-shot sequences add more color-noise smoothing, compared to the 4-shot and 16-shot sequences.

Next, look forward to seeing this feature show up on the Nikon Z9, right?
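The resolution bookkeeping in this article can be double-checked with a few lines of arithmetic (all the input numbers are the measurements quoted above):

```python
# Check the resolution arithmetic from the article.

# TIFF export inflates the measured MTF50 versus raw, so scale the
# pixel-shifted TIFF number by the raw/TIFF ratio of a single shot:
single_raw, single_tiff = 72.6, 108.6         # lp/mm, one shot from the series
shifted_tiff = 137.1                          # lp/mm, merged NEFX -> TIFF
estimated_raw = shifted_tiff * single_raw / single_tiff
print(round(estimated_raw, 2))                # 91.65 lp/mm

# DNG comparison in line pairs per picture height (lp/ph):
sensor_height = 23.9                          # mm, Z8 sensor height
single_lp_ph = 77.6 * sensor_height           # single-shot DNG
shifted_lp_ph = 74.0 * (2 * sensor_height)    # pixel-shifted DNG (height doubled)
print(round(single_lp_ph), round(shifted_lp_ph))  # 1855 3537

# Pixel count of the 16-shot merged file:
print(16512 * 11008)                          # 181764096, about 181.8 MP
```

The lp/ph gain works out to roughly 91 percent, matching the "about 90 percent" improvement quoted in the article.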
- Does Your Computer Monitor Need Calibration?
Is your photo editor telling you the truth? Probably not. How can you know for sure?

One of the last things that photographers concern themselves with is having a calibrated computer monitor. You paid good money for that monitor, therefore what you see on the screen is correct, right? If you have gotten your photos printed and discovered that the prints don't look like what your screen shows, you probably need monitor calibration. If you have two different monitor models and pictures look different on each display, then you need monitor calibration (perhaps on both of them).

Room lighting is important. Bright or unusually colored lights will affect the viewing experience on your monitor. Most people have room lights that are too bright for accurate photo viewing and editing.

This article will show you an example of how you can calibrate your computer monitor. You may think that your computer screen is operating just fine, but chances are that it isn't displaying your photos correctly. Can you get your monitor calibrated without using special hardware? Nope. Is calibration hardware expensive? Nope.

I own monitor calibration hardware called the Spyder5 Pro, which comes with the necessary software to control the hardware. Newer versions of this hardware are now available. There are of course other products on the market for this purpose, and probably any of them can accomplish the same goal. I have used my same calibration hardware for several years on multiple computers without any issues. I'm not trying to sell you anything; I'm just going to show you a typical monitor calibration experience.

The Spyder5 Pro software that is included with my hardware will produce an .ICM (Image Color Matching) file, which will get loaded each time you boot up your computer. The proper brightness levels of red, green, and blue will be automatically adjusted using this .ICM file information.
Once the monitor is calibrated, the Spyder hardware used in the calibration process can be disconnected. Computer displays can drift over time, so regular checking and recalibration of monitors is recommended. Computer monitors have different capabilities; my main monitor only has a brightness control. I have other monitors that allow manual control over things such as the color temperature and gamma. Generally, monitor controls over things such as the hue will be overridden by the calibration data contained in the ICM file. Room Lighting The lights in your room can ‘contaminate’ what you see on your computer monitor. It is recommended that you have fairly low room illumination; a light dimmer switch can help. It’s also helpful to close any window curtains to keep room light levels lower. My Spyder hardware has a feature that measures room illumination separately from the screen illumination. The calibration process includes analysis of the room lighting. Measuring room light level The sensor just under the “Spyder5” text shown above gets used for this measurement. The screen sensors are on the bottom of the unit (facing the desktop) in the shot above. You don’t need to close the rear cap under the Spyder5 to take room light measurements unless it’s sitting on a glass surface that transmits light. Preparing for room light measurement Room lighting result Room lighting measurement is conducted before any monitor calibration. You’d be surprised at how low the recommended room illumination levels are. I have worked with people who always keep a hood over their workstation when doing critical photography editing and viewing. If the screen brightness is wrong, then photo prints won’t have the correct ‘lightness’ in them. Monitor Calibration Before calibration Prior to calibrating the monitor, the program reviews what needs to be done.
The monitor should be warmed up to get the display stable; the colors might be a little different from when you first turn your computer on. The room lighting needs to be checked, and you need to know what controls are available on your monitor hardware, such as brightness and the color temperature. Specify your available monitor controls Setting up the calibration process Before performing monitor calibration, the program needs to be told what to do. In the screen above, I have requested that monitor brightness be adjusted (via buttons on the monitor) and room lights will be on. To actually calibrate the monitor, the hardware needs to be physically placed onto the screen. After plugging the device into a USB port, the Spyder hardware is hung down from the top of the monitor and aligned to the target displayed on the screen. Kind of like a spider hanging on a thread of silk. Now you know how the Spyder people came up with their name. Aligning the Spyder hardware on the monitor The Spyder hardware is capable of analyzing both screen brightness and colors. Measuring screen brightness The program will guide you through measuring/adjusting screen brightness, if you requested that feature. If your monitor doesn’t allow brightness adjustment, then you can skip this step. In the shot above, I was able to adjust the monitor brightness to get within 2 cd/m^2 of the goal of 180. The “cd” stands for “candela”; cd/m^2 (candela per square meter) is a measure of luminance, or how bright the screen is. Monitor brightness is out of adjustment After screen brightness adjustment is done, the program will then proceed to automatically measure the screen red, blue, and green colors at many different brightness levels. Calibration complete When the program finishes measuring the different screen colors at various brightness levels, it will let you know it’s done. At this point, the Spyder hardware can be removed from the monitor and unplugged from the USB port.
The calibrated screen view After calibration, you get to see a set of sample photos using the new calibration. This program offers a “Switch” button to toggle between the calibrated and un-calibrated view of the same sample photos to compare them. Calibrated sRGB actual display gamut The monitor’s actual sRGB gamut can be displayed after calibration. The shot above shows that my monitor can display 100% of the sRGB color space. Calibrated AdobeRGB display gamut The monitor’s actual AdobeRGB gamut can also be displayed after calibration. The shot above indicates the calibrated monitor is displaying 98% of the AdobeRGB color space, or "gamut". Summary If you’re serious about the quality of your photography, then don’t ignore your computer monitor. You also can’t ignore the lighting conditions of the room your computer is in. It’s neither expensive nor overly complicated to calibrate your computer monitor. Using calibration hardware and software can take your photography to the next level.
- Nikon Z Cameras Fix Spherical Aberration Focus Shift
Most camera manufacturers designed their mirrorless cameras to focus with their apertures wide open. Nikon doesn’t do this with their Z cameras; they autofocus at the shooting aperture instead. Who’s right? Huge spherical aberration: Nikkor 85mm f/1.4 AF-S All DSLRs autofocus with the lens aperture wide open, because the partially-silvered mirror that feeds their focus sensors passes only a fraction of the light, leaving those sensors with a dim image. Dim light causes slower or failed focus. Wide-open apertures give a camera the best chance for fast or successful autofocus. Why wouldn’t all manufacturers always autofocus this way? Read on. Most high-speed lenses (f/1.4, f/1.2 or faster) suffer from something called spherical aberration. With this type of lens, the focus will shift when the aperture changes. This focus shift ruins shots, particularly at close focus distances. Spherical aberration As shown above, the best focus happens at the location of what’s called the “circle of least confusion”. This circle’s location shifts as the aperture changes, because stopping down blocks light from the outer fringes of the lens. The circle of least confusion typically doesn’t shift much after the aperture is stopped down beyond roughly f/5.6. The Nikon Z (mirrorless) cameras stop the lens down to the shooting aperture to autofocus, up through f/5.6. They don’t stop down the aperture beyond f/5.6 while focusing, in order to retain acceptable focus speed. They of course stop down to the requested aperture when the shot is captured. Because of the way the Nikon Z cameras autofocus, focus is always correct when using lenses that have spherical aberration, no matter which aperture is selected. At apertures beyond f/5.6 (such as f/8) any additional focus shift gets “repaired” by the large depth of focus, which masks the focus-shifting error. It turns out that focus speed is not a problem with a stopped-down lens until the ambient light levels get really dim.
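To see why stopping down masks a residual focus shift, here is a minimal sketch using the classic depth-of-focus rule of thumb (roughly 2 · N · c at the sensor). Both the simplified formula and the 0.03 mm circle-of-confusion value for full frame are my assumptions, not numbers from Nikon:

```python
# Rough illustration of why depth of focus masks focus shift at
# small apertures. Depth of focus at the sensor is approximately
# 2 * N * c, where N is the f-number and c is the acceptable
# circle of confusion (~0.03 mm assumed here for full frame).

C = 0.03  # mm, assumed circle of confusion

def depth_of_focus_mm(n: float, c: float = C) -> float:
    """Approximate focus tolerance at the sensor, in mm."""
    return 2 * n * c

for n in (1.4, 2.8, 5.6, 8, 11):
    print(f"f/{n}: ~{depth_of_focus_mm(n):.2f} mm of focus tolerance")
```

The tolerance grows linearly with the f-number, which is consistent with the article's point that by f/8 any leftover focus shift is swallowed by depth of focus.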
Nikon has determined that this is a good trade-off, especially since most photographers will open up their lens apertures in dim light anyway. I have the Nikkor 85mm f/1.4 AF-S lens, which has pretty severe spherical aberration, and therefore severe focus shift problems. I had nearly abandoned using the 85mm whenever I wanted to shoot at any aperture other than f/1.4. The pictures were always out of focus at other apertures, since I had calibrated focus at f/1.4 on my DSLRs. I actually carried around notes that indicated which calibration values to use for which apertures on which cameras! Very, very irritating. Using my Z cameras, focus is nailed every single time at any aperture. I have never had any complaints about autofocus getting sluggish at any aperture with my Nikon Z8 or Z9, unless the ambient conditions get really dim. Since I use wide apertures in dim light anyway, Nikon’s choice to focus at the shooting aperture is optimal for me. I recognize that there is a theoretical advantage to always focusing with the lens wide open, but for me Nikon’s method is the preferred design choice. A side benefit of always having the lens stopped down to the shooting aperture (again, through f/5.6) is that I always see the actual depth of focus in the viewfinder as well. I can’t see ever using my DSLRs with my fast lenses again. They’re fine for many shooting applications, but this definitely isn’t one of them.
- How to Make Panoramas with Moving Subjects
This article explains how you can create a panorama that actually captures mild action. The thing that keeps photographers from making successful motion-freezing panoramas probably isn’t what you think it is. Panorama with ‘frozen’ water ripples When I first attempted to shoot panoramas that could freeze moving subjects, I knew that it would take a camera that could produce a fairly high frame rate, such as at least 10 frames per second. You need to sweep your camera across the whole scene in less than a second, or else moving objects won’t align from one frame to the next. You also need about a third of each frame to overlap with its neighboring shot, or else your panorama-stitching software will probably fail to combine the photos. I quickly found out that you’re probably going to need something like a 20 fps frame rate to get a decent shot overlap, unless you are using a wide angle lens. I prefer to shoot panoramas in portrait orientation, which requires even higher frame rates than landscape orientation. When you pan the camera at a slower pace to accommodate a slower frame rate, image motion between frames will cause moving subjects, such as water ripples, to no longer line up in the stitched panorama. With a bit of practice, you can learn to quickly sweep the camera across the field of view and get the necessary shot overlaps. If you stick with panoramas of about a dozen shots in portrait orientation, this means that you can go up to roughly a 120mm focal length at 20 fps. I tried using shutter speeds around 1/2000 and 1/3000 to “freeze” the action. Looking up close at my shots, I found out something that was very disappointing. The photos had terrible motion blur. What’s going on? I hadn’t stopped to think that the subject motion while quickly panning the camera is on a whole other level. It turns out that you typically need shutter speeds faster than 1/10,000 of a second to get rid of this motion blur.
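The frame-rate requirement above can be sketched with a little angle-of-view arithmetic. This is a back-of-the-envelope model, assuming a full-frame sensor (24 mm across the frame in portrait orientation), a one-third frame-to-frame overlap, and a 0.6-second sweep; it is not how any stitching software actually computes anything:

```python
import math

# Back-of-the-envelope check of the panorama sweep numbers.
# Assumptions: full-frame sensor, portrait orientation (24 mm wide),
# 1/3 overlap between neighboring frames, 0.6 s total sweep.

def horizontal_fov_deg(focal_mm: float, sensor_mm: float = 24.0) -> float:
    """Horizontal angle of view in portrait orientation."""
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_mm)))

def pano_coverage_deg(focal_mm: float, frames: int, overlap: float) -> float:
    """Total angle covered by a swept panorama."""
    fov = horizontal_fov_deg(focal_mm)
    return fov + (frames - 1) * fov * (1 - overlap)

coverage = pano_coverage_deg(120, 12, 1 / 3)
print(f"12 frames at 120 mm cover ~{coverage:.0f} degrees")
print(f"frame rate for a 0.6 s sweep: {12 / 0.6:.0f} fps")
```

Each frame at 120 mm sees only about 11 degrees across, so a dozen overlapping portrait frames cover roughly 95 degrees, and finishing that sweep in 0.6 seconds indeed demands about 20 fps.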
My Nikon Z9 and Z8 cameras can go up to 1/32,000 second, so no problem. I now standardize on using at least a 1/13,000 shutter speed to reliably eliminate any blur, but it of course depends upon just how fast you pan the camera. Seamless motion capture In the shot above, I was using a 120mm focal length in portrait orientation. I swept my Nikon Z8 in an arc that took 0.6 seconds to complete, using a 20 fps frame rate. I shot in aperture-priority mode, and each of the 12 frames was taken at between 1/13,000 and 1/16,000 second shutter speed. I got decent frame overlaps at this pace, even in portrait orientation, and motion blur was eliminated. This shot is a demonstration of how the water ripples are seamlessly stitched together (using the Capture One editor). Even at pixel-level magnification, there is no motion blur. The Z9 and Z8 cameras can go up to 120 frames per second, but only when shooting jpegs. I’d rather have the quality of raw photos and sacrifice a little speed. For capturing really fast action in a panorama, you would be forced to go this jpeg route, however. Summary If you want to pursue doing this kind of photography, it means that you’re going to need a camera capable of producing both a very fast shutter speed and a high frame rate. I’d recommend plenty of practice shooting, to get the hang of achieving the correct shot overlap while whipping the camera around in a fraction of a second. You can of course use shorter focal lengths (and landscape orientation) to be able to shoot at a lower frame rate, but the shutter speed still needs to be quite fast to avoid motion blur. Photographs of this type simply weren’t possible to create before the introduction of very high performance cameras (or else synchronized multiple camera setups).
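Here is a rough sketch of why those slow shutter speeds smeared so badly. It assumes an idealized, perfectly uniform pan (real hand sweeps aren't uniform, so treat the absolute pixel numbers loosely); the ~95-degree arc, 0.6-second sweep, ~11.4-degree field of view, and 5504-pixel portrait frame width are my illustrative assumptions:

```python
# Rough estimate of pixel smear from panning during one exposure,
# assuming an idealized, perfectly uniform pan. The arc, sweep time,
# field of view, and pixel count are illustrative assumptions.

PAN_DEG_PER_S = 95 / 0.6      # assumed sweep rate, degrees/second
PX_PER_DEG = 5504 / 11.4      # assumed pixel scale across the frame

def pan_blur_px(shutter_s: float) -> float:
    """Approximate smear, in pixels, during one exposure."""
    return PAN_DEG_PER_S * shutter_s * PX_PER_DEG

for s in (1 / 2000, 1 / 13000):
    print(f"1/{round(1 / s)}: ~{pan_blur_px(s):.1f} px of smear")
# 1/13,000 gives 6.5x less smear than 1/2,000
```

The point is the ratio: going from 1/2,000 to 1/13,000 cuts the smear by a factor of 6.5, which matches the author's experience that 'fast-sounding' speeds like 1/2000 still produced terrible motion blur.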
- Remove all Reflections Using Double-Polarized Light
How do you get rid of every single annoying reflection, even from glass and metal? Product photographers are particularly interested in having the ability to completely control all reflections from the objects they need to photograph. The answer is polarized light. I’m not just talking about putting a polarizer on your camera lens; that’s only half of the battle. To totally eliminate reflections, you also need your light source to emit only polarized light. Annoying reflections that obscure your subject Everyone is familiar with the issue of not being able to photograph a shiny subject without having it partly obscured by lighting reflections. Most photographers are aware of using circular polarizing filters over their lenses to minimize reflections. Some subjects seem to defy every effort to remove reflections from them, no matter how carefully you adjust the lighting or the shooting angle. The shot above shows an annoying halo of reflected light from a circular artificial light source (at between 2 and 3 o’clock) without using a polarizer. Circular polarizer The image above shows a typical circular polarizer. Years ago, you could only buy “linear” polarizers, which turned out to mess up the autofocus/exposure meters on DSLR cameras. Manufacturers responded with circular polarizers, which fixed this issue. These filters really help to minimize reflections, such as from pond surfaces or windows. Using a polarizer filter over the camera lens As seen above, a polarizer over the lens was rotated to minimize the reflections, and it really helps. But there is still an unwanted reflection at about 2 o’clock on the outer dial of the watch. There's also a sheen over much of the face of the watch that I'd like to eliminate. Double-polarized light: no more reflections! In the shot above, I placed a polarizer over the light source itself, being careful to stop any light leaks coming from around the edges of the polarizer sheet.
I made sure that there were no other lights on in the room. I then rotated the lens polarizer filter until I observed the removal of any reflections. You can buy inexpensive polarizer film sheets (‘linear’ polarizers will work for this application) to cover larger lights or flash units. Just make sure you don’t have any light leaks, because they can cause reflections. If you want to use multiple lights, you will need to rotate each light’s polarizer individually, so that every light is polarized in the same direction. Summary If you didn’t know this little trick, it could drive you crazy trying to get rid of reflections. Double-polarized light can seem like magic, and it can drastically improve shots of things such as jewelry.
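The physics behind this trick is Malus's law: light passing through an analyzer at angle θ to its polarization is attenuated by cos²θ. Specular reflections largely preserve the source's polarization, so crossing the lens polarizer extinguishes them, while diffusely reflected light is depolarized and still passes. A minimal sketch, assuming an ideal polarizer:

```python
import math

# Malus's law: transmitted intensity through an analyzer at angle
# theta (relative to the light's polarization) is I0 * cos^2(theta).
# Assumes an ideal, lossless polarizer.

def malus(i0: float, theta_deg: float) -> float:
    """Intensity after an ideal analyzer at theta_deg."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

print(malus(1.0, 0))    # parallel: full transmission
print(malus(1.0, 45))   # 45 degrees: half transmission
print(malus(1.0, 90))   # crossed: specular reflection blocked
```

This is why rotating the lens polarizer against the polarized light source makes the reflections vanish rather than merely dim.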
- Toggle Your Nikon Z9, Z8 Shooting Settings with a Button
There’s a trivial-sounding feature that can be accessed only through assigning a custom control called “Recall Shooting Functions”. This feature is available on only a few Nikon ‘pro’ models, beginning with the D5. It is in fact a major and wonderful feature. In this article, we’ll explore just what you can do with this capability. Recall shooting functions feature Many ‘amateur’ Nikon bodies, such as the D7000 series, provide a dial with user settings called “U1” and “U2”. With these settings, you can switch most of the camera shooting configurations by merely rotating the dial. This makes it trivial and fast to switch between things like manual landscape settings and automatic sports shooting. This is an awesome feature that I love. The top-end Nikon pro bodies instead provide the tedious ‘settings’ banks, which are then sub-divided between “photo shooting menu banks” and “extended photo menu banks”. I have always hated this scheme, but have had to live with it. “Recall Shooting Functions” has changed that. Now, you can merely assign a button that you press to toggle between two independent sets of shooting features. If anything, this is even better than having to rotate a dial to use the “U1” and “U2” shooting setups. Your eye doesn’t need to leave the viewfinder to switch between two camera shooting identities, as long as your finger can find the assigned button. A word of caution, though: there are many actions that will ‘cancel’ the recall feature, such as cycling camera power. If this happens, just press the “Recall Shooting Functions” assigned button again to re-activate the settings. First, let me explain how to configure this feature.
Locate the ‘Controls’ menu F2 Custom controls (shooting) menu Pick a button to assign the feature (video record button) Select the “Recall shooting functions (hold)” option Pick the options to save (screen 1) Pick the options to save (screen 2) Pick the options to save (screen 3) Note that there are many functions that you have the option of saving for recall, such as the White balance and AF subject detection options. In my own selections, I decided to not save the White balance (no checked box) and I did decide to save the AF subject detection options (checked box). For convenience, you can just select “Save current settings” to save all of the present camera settings for each menu option at once. Sample setting: AF subject detection options Since I did decide to save the AF subject detection options, I pressed the right-arrow and was presented with the screen shown above to select which option I wanted to save (Auto). The “Video Record” button As shown above, I decided to use the Video Record button for assignment, because it’s easy for my finger to locate it while looking through the viewfinder, and it doesn’t affect video recording, since this Recall feature is only used while shooting stills. Sample shooting setup BEFORE pressing button Shown above is the shooting screen before pressing the assigned Video Record button. This screen shows that I am in “people detection” subject detect mode and 5 fps, for instance. Sample shooting setup after pressing button Note above that after pressing the assigned Video Record button, I got switched into Recall Shooting Functions mode. The subject detect mode is now “Auto” instead of “People”, and single-frame shooting is selected instead of 5 fps. Also note the icon that indicates that Recall Shooting Functions is active. This icon is displayed in both the rear LCD screen and the viewfinder. 
Get used to confirming that the little Recall Shooting Functions icon is displayed, since several camera operations can cancel this mode. This icon gets displayed even when you choose a display mode that doesn’t show any other viewfinder information. Summary You can’t save and toggle all camera shooting settings this way, but at least the most important features can be saved. Try out this feature. I bet you’ll decide that it’s the superior method to swap out shooting functions when you don’t have time for wading through those irritating Shooting Menu banks.
- The Importance of Focus Precision
Sharp photos depend upon sharp focus. You might be very surprised at just how sensitive your lens can be to focus changes. I wanted to show you an experiment that gives very precise numbers on how the resolution changes with errors in focus. Nikon Z8 camera mounted on a linear slide As shown above, I start by mounting my camera onto a linear slide. This slide can be moved with a micrometer in very small steps, so that I can shift my camera’s focus very precisely. I conducted these tests using a 135mm lens at f/2.8 mounted on my Nikon Z8. Note that this isn’t a particularly fast lens, but even at f/2.8 you’ll find that focus precision matters enormously. Using the MTFMapper program created by Frans van den Bergh, I repeatedly photographed a new utility knife blade at different distances and then processed the photos in his software. I focused the lens only once, while the linear slide was near its midpoint (12mm), before starting the test. Of course, I could have used a more conventional focus chart to get the resolution measurements, too. A utility knife blade in silhouette The subject, shown above, is the edge of a very sharp and straight knife blade. The software doing the analysis is capable of analyzing a single edge that you specify. To get the best results, the edge should have high contrast; I used a light to make the blade show up in silhouette. If you look very carefully, you can see the little number 37.6 shown on top of the blade edge, which is where the software made the resolution measurement. Since MTFMapper uses LibRaw to decode raw files, it applies zero sharpening (sharpening would falsely increase resolution measurements). For raw formats that LibRaw doesn’t support (such as the Z8/Z9 high-efficiency raw), I use the Adobe DNGConverter to make DNG raw files; these files also have zero sharpening applied. The downside to this DNG converter program is that it strips out some exif data, such as the focus distance.
Resolution versus focus distance As shown above, I made a plot of the measured resolution of the blade edge photographs at different distances. I had attempted to focus the lens while the camera was placed at a setting of 12mm on the linear slide rail. The measurements show that in fact the sharpest photo was at a position of 15mm, where I got an MTF50 resolution measurement of 37.6 lp/mm. The entire range of the focus testing shown is only about 1 inch (27mm). I had missed focus by only 3 millimeters, while my subject was at a distance of 7 feet (2.13 meters). The MTFMapper program is able to tell the difference in resolution even with a 1 millimeter focus error! I had used “focus peaking” with a magnified view and manual focus to get the best focus I could manage. The camera focus-peaking feedback (set on ‘low sensitivity’) got me to within about 3% of optimal focus. Granted, these resolution differences are finer than what you can probably perceive yourself, unless you need to crop or print big. Also, telephoto lenses are far more sensitive to focus errors. Summary Image sharpness is more sensitive to focus than most people could imagine. This little exercise shows why people who measure lens resolution have to be so careful in controlling focus (and vibrations), or else their measurements are just wrong. In a more general sense, you want those feathers, hairs, and eye lashes/reflections to be totally sharp. The best lens you can buy won’t give you that unless you also nail the focus. A cheap lens that is correctly focused will usually give better results than an expensive lens that is slightly out of focus. I have found that my mirrorless cameras achieve more accurate autofocus than my DSLR cameras, and my lenses don't need focus calibration on mirrorless, either.
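One simple way to turn a focus sweep like this into a best-focus estimate is to fit a parabola through the highest few readings and take its vertex. This is my own sketch of that idea, not part of MTFMapper, and the sample numbers below are invented for illustration (they are not the article's measured data):

```python
# Estimate the best-focus slide position from a focus sweep by
# fitting a parabola through three (position, MTF50) points and
# returning its vertex. Sample numbers are made up for illustration.

def parabola_vertex(points):
    """Vertex x of the parabola through three (x, y) points."""
    (x1, y1), (x2, y2), (x3, y3) = points
    denom = (x1 - x2) * (x1 - x3) * (x2 - x3)
    a = (x3 * (y2 - y1) + x2 * (y1 - y3) + x1 * (y3 - y2)) / denom
    b = (x3**2 * (y1 - y2) + x2**2 * (y3 - y1) + x1**2 * (y2 - y3)) / denom
    return -b / (2 * a)

# hypothetical sweep: slide position (mm) -> measured MTF50 (lp/mm)
samples = [(14.0, 36.1), (15.0, 37.6), (16.0, 36.4)]
print(f"best focus near {parabola_vertex(samples):.2f} mm")
# -> best focus near 15.06 mm
```

Because the peak rarely lands exactly on one of your 1 mm test positions, interpolating like this squeezes a little extra precision out of the same set of shots.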
- Nikon Z9 ‘Bird’ Subject Detection: This is Golden!
I had heard a rumor that the Nikon Z9 “Bird” subject detection was usable for more than just birds. I have decided after my own testing that this is an understatement. You need this mode. Switch to this mode. Make sure you update your firmware to version 4.10 (or newer) to get this new option. For every animal, bird, or insect I tried, this mode was either superior or equal to any other subject detection mode. Except... BIF (bee in flight) not what you expected? Here’s a caveat, though. The Bird subject detection mode is worthless for people (I know, they’re animals too). Use either the ‘Auto’ (generic) or ‘People’ subject detection mode for people. I found it kind of amusing how the bird-mode frequently refused to focus on the eye of a person. Artificial intelligence is funny that way. For myself, I would rather use the “stupid” modes, such as dynamic-area or single-point autofocus for occasional people/landscape shots (assigned to different camera buttons), and just leave the subject detection mode almost permanently on “Bird”. If I were shooting sports (soccer, football, track, etc.) however, then I would of course switch to ‘People’ subject detection to cope with tracking rapidly-moving athletes. For Nikon Z8 owners: too bad. Maybe next year Nikon will get around to this firmware upgrade. ‘i’ menu for quick autofocus-area and subject-detect selection For quicker selection, I have set up my “i” menu to include the “AF-area mode”. This way, I use the back camera scroll wheel to select the AF-area mode, and the front camera scroll wheel to pick auto/people/animal/car/plane/bird. Subject detection is available when you use Wide-area AF(S)/Wide-area AF(L)/3D-tracking/Subject-tracking AF/Auto-area modes. The ‘i’ menu icon will show the latest AF-area mode selection type (3D in this case), but it doesn’t give you any hints about the subject type. 
Quick front/rear camera wheel selection in the ‘i’ menu The menu above shows that I have chosen ‘Bird’ subject detection and 3D-tracking AF-area mode. Setting up autofocus area modes outside of the ‘i’ menu Nikon doesn’t force you to set up the ‘i’ menu, of course. The regular menu-diving technique will also work perfectly fine to set up mode/subject combinations, after you navigate to the “AF-area mode” and the “AF subject detection options” in the “Photo Shooting Menu”. The little icon above shows that the present AF-area mode is ‘single-point’. Picking subject detection mode outside of the ‘i’ menu Available subject selections now include birds Note that “Auto” above doesn’t mean “automobile”, but “generic” instead. My own most-used AF-area mode: 3D-tracking I definitely use the 3D-tracking mode the most, so I assigned that to my AF-ON button. Thankfully, bird-detection is allowed in this focus mode. I also like to use the custom wide-area AF, and bird-detect works there, too (assigned to another button). Summary Give this new ‘bird’ mode a try. If you photograph any kind of non-human animal, I bet you’ll like it. Artificial intelligence is quite fickle, though, so I’m sure there are animals that will fool this detection mode. Choice is a great thing, and multiple button assignments are, too. I'll bet that each new firmware revision will alter the subject detection capabilities, because Nikon is training its AI with more and more subject samples.
- Panorama Prowess: Lightroom vs ON1 vs Capture One
Do all of the photo editors create panoramas that are roughly equal? This article explores how well some popular editors make panoramas, or at least how they try to make them. A sample panorama made from 5 vertical shots I did a little comparison between Lightroom, Capture One 2023, and ON1 2023. I wanted to figure out if I have a preferred editor for making panoramas, among my most-used photo editors. For starters, I gave each editor the same set of 5 photographs that have plenty of overlap between them, so it shouldn’t be too challenging to stitch them together. ON1 Photo RAW 2023 First up to bat is the ON1 editor. Pick the shots to combine from the ‘Browse’ tab To make panoramas in ON1, just click the “Create Panorama…” after selecting the shots in the “Browse” tab. Create Panorama dialog with “Auto” By default, ON1 will offer the “Auto” option to automatically select how to create the panorama. Unfortunately, this selection is a big failure; the last shot in the set of 5 shots was omitted. ON1 “Collage” option Selecting the “Collage” option, the results are even worse! This time, it skipped the last shot and couldn’t even align the left side properly. ON1 “Spherical” panorama success? ON1: A glitch in the stitch At first glance, the ON1 “Spherical” mode seemed to do the trick. Upon closer inspection, I found a mistake in the stitching that I indicate above. I’m out of options with ON1 panorama stitching, so it has failed. Three strikes. Capture One 23 Next up is Capture One 23. Capture One 23: Combine the shots in the “Library” tab As shown above, select the photos in the “Library” tab, then select “Image | Stitch to Panorama…” Capture One “Cylindrical” option Capture One “Spherical” option Capture One “Perspective” option Capture One “Panini” option Capture One cropped and light-adjusted panorama All of the Capture One options succeeded, but I need to mention that this program is slow in stitching the finished panorama, unless you have a pretty fast computer. 
In the shot above, I did a little editing to touch up the picture to taste after generating the panorama. You might notice that it’s actually a double rainbow. Lightroom Finally, let’s see what Lightroom can do. Lightroom: Photo merge panorama from the “Library” tab Lightroom “Cylindrical” option Lightroom “Spherical” option Lightroom “Perspective” option Lightroom cropped and light-adjusted panorama Similar to Capture One, Lightroom made no mistakes in any of the projection options for the panoramas. I didn't try to exactly match the light in my Capture One version of the panorama; this version is very close to what my eyes saw. Multi-row panoramas Since ON1 is out of the running, I decided to see if Lightroom and Capture One could handle multiple-row panoramas. Both programs failed when I tried the “perspective” projection method, but both programs succeeded when trying either “spherical” or “cylindrical” projection. It’s easy to have several shots lost in the final stitch, if your goal is to end up with a rectangular photo. You have to be careful to go well beyond what you think might be okay for the stitched area. I’d recommend using a tripod for any multi-row panorama efforts. It’s too difficult to control the shot overlaps in both the horizontal and vertical directions when hand-holding the camera. Against my own recommendations, I hand-held all of the panorama shots in this article... Lightroom multi-row, using “spherical” projection Capture One, “spherical” projection Note how the un-cropped result has the tree tops well inside the stitched panorama, so you think all is well… Capture One, “spherical” projection cropped to a rectangle Dang it, the tree tops got lost after all. Should’ve brought a tripod along. There's a school of thought that you should just leave your panoramas un-cropped and get away from rectangular format; I just can't go there yet. Summary I noticed that Lightroom created the panos a bit faster than did Capture One. 
ON1 was the fastest editor of the three I tried, but it doesn’t count when the panoramas have defects. Capture One had the most projection options; it kind of depends upon the subject matter which projection method looks best for a shot. I can’t say that either Lightroom or Capture One wins; they are both very competent at making panoramas. For multi-row panos, I have historically had slightly better success using Lightroom. It’s a good thing that I didn’t buy ON1 for its panorama capabilities (I got it mainly for the sky-swapping feature). ON1 2023 struck out for this particular task.
- Lens Resolution: Are My Measurement Results Bogus?
I have read claims on the internet that printed test charts are nearly worthless for use in measuring lens resolution. I have also read that a sharp razor blade or utility knife blade can be used to get really accurate resolution measurements. Which claim is true? Both? Neither? I use the MTFMapper program to analyze lens resolution. NASA has used the MTFMapper program to analyze lenses that they sent to Mars onboard their Rover Perseverance. I don’t think that this program is providing bad results. The software can be obtained from here. You should be very skeptical of internet sites that don’t tell you how they arrived at their resolution numbers. A sample resolution test chart MTFMapper provides printable files, which I used to make my test chart targets. This same software provides a way to use things such as back-lit razor blades or utility knives for resolution targets. The target edges being measured are all on a slant; the measurement mathematics doesn’t like edges that are vertical, 45 degrees, or horizontal. I believe that MTFMapper uses LibRaw to decode raw files, which uses zero sharpening (which would increase resolution measurements). For raw formats that LibRaw doesn’t support, I use the Adobe DNGConverter to make DNG raw files; these files also have zero sharpening applied. The downside to the DNG converter program is that it strips out some exif data, such as focus distance. For sites that use other camera photo file formats, particularly jpeg, resolution measurement results are worthless. Just about all of these formats add some level of sharpening. Depending upon the amount of sharpening, you can make the resolution measurements as high as you wish. I use a printed test chart that measures 40 inches by 56 inches. The chart is printed at 1200 dpi on heavy-weight paper with a fairly glossy finish. 
The chart is dry-mounted and placed into a frame to keep it perfectly flat, and I temporarily attach a mirror to its center with magnets to align the chart perfectly parallel to the camera sensor. The chart is clamped into position to eliminate any movement, and the camera is on a heavy tripod. I use either a wired shutter release or the self-timer, with either live view or a mirrorless camera, to eliminate vibrations. Chart lighting needs to be even, and it’s best to keep illumination at or above EV 10. Surprisingly, the ISO value has little effect upon resolution measurements, but it’s still best to keep the ISO low. A sample setup to use a quality blade edge for a resolution target I conducted some tests to compare resolution measurements using my chart and a pristine utility knife blade. I can’t prove resolution results in an absolute sense, but I can at least compare results from two entirely different test methods. When I first started doing resolution testing, I tried using small (11” X 17”) printed charts, with both inkjet and laser printers. I also tried matte/glossy/satin surfaces and single/double-weight papers. I determined that laser prints weren’t quite as good as inkjet, and that satin-like surfaces worked best. Small charts are poor for testing lenses at realistic shooting distances (you want to fill the frame with the chart if possible). I’m forced to use laser prints for infrared testing; my inkjet ink is invisible in infrared! I always struggled to accurately align the test chart to the camera until I began attaching a mirror to the chart surface (using powerful magnets). When you look through the viewfinder and see the reflection of the lens center sitting in the middle of the frame, you’re perfectly aligned. Rotation is easy; just align the chart edge to the viewfinder edge. MTFMapper will change the color of the resolution measurements to yellow for edges that end up at a poor angle. 
Camera (Nikon Z8) mounted on an accurate sliding linear rail Accurate focus is an absolute requirement for the best resolution measurements. Using contrast-detect focus, or a mirrorless camera with autofocus, it’s possible to get reasonably good focus. I can get slightly better focus using low-sensitivity focus-peaking and image magnification. It’s necessary to take several shots, with focus set both in front of and behind the target, re-focusing each time. You need to pick the sharpest result from the many test shots, and make sure focusing is done at the shooting aperture. The best way to get optimal focus is to use a linear rail: move the camera in very small (1 mm or so) increments, starting in front of the correct focus plane and taking shots until you’re behind it. Again, pick the sharpest result (highest measured resolution). Select where to take the measurement The photo above shows how to pick where to take a resolution measurement along the knife blade. Choose a location and orientation to match a similar edge in the resolution test chart. Selecting a different section of the blade will probably give slightly different measurements, because MTFMapper is supremely sensitive to edges. The illumination behind the blade only needs to be even in the selected region of interest being measured. You want to do this in a dark place, to maximize the silhouette contrast. Note that every position and orientation in the camera’s field of view will likely give a different resolution reading. Life and physics aren’t as simple as what is portrayed at most photography websites. A single resolution number is nearly meaningless (as is a single center/edge/corner number). Also keep in mind that different camera sensor resolutions will give different answers as well, because the resolution measurement is actually a combination of the lens and the camera sensor. 
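The "pick the sharpest result" step of the rail method can be automated. A minimal sketch, with my own hypothetical frame names and a simple gradient-energy score standing in for the MTF50 number the author actually uses:

```python
# Sketch of selecting the best frame from a focus-rail series.
# The frame names and the gradient-energy score are illustrative;
# in practice you would rank frames by their measured MTF50 instead.

def sharpness(image):
    """Gradient energy: sum of squared horizontal pixel differences.
    Higher means steeper edges, i.e. better focus."""
    return sum(
        (row[x + 1] - row[x]) ** 2
        for row in image
        for x in range(len(row) - 1)
    )

# Three tiny synthetic "frames" of the same edge at different rail positions:
soft  = [[0.0, 0.2, 0.5, 0.8, 1.0]]  # well in front of focus
close = [[0.0, 0.1, 0.5, 0.9, 1.0]]  # nearly in focus
crisp = [[0.0, 0.0, 1.0, 1.0, 1.0]]  # in focus

frames = {"rail_00mm": soft, "rail_01mm": close, "rail_02mm": crisp}
best = max(frames, key=lambda name: sharpness(frames[name]))
print(best)  # the frame with the steepest edge wins
```

The same ranking idea is what AutoStakkert-style software uses when it automatically keeps only the best frames of a stack.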
Blade Placement Blade placed near to chart target edge in viewfinder As shown above, I placed the blade edge in a location similar to a chart measurement that I was interested in comparing. I tried to select the section of the blade edge in MTFMapper that would roughly match the length of the chart target edge (about half of the blade edge length). Blade placement relative to test chart placement As you can see above, I have drawn, in red, roughly where I placed the blade in the camera viewfinder. The little cyan numbers are the MTF50 resolution measurements on every target edge in the chart, as calculated by MTFMapper. A potential upside to using a blade edge is that you can focus on it wherever you place it in the frame. For lenses with field curvature, this will probably get you a higher resolution measurement than a chart that you focused in the frame center. A downside to using the blade is that you get a single measurement from your photograph, versus over 700 measurements from a chart like that shown above. Comparison Results I decided to use my Nikkor 24-120mm f/4 S lens on a Nikon Z9 camera to compare resolution test results between my printed chart and the blade. The lens was zoomed to 34.5mm (the lens barrel marking was 35mm). MTFMapper was configured to provide resolution measurements in units of MTF50 line pairs per millimeter. I like these measurement units, since you get the same answer with any size of camera sensor. MTFMapper measurement of blade edge: 34.5mm, f/4 Pretty comparable measurements between the blade edge and the chart! Summary Although none of this absolutely proves that I’m getting correct lens resolution measurements from my printed test chart, it shows that the measurements from these two very different test methods are in pretty close agreement. 
A site that I use to compare my own lens resolution results against the same lens models is Lenstip, found here. They also measure resolution in units of MTF50 lp/mm, and our results are typically very comparable (when using similar camera sensors). Lens sample variation is a real thing, so you should never expect the exact same results from any two copies of a lens.
- Nikkor Z 24-120mm f/4 S Lens Review
Nikkor Z 24-120mm f/4 S, mounted on Nikon Z9 I have heard so many good things about this lens for so long that I finally got one. Since I already have the very professional AF-S Nikkor 24-70 f/2.8 ED VR, it would seem a mostly redundant acquisition. Yes and no. I have a dedicated infrared F-mount camera, and that 24-70 is nearly always parked on it. Since the Z 24-120 has a different mount, that IR camera won’t ever see this lens; pity. Before I digress too far, let’s get back to the subject at hand: the 24-120 f/4 Z lens. This is a true walk-about lens which goes wide, telephoto, AND macro. I’ll show some shots later that prove the point. By definition, 5X zooms produce images that are crap. Right? Not this one. It beats my 24-70 f/2.8 F-mount lens at every f-stop and focal length for resolution. It is even slightly sharper than my Micro-Nikkor 105mm f/2.8 AF-S VR, at least in the central part of the image. I got this Z lens for HALF of what the 24-70 f/2.8 AF-S goes for these days (I actually paid $2,400 for that lens). So what’s missing here? 24mm setting Note that you can see the entire range of zoom settings above (about a quarter turn). I included the lens hood; you should, too. 120mm setting Notice the two telescoping pieces that make up the zooming portion of the lens. Nikon claims that the lens is nonetheless entirely weather/dust sealed. Shown: lens controls Shown above, left-to-right: Focus ring Zoom ring Lens function button (L-Fn) Lens control ring A/M switch What’s NOT included with this lens? This Z lens has no VR, but my Z9 and Z8 bodies both have IBIS; for me this is a “don’t care”. For you, it may be important. This lens has no focus scale. That messes up my ability to measure focus speed by filming a slo-mo video of the focus scale in motion, but otherwise it’s a “don’t care”. No quality lens case. It comes with a nearly useless flimsy pouch without even a drawstring. 
I have several really good lens cases, so again I don’t care. Lens Specifications
· Weight: 1.39 lbs. (630g); the 24-70 f/2.8 F-mount is a huge 1067g for comparison
· Dual stepping motors for internal autofocus (almost perfectly silent and super fast)
· 77mm filter threads
· ARNEO/Nano Crystal/fluorine coatings: repel dirt; very little flare
· 9 rounded blades, electronic aperture (circular out-of-focus lights)
· 3 ED glass elements, 3 aspherics, 1 combined ED/aspherical element. Sharp!
· Total lens elements: 16, in 13 groups
· Constant f/4 aperture at all focal lengths. Minimum aperture f/22
· 1 programmable lens function button, e.g. “AF-ON”
· A/M focus switch. “M” will stop autofocus behavior
· 1 programmable lens control ring, e.g. a real aperture control!
· Metal lens mount, mostly high-quality plastic exterior
· Moisture/dust sealed. No Nikon refunds for H2O damage…
· Minimum focus: 35cm/1.15 ft. (0.42X at 120mm, measured): near-macro!
· Length: 118mm; diameter: 84mm
· HB-102 plastic petal bayonet lens hood
General Impressions The zoom ring on this lens is stiffer than on any lens I have ever used. Zooming to the nearest millimeter is a challenge. It takes about a quarter turn to go through the whole zoom range, so you can zoom very quickly. A sort of giveth and taketh away. The dual-telescoping zoom action has NO wiggle. This lack of wiggle is necessary to obtain the very high resolution at all focal lengths, and is probably why the zoom action is stiff. The astonishing close-focus distance and 0.42X magnification at 120mm have enabled me to mostly abandon my 105mm macro lens. I rarely need to get all the way down to 1.0X magnification, and the super high resolution allows for significant cropping. The intensely fast autofocus lets me get macro action shots that I’d miss with my slower-focusing 105mm Micro-Nikkor. With my usual editors (Lightroom, Capture One 23, ON1 Photo Raw 2023), the photos don’t show any vignetting or image distortion. 
The information embedded in the raw images (I’m using either the “High Efficiency Raw” or DNG format) includes distortion-correction and vignetting-correction information, and the editors auto-correct the images without asking. For my old version of Lightroom, I use the latest Adobe DNG Converter to convert my Nikon Z8 raw files to DNG and still get automatic distortion correction. Focus Speed Due to the lack of a focus scale and the internal focusing, I haven’t figured out an accurate way of providing actual focus speed numbers. Suffice it to say that those dual stepping motors make focus really fast. A crude test I performed involved focusing at minimum distance on a close subject at a focal length of 120mm, starting a video recording at 120 fps, and then pressing the AF button while panning to focus on a distant scene. When I reviewed the video, the focus change took roughly 57 frames, which is 0.48 seconds (in sunlight). Keep in mind that this lens focuses closer than most, so its minimum-to-maximum focus range is longer than on most lenses. For normal photography, you'll find the focus to be blazingly fast. Sample Shots 120mm f/4 1/800s bokeh example Bokeh circles can show slight edge brightness, and highlights at the frame edges become non-round. Even the $8000 Nikkor 58mm f/0.95 Noct has non-round highlights at the frame edges, so don’t use that as a pass/fail test. After you stop down the aperture, the edge highlights become circular, although of course they’re smaller. 120mm f/4 1/800s pixel-level crop from the shot above Note how sharp those feathers and eye reflections are in this 100% crop. This lens is sharp. I haven’t seen any “onion skin” in the highlights, which is something that really bugs me when it’s present in photos. Nikon Z8, 120mm f/5.6 crop from a close-up To my eye, this is as sharp as a good macro lens. I cropped some, but the resolution really holds up well. 24mm f/9 1/400s 24mm f/9 1/1600s 120mm f/7.1 1/1000s. 
Packard hood ornament 24mm f/7.1 1/500s (license plate altered) 80mm f/8 1/400s converted to black and white Infrared Performance For those of you interested in infrared photography, I tried out this lens with an 850nm infrared filter. This is very deep infrared. The lens passed with flying colors (except that ‘color’ is undefined in this part of the spectrum). No dreaded hotspots seen. 850nm deep infrared, 120mm f/5.6 180 seconds I had to use an infrared filter over the lens, since this Z lens won’t mount onto my F-mount infrared camera. The exposure was super long, because the image sensor cover of the Z9 used in this test really screens out infrared. Lens Optical Characteristics 24mm f/4.0 “hidden” barrel distortion and vignetting I was able to “uncover” the actual optical distortion of this lens by converting the raw file into the Adobe DNG format and then using my image-analysis software, MTFMapper. As shown above, there’s hefty barrel distortion at 24mm. As you’ll see later, there’s pretty evident pincushion distortion at 120mm. Again, you’ll probably never see this distortion in your photos, since most photo editors will automatically remove it. Relying on photo editors to remove optical distortion is getting more common all the time, and isn’t necessarily bad. Lenses would have to be bigger, heavier, more expensive, and more complicated to totally remove this distortion purely through the glass. Lens designers instead embed the mathematics of the geometry corrections into the photo file (a correction profile), so that editors can straighten curves and even compensate for transmission loss (vignetting). It only gets ugly when you’re using an image editor that doesn’t understand this embedded information. 
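The geometry side of such a correction profile can be sketched in a few lines. This uses a single-term radial polynomial with a made-up coefficient; Nikon’s actual embedded profile format is more involved, so treat this purely as an illustration of the idea:

```python
# Sketch of how an editor might apply an embedded geometry correction.
# The one-term radial model and the k1 value are illustrative only;
# real correction profiles are more complex.

def undistort(x, y, k1):
    """Map a distorted normalized coordinate toward its ideal position
    using a single-term radial polynomial: r_u = r_d * (1 + k1 * r_d^2)."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale

# Barrel distortion pulls points toward the center, so the correction
# (k1 > 0 here) pushes them back out; the shift grows toward the corners.
print(undistort(0.1, 0.0, 0.05))  # near center: almost unchanged
print(undistort(0.9, 0.6, 0.05))  # near a corner: noticeably shifted
```

Because the correction is just per-pixel math driven by a few stored coefficients, shipping it in the file and letting the editor apply it is far cheaper than adding glass to the lens.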
Resolution, Contrast, and Lateral Chromatic Aberration I use the MTFMapper program to perform resolution tests, which you can get here: https://sourceforge.net/projects/mtfmapper/ This image-analysis program was used to measure the Z-cam lenses on board the Mars rover Perseverance. My resolution chart measures 40” X 56”, to allow a realistic working distance. My tests were done using unsharpened raw-format shots on a 45.7 MP Nikon Z8. The contrast plots are measured contrast plots, not the theoretical ones that lens manufacturers put out; they include the camera sensor effects, since you’re going to be using the lens with a real sensor. The MTF50 resolution plots, measured in line pairs per millimeter, are shown in both the sagittal and meridional directions across the whole field of view. Resolution is a 2-dimensional thing, not a simple single number. I stop measuring after f/16, because diffraction destroys the resolution. MTF50 lp/mm resolution, 24mm f/4.0 Peak resolution, central = 76.4 lp/mm (3652 lines per picture height) Peak resolution, worst edge = 57.9 (2768 l/ph) Peak resolution, worst corner = 43.2 (2065 l/ph) MTF Contrast plot, 24mm f/4.0 There’s definite astigmatism here, since the sagittal and meridional lines don’t overlap very closely as you get further from the lens center. The meridional (tangential) direction has less contrast and resolution than the sagittal (wheel-spokes) direction for this lens at most apertures and focal lengths, until the lens is stopped down, typically beyond f/11. Lateral chromatic aberration, f/4 The worst (blue vs. green) chromatic aberration is about -5.7 microns. The sensor has 4.35 micron pixels, so that’s 1.3 pixels worst case. 
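The unit conversions above are easy to verify. A line pair is two lines, so lp/mm times twice the sensor height gives lines per picture height; the assumed 23.9 mm sensor height and 4.35-micron pixel pitch are the standard full-frame Z8 figures:

```python
# Checking the unit conversions in the text. Assumptions: Nikon Z8
# sensor height 23.9 mm, pixel pitch 4.35 microns (full-frame specs).

SENSOR_HEIGHT_MM = 23.9
PIXEL_PITCH_UM = 4.35

def lp_mm_to_lph(lp_mm):
    """MTF50 in line pairs/mm -> lines per picture height.
    One line pair is two lines, so multiply by 2 x sensor height."""
    return round(lp_mm * 2 * SENSOR_HEIGHT_MM)

print(lp_mm_to_lph(76.4))  # central peak at 24mm f/4 -> 3652 l/ph
print(lp_mm_to_lph(57.9))  # worst edge -> 2768 l/ph
print(lp_mm_to_lph(43.2))  # worst corner -> 2065 l/ph

# Lateral CA: 5.7 microns of blue-vs-green shift in 4.35-micron pixels.
print(round(5.7 / PIXEL_PITCH_UM, 1))  # -> 1.3 pixels
```

The lp/mm figures are sensor-size independent, which is exactly why they are the preferred unit here; the l/ph numbers only match across cameras with the same sensor height.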
MTF50 lp/mm resolution, 24mm f/5.6 MTF50 lp/mm resolution, 24mm f/8.0 MTF50 lp/mm resolution, 24mm f/11.0 MTF50 lp/mm resolution, 24mm f/16.0 MTF50 lp/mm resolution, 34.5mm f/4.0 Peak resolution, central = 72.7 lp/mm (3475 l/ph) Peak resolution, worst edge = 39.1 (1869 l/ph) Peak resolution, worst corner = 32.4 (1549 l/ph) (Like I said, zooming to an exact millimeter is very difficult on this lens.) MTF50 lp/mm resolution, 34.5mm f/5.6 MTF50 lp/mm resolution, 34.5mm f/8.0 MTF50 lp/mm resolution, 34.5mm f/11.0 MTF50 lp/mm resolution, 34.5mm f/16.0 MTF50 lp/mm resolution, 50mm f/4.0 Peak resolution, central = 77.4 lp/mm (3700 l/ph) Peak resolution, worst edge = 50.5 (2414 l/ph) Peak resolution, worst corner = 43.2 (2065 l/ph) MTF50 lp/mm resolution, 50mm f/5.6 MTF50 lp/mm resolution, 50mm f/8 MTF50 lp/mm resolution, 50mm f/11 MTF50 lp/mm resolution, 50mm f/16 MTF50 lp/mm resolution, 70mm f/4 Peak resolution, central = 65.2 lp/mm (3117 l/ph) Peak resolution, worst edge = 41 (1960 l/ph) Peak resolution, worst corner = 38.3 (1831 l/ph) MTF50 lp/mm resolution, 70mm f/5.6 MTF50 lp/mm resolution, 70mm f/8 MTF50 lp/mm resolution, 70mm f/11 MTF50 lp/mm resolution, 70mm f/16 MTF50 lp/mm resolution, 120mm f/4 Peak resolution, central = 66.5 lp/mm (3179 l/ph) Peak resolution, worst edge = 42.7 (2041 l/ph) Peak resolution, worst corner = 41.2 (1969 l/ph) 120mm f/4.0 “hidden” pincushion distortion and vignetting MTF Contrast plot, 120mm f/4.0 Lateral chromatic aberration, f/4 MTF50 lp/mm resolution, 120mm f/5.6 MTF50 lp/mm resolution, 120mm f/8 MTF Contrast plot, 120mm f/8 MTF50 lp/mm resolution, 120mm f/11 MTF50 lp/mm resolution, 120mm f/16 Summary It never occurred to me that I would use this lens for macro photography. The 0.42X magnification, good working distance, and fast focus has made it my go-to for most macro work. Most close-up photography doesn't really need to go all the way down to 1.0X. This lens really does compete with many prime lenses. 
It’s in the same ballpark for sharpness, and the bokeh isn’t that bad. And you just can’t beat being able to zoom over such a large and useful range. I hadn’t fully appreciated how much better it is than my 24-70 f/2.8 zoom for sharpness, focus speed, focal range, and close focus. It would of course be nice to have the same f/2.8 aperture, but you can't have everything. Telephoto zooms are famous for being much worse at their maximum focal length. Not this guy. For those times when you can take only a single lens on a trip, this is it. It will handle everything except the really big glass required for wildlife, and it’s maybe a little long for architectural interiors. The Z lenses, especially the ‘S’ line, have a reputation for being overpriced, but here’s a case where what you get is a real bargain. I got it along with my Z8, so I got an even better bargain. Sample Shots 95mm f/8 1/100s 96mm f/5.6 1/400s 120mm f/8 1/2000s











