
  • Nikon D500 Focus Bug

I was exploring some D500 focus options when I discovered something was definitely amiss. It turns out that I’m not alone. Nikon has, for many years, offered auto-focus options that help you keep the subject in focus, even when the subject leaves your selected focus sensor. One set of focus options, in continuous-focus mode, is called “dynamic-area AF”. These modes have names such as [9-point, 21-point, 39-point] and [25-point, 72-point, 153-point], depending upon the camera model.

Mode selection

The general idea is to choose a bigger-numbered mode when the subject gets harder to track. For a very erratically-moving subject, you’d want to use the D500 “153-point” dynamic-area AF mode, since it covers nearly the entire field of view. Right? Wrong. If you choose the 153-point mode and your subject moves off of your selected focus sensor, the D500 (and the D5) will simply re-focus on the background as soon as the time has elapsed according to your selection of the “Blocked Shot AF Response” and “Subject Motion” values in the “A3 Custom Settings” menu. This is an obvious bug!

It’s easy to test, by simply pointing the lens slightly off-target while continuously focusing. The focus will shift to the background almost immediately, ruining your shot. The selected in-viewfinder focus sensor never updates in dynamic-area mode, so you have to view the photograph to see which sensor was used (the little red square). You’ll find that the camera isn’t switching focus sensors to keep up with the subject. I’m seeing the same problem with “D72”; it won’t track the subject when it moves off of the selected focus sensor but is still within the ‘box’ of 72 sensors. I always use the back button for focus (the AF-ON button, or whatever button you assign focus to). I haven't seen this focus bug on any other Nikon models, so it appears to affect only the D5/D500. I am using firmware version 1.13 (the latest to date).

I found out that Nikon was informed of this problem many months ago by multiple users, but each time it responded as if it had never heard of the problem. Here are some of the links on this bug I found from other users, after I decided to go on a web search: DpReview, FredMiranda. Note that in the “FredMiranda” link, the sample shots mention “d513” when he really means “d153”. A simple little dyslexia-type mistake. Steve used the Nikon D5 to demonstrate the exact same bug I’m seeing with the D500. We can only hope that enough complaints will shame Nikon into finally fixing this firmware bug. By the way, their other camera models don’t have this bug.

Sample shot by Steve Perry (from the Fred Miranda link above). The D5 has the same “d153” focus bug.

The best substitute for 153-point

3D-tracking is the closest (functional) option for following subjects all around your viewfinder. You start by placing your desired starting focus sensor over the subject, as usual (in continuous-focus mode). Then you start focusing (presumably with the AF-ON button). 3D-tracking will use color information and then show you the automatically-selected focus point anywhere in the frame as the subject jumps around in your viewfinder image. If your subject moves quickly, Nikon recommends you also set “3D-tracking watch area” to “Wide” in the “Custom Settings” (pencil) A5 menu. For quick response, Nikon also has the “Custom Settings” A3 menu to set both the “Blocked Shot AF Response” and “Subject Motion” values.
For these settings, a lower number means quicker reaction to changes, although note that in 3D-tracking, 1=2=3 for the “Blocked Shot AF Response” setting. If your subject is the same color as its background, then this mode will probably fail. On the D5/D500, this 3D mode has the advantage of actually showing you the selected focus sensor as it tracks the subject around the frame. I think their dynamic-area mode should do the same thing, since that would give you real-time feedback about what it is doing (or not doing).

Update 9-24-2017

Steve Perry (mentioned above and in the Fred Miranda link) has been pursuing this issue, and wrote more updates about the focus problem. He thinks that Nikon fundamentally changed how dynamic-area AF works on the D5/D500, but didn't document it. Rather than paraphrase Steve, I'm including his comments (from page 9 of the Fred Miranda thread) below. Steve attempted to reverse-engineer what the focus algorithm must be doing.

OK, I think I finally have an answer. Before I lay it out though, I wanted to thank everyone who helped by posting to this thread and PM’ing me. An extra-special thanks to Snapsy and Keith for their help on this. Literally couldn’t have done it without them. So, here’s what I think is happening; not sure if it’s 100% correct or not, but it seems to fit the facts and behaviors as we know them. Also, I reserve the right to revise this as time goes on : )

First, we know for sure that the D5/D500’s Dynamic area is not the same as the previous generation of bodies, no question there. In the past, you would typically acquire the target with the primary AF point, and then if the target slipped off that point, another AF point would jump in and take over – and it would track like that indefinitely. The new system on the other hand seems to let go of the target and go for whatever is under the primary AF point – almost like Dynamic wasn’t even there. This tends to appear broken since, when viewing the images in View NX-i or on the back of the camera, the system never seems to move the AF point – it always shows the selected point. (In the past, you could see the point it used.) According to the EXIF data though, it actually is selecting different AF points as the subject leaves the primary point. However, it’s reporting it like Group AF does – just showing you where the selected area (point) is and not the actual AF point that was used. As Snapsy said, you can verify this with ExifTool. The camera is unquestionably selecting different AF points as needed.

So, after looking at far too many lines of EXIF code and finally seeing a pattern, here’s how the new system works (I think): It locks on with the primary AF point and begins tracking. If the subject falls away from the primary AF point, the system will switch to one of the auxiliary points in the selected Dynamic area. However, unlike the old system, the new system has a bias for the primary AF point. After a brief delay, the camera tries the primary AF point again. If there is a good target under the primary AF point, it will go for that. If there is not a good target under the primary point, it will go back to using the auxiliary points. It will continue to go back and forth like that until it can get a lock with the primary AF point again – or you stop focusing.

Two notes:

Note that it MUST be a good AF target for the system to switch – just a target that it can technically focus on isn’t good enough. I have tested this with poor targets the camera could just barely focus on.
While the camera could technically get a lock, it would stutter a little trying to keep it. I would then switch to Dynamic and focus on a printed box with the poor AF target in the background. When I moved my primary AF point over the poor AF target, it would stay with the first one indefinitely. Field tests also seem to confirm that it needs a good target in order to switch points – sadly, there are a LOT of those out there.

Delay time – In the past, the camera would not invoke the delay time (Blocked AF Response) specified under A3 unless the target had completely left the AF area. However, that’s not the case now. The camera will start the countdown as soon as the target leaves the primary AF point (as a poster noted above) and use the auxiliary points until the time runs out – at which point it will try again with the primary AF point.

Usage

So, if this is the new normal, we have to adjust to the change. For some people, this system is actually an advantage; for others, not so much. The advantage favors more experienced shooters. In the past with Dynamic, if the system switched to a different AF point, it would tend to stick with it – but sometimes that’s a problem. With the old system, if I’m photographing a bird coming at me at a 45-degree angle, I would go for his head. However, if I accidentally slide the primary point off, the system would pick a new AF point. If it decided to go to the spot on the bird down by where the wing meets the body, it was an issue. The camera would lock on and just stick there until you refocused – even if for the rest of the sequence you kept the primary AF point on his eye. With the new system, it may still move to the wing, but if you keep the AF point on the eye, the camera will get the idea and switch back to it.

The downside of course is that if you really are having a difficult time tracking, in the past Dynamic would really help. Just get the initial lock and fire away. Even if the primary AF point never revisited the subject, it would continue to track and not jump to the background. IMO this is the better method – less experienced shooters could use wider areas and more experienced shooters would use smaller areas to restrict where the camera could focus.

So, the bottom line is this – with the new system, you need to do your best to keep the primary AF point on your subject. If you’re having a hard time, set the delay under A3 to 4 or 5. However, keep in mind even at “5” the delay is short. However, just knowing that it’s critical to keep the AF point on target may be enough to help some shooters. -- Steve Perry

There was this response from Nikon, after more than a YEAR:

I apologize for the delay, and for the confusion. According to our design group at Nikon Corp, the Dynamic Area AF function has been enhanced with the newest AF sensor, particularly for subjects moving toward or away from the camera. Dynamic Area AF (9, 53, 72, or 153 point) does not track the subject; however, it will expand the area in which the subject will remain focused should it BRIEFLY leave the initial focus point. If the subject leaves the selected number of AF points, then the camera will refocus. If the subject leaves the initial focus area and enough time has lapsed before the subject is recentered, the camera will refocus. If peripheral data from the initial target area has enough of a difference from the initial target (unspecified), then the camera may refocus as well. This is not the intent of the function, but it may happen at times.
Choose the numbered area based on your ability to keep the initial AF area on the subject, and also the expected movement path, and always try to follow and center your subject during the burst. If the subject does refocus, and it may, then either let go of the button, reacquire the subject in the center AF area, and continue firing, or, depending on the quality of the subject (the ability of the AF sensor to grab and hold), just keep firing and the lens will refocus on its own. Success is dependent on a combination of subject contrast, user skill, and the speed of the user and the subject. If you want to allow the AF area to track your subject around the frame, then select 3D or Auto. 3D will follow the subject around using a single AF point and Auto will use several points. The intended performance improvement, again, was for subjects moving toward or away from the camera using information from surrounding points; that is one area where the enhancements were noted during testing with this new system. (See more on this response here.)

Conclusion

It appears that Nikon chose to change the way dynamic-area focus works, but the majority of photographers who have used it don't like it (including me). You need to use ExifTool to query what the focus points are doing (Capture NX-D, for instance, doesn't give you a clue). There is no official Nikon documentation noting the change or how to cope with it, as of this writing. I'd recommend that you try setting the "Blocked Shot AF Response" to the longest allowable value (5), so that the camera waits a little longer before it picks a different focus sensor. Most photographers seem to prefer "Group Autofocus", although be aware that this mode will always pick the nearest subject.
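If you want to check your own files the way Snapsy and Steve did, ExifTool can be scripted. A minimal sketch (the file name is a placeholder, and maker-note tag names can vary by model, so dump everything first if these come up empty):

```python
import subprocess

# Dump the AF-point tags from a NEF with ExifTool (must be on the PATH).
# Tag names vary by camera model; to discover what your files carry, run:
#   exiftool -G1 -a "-*AF*" DSC_0001.NEF
result = subprocess.run(
    ["exiftool", "-PrimaryAFPoint", "-AFPointsUsed", "-AFPointsInFocus",
     "DSC_0001.NEF"],  # placeholder file name
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```

#review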

  • UniWB and ETTR: the Whole Recipe

UniWB (unitary white balance) and ExposeToTheRight. What is it and why should you care? This is all about jamming the maximum range of light into your raw pictures and being confident you aren’t getting any color channel blowouts. Think maximum possible quality. The “Uni” part of UniWB alludes to uniform signal gain being applied to the red, green, and blue pixels on your sensor. The optimal gain (or signal boost) is ‘no’ gain, or a multiplier of 1. If no gain is applied to your pictures, then the histogram in your camera always speaks the “truth”. If you no longer need to leave a little slop in your exposure to avoid blow-outs, because you can now believe your histogram, that means you can get the maximum light into your shadows.

So, there must be a catch. There’s always a catch. Here’s the catch: your pictures all look green. An ugly, sickly green. But not to worry; the green is curable. It's just not curable inside your camera. Here’s the deal. I’m going to show you how you can get set up and start using UniWB at no cost. Except maybe a little sweat. I verified these procedures with a D610, D500, and D7000. Canon (or any other brand) procedures would be nearly identical. These procedures only apply to "raw" photos, though.

What do I need?

• A grey card. So I lied already. You need to get a grey card.
• A Windows computer and monitor. You don’t even need a calibrated monitor.
• “Exiftool”, downloaded for free. Or something else that can look at exif data.
• The photo editor you normally use to process NEF (raw) photos.
• Windows Paint, or something that can draw and display a colored rectangle full-screen.

Make a custom color on your computer monitor

1. Open up Windows Paint.
2. Click “Resize”, select “Pixels”, de-select “Maintain Aspect Ratio”.
3. Assign something like Horizontal 1920, Vertical 1080, or whatever your monitor size is.
4. Click “Color 1” (foreground color).
5. Click “Edit Colors”, then select “Color Solid”.
6. Set “Color Solid” to Red 128, Green 64, Blue 128, then “OK”.
7. Click “Select All”.
8. Click “Fill” (the bucket icon).
9. Left-click inside the big white rectangle to fill it with your new Color Solid.
10. Click “View”, “Full Screen”.

You now have your monitor displaying a single (pink-ish) color across your whole screen. You will need to do this in a darkened room, so that you don’t see your own reflection. Later, you will probably need to repeat these steps, but change the values of the “Red” and “Blue” colored rectangle as needed in step (6). We will leave the Green value at 64 throughout the tests. Believe it or not, this pink-ish color is what your camera sensor perceives as “neutral”; the R,G,B gain values applied to it should all land near a value of 1. For my monitor, the rectangle on the right is nearly "perfect UniWB color" for my cameras.

Set your 'White Balance Preset' to the color displayed on your monitor

Put a long-ish focal length lens on your camera, in case your monitor screen distorts brightness/color when you get too close to it. De-focus your lens, and select “release priority” so it will take an out-of-focus picture. With your camera in “Raw” mode, set your white balance selection to “preset” (hold the WB button and turn the dial to get ‘PRE’). Select which preset number you want with the other dial, still holding the WB button. Fill your viewfinder with the screen color, and hold the “WB” button until “PRE” flashes in your viewfinder. Press the shutter. You need to see “GD” displayed to know you have a “good” white balance preset. If you get “no GD”, you need to retry. (Too dim?)

Now, go outside and take one or more interesting and tasteful photographs, using this new white balance preset. You’ll notice that your pictures are that sickly green I mentioned earlier.

Measure the Red and Blue color channels in your photographs

Take the memory card out of your camera and copy the photo(s) to your computer for analysis. Drag a green photo onto “exiftool” to get a text file of the exif data.

Exiftool feedback for Red, Blue sensor channels

In the picture above, the Red feedback is about 0.95 (less than 1). The Blue feedback is about 0.98, also less than 1. If I wanted to try to get closer to the ideal 1.0 gain, then I would DECREASE the Red value in the colored rectangle and therefore force a BIGGER red gain for the next test. If the Red gain had instead been larger than 1.0, I would instead use a larger Red value in the colored rectangle for my next attempt. Similarly, the Blue gain is smaller than 1.0, so in the next attempt I would decrease the Blue in the colored rectangle, forcing the gain to get larger. To keep it simple, I named each rectangle I created with the RGB values used. In one of my experiments, the following shows the steps I took:

(Start): Rectangle R128G64B128: exif feedback has R gain = 1.38818, B gain = 0.64941
(Need bigger R, smaller B): Rectangle R140G64B118: exif feedback has R gain = 1.15869, B gain = 0.73291
(Need bigger R, smaller B): Rectangle R145G64B100: exif feedback has R gain = 1.0249, B gain = 0.881347
(Need bigger R, smaller B): Rectangle R147G64B80: exif feedback has R gain = 0.9907, B gain = 1.13183
(R is good, need larger B): Rectangle R147G64B86: exif feedback has R gain = 0.9907, B gain = 1.0449

For each iteration shown above, I had to go back into “Paint” and make a new rectangle with the adjusted R, B values. While displaying the new rectangle full-screen, I once again set the white balance preset and took another photo for analysis. As the saying goes, rinse and repeat. Once I have a calibrated UniWB with my camera, I can grab a different camera and perform a white balance preset using the screen rectangle color as-is. I have found that the different Nikon cameras are close enough in color response that I don’t need to iterate any further with them. I suspect the same is true of other camera brands. In other words, it was “one and done”. Trust but verify, though.

Use your photo editor and grey card to get the correct color

Now that your exif feedback is within 5% (R, B range from 0.95 to 1.05), you need to photograph that grey card I told you about, in the SAME light as your tasteful photos using this “good” white balance preset.

Photo of grey card using the UniWB preset

You can see above how the “grey” card photo looks anything but grey. The UniWB preset feedback in my photo editor (Capture NX2 here) shows 4938K. The histogram peaks aren’t anywhere close to being on top of each other. To get the grey card to look correct, I just move the “Fine Adjustment” color slider from its 4938K to where the histogram peaks are on top of each other. As shown below in Capture NX-D, a gray-point "eyedropper" picker can also be used.

Color adjusted to 6457K gets the histogram R,G,B peaks to coincide

By moving my color slider until the R,G,B peaks fully overlap, I get the correct color temperature. Now, the grey card looks grey again. I note this correct color temperature for later use.

Capture NX-D correcting white balance

I thought I’d try to see if Capture NX-D could also fix the white balance.
It could, although I used a slightly different mix of “tint” and color temperature to get the best-looking histogram. Capture NX-D will also let you do batch processing to convert many files at once. The “eye dropper” gray point color picker works here, too.

Capture NX-D

When you have a nice, continuous light spectrum in your photo, you can use the simple technique of picking a spot on the grey card with the "gray point eye dropper". This is a fast way to get a good white balance. Most photo-editing programs give you a similar option. The only downside is that this 'eye dropper' technique doesn't tell you what color temperature is being used!

ETTR (Expose To The Right)

The photo above was taken using the "correct" UniWB preset, at 4938K. I could use the camera histogram and note where the now-accurate colors are located. The lighting used here matches the lighting where I photographed my grey card. I can now adjust exposure until I get the brightest color (green here) against the right-hand side of the histogram, i.e. ETTR. I can expose with confidence that I'm not getting any blown color channels. I now know from my analysis of the grey card, which was also shot at the 4938K preset, how to adjust this green photo to make its color balance correct during post-processing.

Adjusted photo

All I have to do now is set the color temperature to match the corrected grey card shot, which in this case was 6457K. If I had hundreds of pictures that all needed this color adjustment, then I’d create a batch file and run it against all of the pictures.

Conclusion

The above procedures should be complete enough that you now know how to get a correct UniWB setting for your own camera(s). From there, you can make better use of your camera’s histogram for fine-tuning exposure, via ETTR. Although I'm a Windows user, the same principles apply on Apple systems, of course. You don’t need a calibrated monitor to use these techniques, but you may be forced to iterate through more steps to finally locate the proper UniWB setting. I realize it feels a bit unsatisfying to constantly see green pictures on your camera LCD screen. But at least you can feel confident that your camera histogram will only show you blown color channels that are truly overexposed. You can finally get the absolute maximum amount of light into your shadows. Getting rid of that green is really quite simple, as long as you can wait to process those pictures on your computer.
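If you'd like to take some of the guesswork out of the rinse-and-repeat loop above, here's a minimal sketch that estimates the next rectangle value for a channel from its measured gain. The gamma-2.2 monitor model is my assumption, not part of the original procedure; treat the output as a starting guess, not gospel:

```python
GAMMA = 2.2  # assumed monitor gamma; an assumption, not from the procedure above

def next_rectangle(value, gain, gamma=GAMMA):
    """Suggest the next Paint value for one channel from its measured WB gain.

    A gain above 1.0 means the channel is too dim on the sensor, so the
    screen value should rise; below 1.0 means it should fall. The gamma
    term models the monitor's nonlinear response.
    """
    return max(0, min(255, round(value * gain ** (1.0 / gamma))))

# First iteration from the experiment above: rectangle R128 G64 B128 gave
# R gain = 1.38818 and B gain = 0.64941.
print(next_rectangle(128, 1.38818))  # ~149; the iterations converged on R147
print(next_rectangle(128, 0.64941))  # ~105; the iterations converged on B86
```

Whatever the model suggests, keep verifying with the exif feedback; the camera's response is what counts.

#howto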

  • How to make a crowd disappear in broad daylight

Have you ever been to a popular tourist attraction and simply can’t get a photograph devoid of people? Or you want one of those cool landscape shots where the ocean waves turn into a cloud of mist, even when it’s high noon and sunny? The most obvious answer for most photographers trying shots like these is to use a really strong neutral density filter. But what happens when you only brought along your lens with the 105 mm diameter filter thread and you never got around to buying that $400 filter (they really can cost this much)? Or you at least remembered to bring your tripod, but forgot your arsenal of filters? Or you brought that cool super-wide lens that doesn’t even let you mount a filter?

There’s still a way to get that image you crave. You don’t even have to sacrifice using your sharpest aperture, or be forced to put a filter over a long lens and ruin its resolution. The solution to this conundrum is image stacking. For me, I turn to the same free “CombineZM” program that I use for focus-stacking my macro shots. (There’s also a version called “CombineZP”.) Yes, you probably need a tripod, and yes, you need to take several shots of the same scene. You can download this program from a variety of sources, such as here.

The technique is pretty simple. First, take several shots of the scene without moving the camera. To get rid of that crowd, take a series of shots where people aren’t standing in the same place in each shot. Don’t use auto-focus or change exposure between shots. Then:

• Process your shots into a format such as TIF or JPG (CombineZM doesn’t like raw format).
• In CombineZM, select File | New.
• Browse to your series of pictures and select which ones you want to combine (hold the shift key to select a range). The pictures should all be in the same folder.
• Select Stack | Replace Groups, Average.

The CombineZM program expects your exposure to be correct in each frame for this selected stacking option. It will “merge” all the selected photos into one shot, and will also reduce any image noise that might be present. In fact, some people use this technique purely for the noise reduction, primarily in the shadows. For the best results, I’d recommend that you use at least 10 shots. If you use 10 pictures, then a person appearing in one of those shots will only contribute 1/10 to the final picture (90% transparent).

5-shot “average” stack. Still looks a bit ghostly.

If a ghost-like shot is what you’re after, that is of course completely possible. You also might want to get a model to stand still while everyone else is moving around, leaving the model standing all alone. It’s all about getting creative, isn’t it?
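For the scripting-inclined, the “Replace Groups, Average” operation is essentially a per-pixel mean. Here's a minimal sketch with numpy and Pillow (file names are placeholders); the median line, commented out, is my addition rather than a CombineZM feature, and it rejects moving people outright instead of fading them:

```python
import glob

import numpy as np
from PIL import Image

# Average a series of tripod-aligned frames; a person present in only
# one of N frames contributes just 1/N of each pixel they cover.
files = sorted(glob.glob("scene_*.jpg"))  # placeholder file names
stack = np.stack([np.asarray(Image.open(f), dtype=np.float64) for f in files])

mean = stack.mean(axis=0)            # the "Average" result, ghosts and all
# median = np.median(stack, axis=0)  # a median rejects transients outright

Image.fromarray(mean.astype(np.uint8)).save("averaged.jpg")
```

#howto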

  • How to Correct an LED “White” Light Source

I really love my LED ring light, which I use for macro photography. It’s just one example of photographic lighting based upon white-LED technology. These lights are great, because they don’t heat up (no wilting flowers), they last a really long time, they’re small, and they consume much less energy. But there’s a slight downside: they put out kind of weird light. None of my cameras can quite figure out a good "auto" white balance for them. The reason for this problem is the blue part of the light spectrum. Most, if not all, ‘white’ LED lights put out an unbalanced spectrum that takes a special white-balancing procedure.

“White” light LED typical spectrum. Lots of blue, little red.

Camera Auto White Balance just doesn’t cut it

My camera’s “auto white balance” isn’t smart enough to get the color of a grey card correct under the white LED light.

Using the Capture NX-D gray point “eye dropper” isn’t enough to fix it. Still too blue.

It’s tempting to just use the little “gray point” eye dropper to fix the color, but this isn’t quite enough. The histogram above shows how there’s still too much blue.

Manually increasing the color temperature helps, but it’s still too blue.

Additional blue adjustment (10000K plus Levels and Curves adjust) finally looks correct.

Capture NX2: adjust color temperature and adjust color balance as well.

Capture NX2 with 10000K and Blue -40 also gets me the correct grey.

As shown in the pictures above, it’s the norm for white LED lights to put out excessive blue light while skimping on the red. The camera's single white-balance adjustment isn’t able to fix the problem, and the image editor's simple gray-point color picker still leaves too much blue in the shot. In an image editor, however, you can fix the problem with a double adjustment. For this LED light, I start with a high color temperature (roughly 10,000K) to align the red and green channels. (Use a grey card image, of course!) Second, shift the blue peak until it lands on top of the red and green peaks. It probably won't be perfect, but it will be plenty close to look good.

There are so many occasions where you need to adjust your white balance. Please, please invest in a grey card. They’re really dirt cheap and can greatly improve the look of your pictures, especially when you get into environments with unusual lighting. If I were smart, I’d save these color adjustments as a batch file, so that I could correct all of the pictures shot under the LED light at once. A batch file is also a good idea if you tend to forget after a few months exactly how you corrected the white balance in the first place.

Don’t avoid getting a white LED light for photography just because you’ve heard that their color is bad. Chances are that you can correct the color in post-processing by using a two-step procedure. Chances are less good that you can correct the white balance in-camera, so JPEG shooters beware.

My LED ring light in action
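If you'd rather script the fix than click through an editor, the end result of the two-step correction on a gray card is per-channel scaling until R = G = B over the card. Here's a minimal numpy sketch of that idea (file name and patch coordinates are placeholders); note this is my illustration of the arithmetic, not the Capture NX temperature-plus-blue workflow, and a single linear gain on gamma-encoded JPEG data is only an approximation:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("led_gray_card.jpg"), dtype=np.float64)  # placeholder

# Sample the gray-card area (coordinates are placeholders) and compute
# per-channel gains that force R = G = B there, anchored on green.
patch = img[100:200, 100:200]
means = patch.reshape(-1, 3).mean(axis=0)  # R, G, B means over the card
gains = means[1] / means                   # green-anchored channel gains

corrected = np.clip(img * gains, 0, 255).astype(np.uint8)
Image.fromarray(corrected).save("led_gray_card_neutral.jpg")
```

#howto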

  • White Balance for Infrared Photography

I haven’t owned a camera since the Nikon D60 that would successfully perform a “preset white balance” when I use my Hoya R72 infrared filter. I have no intention of converting any of my cameras into “infrared only” by getting the sensor filter changed. All I’m after is a decent-looking picture on my camera LCD after I take a shot using the R72. All of the newer cameras screen out infrared light so effectively that regular white-balance measurements don’t work. My old D50 even let me take IR shots hand-held while using the Hoya R72!

If you stick with “auto white balance”, then your IR shots look totally red on your camera LCD. Yuck. They’re not much better if you try setting the lowest “Kelvin” white balance (2500K on my D500). Ditto for using “incandescent” white balance. So, what to do? I came up with a procedure that “mostly” works to solve this IR-shooting problem. At least I get to see pretty decent images on my camera LCD. My secret procedure involves displaying a special color on my computer monitor, and then setting my camera white-balance preset using this displayed color. I previously published an article here explaining how to create special colors and set your camera white balance against them.

The Hoya R72 leaves your photos with very little blue and green, but a ton of red. It occurred to me that I should be able to create a color that counteracts this, so that I could set a camera white-balance preset without using the Hoya R72 at all. The following procedure is what I came up with.

Auto White Balance with the Hoya R72. Yuck.

The picture shown above is what your camera viewfinder looks like when you select “Auto White Balance”. It’s really hard to see what’s going on.

Incandescent White Balance. A teeny bit better.

2500K White Balance. Slightly better.

I started working on colors that would emphasize red with much less blue and green, so that I could emulate on my computer monitor the color spectrum passed by the Hoya R72 IR filter.

Red 240, Green 64, Blue 52

If I displayed the above color on my computer monitor and successfully performed a “preset white balance” against it, then I could use that preset in my camera while shooting with the Hoya R72. It turns out that going beyond the color shown above made my camera stop accepting it as a “good” white balance preset.

Preset white balance from R240, G64, B52 screen color with Hoya R72

As you can see above, I’m definitely on the right track. Now, my camera screen shows pictures that make a lot more sense. I still need to post-process these pictures to get better white balance, but at least I’m not seeing red while shooting. I realize that color really has no meaning in infrared, but I think you would agree that pure red tones definitely aren’t what you want.

Post-processed shot. Used a gray-point to get color closer to what I wanted.

As you can see above, I was able to get a workable color palette from the “R240G64B52” preset color.

The “correct” RBGG white balance gains for the Hoya R72, from a D60 file

My goal in getting an optimal white balance preset was to achieve the gain values shown above (the 4 numbers are Red, Blue, Green, Green). This Nikon D60 file shows the results of a “good” preset, based upon using lawn grass in full sun as a target. When I was trying different computer screen colors to preset the WB against, I could never drive the red high enough to reach the “0.507” gain before the camera WB preset operation would fail (showing “no good” feedback). I had to stop at a Red value of 240, instead of driving it to the maximum of 255, and settle for its gain of 0.5649 versus the 0.507 goal.

Exif data showing the R, B, G, G gain values I obtained

Nevertheless, I’m now getting greatly improved rear-LCD feedback on my D500 when I shoot infrared with the Hoya R72 filter and my custom white-balance preset made from the computer monitor. By the way, another article I wrote mentions how the D7100 and D610 cameras are terrible for infrared photography, with both having unacceptable internal reflections unless you use the little DK-5 viewfinder eyepiece blocker. The D500, on the other hand, is a very good camera for infrared photography, even without using its built-in eyepiece shutter. Many cameras still won't produce a good white-balance preset using this procedure, but give it a try.
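Two quick checks you can script: reading the gain quadruple from a file, and estimating how high the screen Red would have to go. The wildcard is there because the exact level-tag name varies by Nikon model, and the second calculation is a simple linear estimate (ignoring monitor gamma), consistent with the preset failing before the goal was reached:

```python
import subprocess

# Read the white-balance level quadruple from a NEF with ExifTool.
# On the D60 file discussed above, the four numbers are R, B, G, G; the
# exact tag name varies by Nikon model, hence the wildcard.
out = subprocess.run(
    ["exiftool", "-a", "-G1", "-*Levels*", "DSC_0001.NEF"],  # placeholder name
    capture_output=True, text=True, check=True,
).stdout
print(out)

# Linear estimate (ignoring monitor gamma) of the screen Red needed to
# move the measured 0.5649 gain down to the 0.507 goal: the red channel
# needs 0.5649/0.507 times more light.
print(240 * 0.5649 / 0.507)  # ~267.4, past the 255 maximum
```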

  • Nikon D500 Focus Point Map Decoded

I was looking at the EXIF information from a D500 file (I use the Exiftool program to see this information) and saw a mention of “Primary AF point” followed by a “C9”. What’s that? I didn’t have a particular need to understand the entry at the time, so I just moved on. I was recently trying to understand how a D500 uses focus points in controlling lens focus, and found out that none of the image editors use very much (and sometimes any) of the focus point information. EXIF data, however, keeps track of what’s going on with the focus points. You can find Nikon-provided information that discusses which focus points will work with which lens; not all focus points work with every lens. This greatly complicates figuring out the logic behind how the focus algorithms use the focus points. A picture is in order:

Nikon D500 focus sensors

I found out that Nikon saves focus sensor information in the picture EXIF data much like a chess board. The middle focus sensor, for instance, is called “E9”. You can only select focus sensors that belong to the rows labelled “A, C, E, G, or I”, and only those in the columns labelled “1, 3, 5, 7, 9, 11, 13, 15, or 17”. Note that “cross” sensors are in red, while the less-capable “line” sensors are in black (they all look black in your viewfinder, of course). You can only select the sensors with a little box around them, either red or black (as shown above). You can’t even see the non-selectable sensors in your viewfinder. The auto-focus algorithms that execute while trying to keep your subject in focus, however, can make use of ALL of the focus sensors (at least with large-aperture lenses). Something like “Dynamic-Area 25” AF can make use of up to 25 total focus sensors, or two “concentric” boxes of sensors around your selected sensor, which is a mix of both selectable and non-selectable sensors. The D5 includes “Dynamic-Area 9”, but it’s missing on the D500.

The D500, in dynamic-area AF mode, visually appears to ignore any focus sensors except the one that the user selects (called the “Primary AF Point”). Even viewing the photos in an editor with “Show Focus Point” selected will only ever show you the originally-selected focus point, and not what the camera actually used to focus.

The selected focus point shown in Capture NX-D

EXIF data for the picture above.

As shown above, the EXIF data indicates that I selected the center E9 focus sensor, and the camera used only that sensor at the time of the exposure.

Focus algorithm used more sensors

In the above example, several focus points were active at the time of exposure. At the time of this writing, the only way to reverse-engineer what the focus algorithms must be doing requires use of this EXIF information. Best of luck figuring out the focus algorithms. I suppose hammering out 10 fps while wiggling the camera around various targets could get you enough EXIF data to figure out “how they do it”, but you’d better have a lot of time on your hands.
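To experiment with the map yourself, here's a tiny sketch encoding the chessboard naming exactly as decoded above (rows A through I top to bottom, columns 1 through 17); the selectability rule mirrors this article's description, not any official Nikon documentation:

```python
# Map an EXIF chessboard name (e.g. "E9") to row/column indices and
# report whether it is user-selectable, per the map decoded above.
ROWS = "ABCDEFGHI"        # 9 rows, A at top
COLUMNS = range(1, 18)    # 17 columns

def decode(point):
    """Split a point name like 'E9' into (row_index, column)."""
    row, col = point[0], int(point[1:])
    return ROWS.index(row), col

def selectable(point):
    """Selectable sensors sit on rows A/C/E/G/I and odd columns 1-17."""
    row_idx, col = decode(point)
    return row_idx % 2 == 0 and col % 2 == 1

print(decode("E9"), selectable("E9"))  # (4, 9) True  - the center sensor
print(selectable("B4"))                # False - usable by the AF system only
```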

  • MTF Contrast Plots: How Useful are They?

The only camera manufacturers that presently show the public actual measured lens performance data are Leica and Zeiss, and possibly Sigma. The other manufacturers only show “theoretical” performance, typically in the form of an MTF contrast plot. These idealized plots are typically calculated at spatial frequencies of 10 and 30 lines per millimetre, and separated into meridional and sagittal directions. Another aspect of these theoretical plots: they don't consider the camera sensor being used. Since I typically mount the lens on a camera to use it, I'm kind of interested in the results of the whole combination. This raises the question: do the theoretical MTF plots have any basis in reality? What if you could estimate your car’s pollution output for the DMV instead of being made to get it measured? I thought so. I figured I’d try to answer these questions. As always, trust but verify.

I use the MTFMapper program to measure lenses (mounted on cameras). Recent versions of this program let you display (measured) MTF contrast plots, so that they look just like the manufacturer plots. I’m presently using MTFMapper version 0.6.7, which is for 64-bit Windows. The download site for this free software is here. Before you can make lens measurements, you will need to print, mount, and photograph a resolution chart. My chart is about 40” X 60” in size, so that I can take measurements at realistic focus distances. The measuring program can use a few different resolution chart designs, and I am using the newest design.

Chart used to make resolution/contrast measurements

Another thing: most manufacturers only show their theoretical MTF contrast plots at the widest lens aperture. Most photographers want to know what aperture gives them the best resolution and contrast, plus how much quality difference there is between aperture settings. To answer these MTF questions, I picked on a pair of pretty good lenses: the Nikkor 85mm f/1.4 AF-S and the 105mm AF-S Micro f/2.8 G. These lenses should have decent quality control, plus they're primes, so the theoretical MTF values should have the best chance of matching real measurements.

I used a Nikon D610 and un-sharpened 14-bit RAW files for the tests. All pictures were taken using a heavy tripod, Live View mode, “mirror-up”, a remote release, and contrast-detect focus. I picked the best results from each aperture, out of a minimum of 10 shots at each aperture. I used “cloudy bright” daylight illumination. I used this full-frame camera so that I could get the same range of information as the Nikon web site data.

Nikkor 85mm f/1.4 AF-S Lens

85mm MTF Contrast Plot from Nikon Web Site (theoretical plot)

I grabbed a screen shot from the Nikon site, showing how the 85mm lens should perform at f/1.4. This plot assumes that their manufacturing plant is capable of flawless lens assembly and that their parts exhibit no process variation. Here, the “S” stands for “sagittal”, or the “spoke direction” from the lens center. The “M” stands for “meridional”, or the tangential direction. “S10” is the sagittal measurement at 10 lines/mm (representing contrast), while “S30” is the sagittal measurement at 30 lines/mm (representing resolution).

85mm MTF Measured Contrast Plot, f/1.4 (peak MTF50 41.8 lp/mm)

The theoretical and actual MTF contrast plots look quite a bit different. Surprisingly, some aspects of the measured plot actually look better than the theoretical one. The pink-ish and blue-ish bands around the plot lines show the actual spread of measurements taken; the dark lines are the average of the measurements. Note that the measurements stop at about 18mm from the lens center (the sensor edge). The Nikon-supplied plot has measurements out to the corner, or about 22mm. So, how about the other apertures for this lens? What follows are the measurements at other apertures, to give you an idea of how much the lens improves as you stop it down (until diffraction starts to degrade the resolution).

85mm MTF Measured Contrast Plot, f/2 (peak MTF50 45.2 lp/mm)
85mm MTF Measured Contrast Plot, f/2.8 (peak MTF50 55.2 lp/mm)
85mm MTF Measured Contrast Plot, f/4 (peak MTF50 58.6 lp/mm)
85mm MTF Measured Contrast Plot, f/5.6 (peak MTF50 56.9 lp/mm)
85mm MTF Measured Contrast Plot, f/8 (peak MTF50 53.5 lp/mm)
85mm MTF Measured Contrast Plot, f/11 (peak MTF50 45.2 lp/mm)
85mm MTF Measured Contrast Plot, f/16 (peak MTF50 36.8 lp/mm)

Notice how the astigmatism vanishes at about f/8 (no more separation between sagittal and meridional measurements). By f/4, even the edge performance is excellent. When you only see the wide-open MTF plot, you don’t get any of this insight.

105mm AF-S Micro f/2.8 G Lens

105mm f/2.8 MTF Contrast Plot from Nikon Web Site (theoretical plot)

Above, I show the Nikon web site plot of the 105mm f/2.8 Micro Nikkor (at f/2.8). Now, it’s time to see how this compares to reality.

105mm MTF Measured Contrast Plot, f/2.8

The f/2.8 measured plot differs a bit more from the theoretical plot than the 85mm did. Nothing in the measured plot is as good as Nikon’s claims. I don’t have another 105mm lens to compare to this data, but I’ll bet it would be different from the above data, too.

105mm MTF Measured Contrast Plot, f/4
105mm MTF Measured Contrast Plot, f/5.6
105mm MTF Measured Contrast Plot, f/8
105mm MTF Measured Contrast Plot, f/11
105mm MTF Measured Contrast Plot, f/16

Although the 105mm measurements don’t stack up to the Nikon claims, keep in mind that measurements higher than 0.5 (50% contrast) at 30 lp/mm are really good. Again, remember that these plot measurements extend to the frame edge, versus Nikon’s frame corner, when comparing plots.

2-D plot of MTF50 performance, 105mm @ f/2.8

I added an MTF50 plot at f/2.8 to show how much more informative that style of plot is, compared to the plain MTF contrast plot. You get to see the performance all over the surface of the sensor.

Conclusion

I pretty much expected that real measured lenses wouldn’t look quite as good as Nikon’s fantasy plots would imply. The measurement plots bear this out. I’ll bet that Canon et al. would show a similar trend. Personally, I still think that the 2-D MTF50 chart gives much better information about lens performance than these MTF contrast plots. The MTFMapper program is capable of providing both kinds of information, so you get to choose. I have attempted to provide all the information that you will need to make similar measurements for yourself. No two lenses are going to be identical, so always keep that fact in mind when looking at lens measurements.
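When comparing numbers like these to other sources, it helps to convert MTFMapper's lp/mm into the sensor-relative cycles-per-pixel unit. A quick sketch, assuming a D610 pixel pitch of roughly 35.9 mm / 6016 pixels (my estimate, not a measured value):

```python
# Convert MTF50 in line pairs per millimetre to cycles per pixel.
# The pitch below is an assumption: ~35.9 mm sensor width / 6016 pixels.
PIXEL_PITCH_MM = 35.9 / 6016   # ~0.00597 mm per pixel

def lpmm_to_cycles_per_pixel(lpmm, pitch=PIXEL_PITCH_MM):
    return lpmm * pitch

# The best 85mm result above, f/4 at MTF50 = 58.6 lp/mm:
print(lpmm_to_cycles_per_pixel(58.6))  # ~0.35 cy/px, against a 0.5 Nyquist limit
```

#howto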

  • D500 Electronic Front-Curtain Shutter Analysis

I did a little analysis of the effectiveness of the “electronic front curtain”, or EFC, on the Nikon D500. The EFC is a feature presently available only on Nikon’s high-end cameras, and only available in “Mirror Up” (Mup) mode. When using EFC, the front curtain of the shutter doesn’t move during the exposure, and therefore doesn’t cause any vibrations.

EFC camera menu

Nikon D500 User Manual EFC explanation

Is this feature one of those marketing gimmicks, or is it truly useful? You probably won’t notice much effect until you get to really slow shutter speeds and/or really long focal lengths, where vibrations become a severe problem. There are two substantial sources of vibration, even when your camera is mounted on a heavy tripod. The first vibration source is the mirror, which slaps up out of the way of the shutter. Because of this, you need to either stay in Live View mode or wait about 3 seconds after raising the mirror before tripping the shutter. The second vibration source is the shutter itself, which divides into the sudden front-curtain motion, followed by the rear-curtain motion. When EFC is active in “Mup” mode, the camera opens the front shutter curtain, but does not yet electronically enable the sensor. When you trigger the completion of the photograph, the camera first electronically enables the sensor (beginning the exposure) and then closes the rear shutter curtain to finish the exposure.

To test this EFC feature, I set a Sigma 150-600mm lens to 600mm and stopped the lens down to f/22 at ISO 100, so that the shutter speed was 1/30 second. I disabled lens vibration reduction during the testing. This is normally a very problematic shutter speed with this long a lens, but my goal was to force a vibration issue. Note that vibrations are even worse between about 1/2 and 1/15 second. I used a very heavy tripod while testing, but I know that vibrations are still a big problem when using this long a focal length (900mm effective). I also used a remote shutter release. I shot a resolution target at 16.8 meters, and then used the MTFMapper program to analyze the results. Since the results can vary from shot to shot, I took about 15 photos with, and then without, EFC active. I got measurements in both the meridional and sagittal directions, since I figured there might be a directional bias to the vibrations.

For the non-EFC mode, I ended up with an average MTF50 of 16.6 lp/mm in the sagittal and 21.3 lp/mm in the meridional direction. With EFC active, I got 23.6 lp/mm in both the sagittal and meridional directions. For the sensor target area I used for measuring the MTF, the sagittal direction was horizontal and the meridional was vertical. Vibration blur was easily visible in the photographs without EFC active; EFC really makes a difference! The percent increase in resolution when activating EFC was (23.6-16.6)/16.6 * 100, or 42%, in the sagittal (horizontal) direction, and (23.6-21.3)/21.3 * 100, or 11%, in the meridional (vertical) direction. As I stated earlier, the vibrations would have been even worse at slower shutter speeds than this.

Using EFC makes a substantial difference in resolution (at slow shutter speeds or high magnifications). There’s really no reason not to enable EFC if your camera has it; note that the D500 does have a shutter speed limit of 1/2000 while using EFC mode. Again, the EFC mode is only available in conjunction with “Mirror-Up” mode, although you can still decide whether you want to use phase-detect focus or switch to Live View and use contrast-detect focus. Either way you use EFC, you still want the mirror up for at least 3 seconds prior to taking the photo. On my camera, EFC mode is disabled by default, which I think is crazy. I’m not sure if other camera models have the same default, but I bet they do. If your camera supports it, then please, please enable EFC right now.

Sigma 150-600 at 600mm on D500. Used EFC to get rid of any vibrations.
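For reference, the resolution gains quoted above are just the usual percent-change formula; a couple of lines make the arithmetic explicit:

```python
def pct_gain(with_efc, without_efc):
    """Percent resolution increase from enabling EFC (MTF50 in lp/mm)."""
    return (with_efc - without_efc) / without_efc * 100

print(round(pct_gain(23.6, 16.6)))  # 42 (sagittal / horizontal)
print(round(pct_gain(23.6, 21.3)))  # 11 (meridional / vertical)
```

#review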

  • Sharper Moon Shots with AutoStakkert

When you want to get to the next level of really sharp distant-object photos, like the moon, what do you do? Do you really need to get that $16,000-plus 800mm Nikkor? There’s an enemy that keeps you from your sharpness goal, no matter how much you spend on gear. It’s called the atmosphere. So how do you minimize atmospheric “shimmer”? Here’s where software (and science) can come to the rescue.

Shooting the moon can be frustrating, for many reasons. After you get your big lens and really stable tripod, you quickly find you’re not done quite yet. You flip up the camera mirror, use a remote release, and even invoke the Electronic Front Curtain shutter. Even at a motion-freezing high shutter speed, you still aren’t getting satisfactory resolution. Evidently, eliminating subject motion and vibration still isn’t enough. Your next step to sharpness is based on image stacking. You might think that you need a motor-driven “equatorial mount” to counteract the Earth’s rotation to successfully combine your multiple shots, but you don’t. The software can fix that. The software I’m going to discuss isn’t limited to the moon or the planets. It can also help with any distant terrestrial landscape shot, as long as your subject holds still.

The key to sharpness is based on statistics. Most of the time, details of your subject are in the same location, but with a shimmering atmosphere, sometimes they move a bit. If you take several shots of the same subject and look for details that are “usually” present in each of the photos, you can combine these shots into a single sharper picture. Your camera’s focusing system is another sharpness culprit. As soon as your focus system thinks the focus is “good enough”, it stops trying to focus further. As a result, you’ll find that some shots are sharper than others. The software recognizes this too, and is capable of automatically selecting only the “best” shots it locates in a series (a ‘stack’).

The program I’m going to describe is called “AutoStakkert”, version 3.0.14 for 64-bit Windows. I’m using it on Windows 10. It’s available for other operating systems as well. This free program can be found here. The program author is Emil Kraaikamp. There are other astro-stacking programs available, of course. Learning their usage nuances can be really time-consuming, so I can in no way claim that AutoStakkert is the best one. I just know that it is capable of doing what I want it to do. I converted my raw photos into 16-bit TIF files to use the program, but it accepts a variety of image formats. It doesn’t accept raw formats, though. There are many, many options available in this program, but I’ll describe a couple of recipes that work for me. Keep in mind that the intended users of this program are astronomers, not photographers.

I have had the best success when using at least 20 pictures in a stack. I’ve seen extreme examples where users have processed more than 10,000 shots in a stack (frames from a video) with this program! The more atmospheric shimmer, the more shots you’ll need to counteract it. With newer cameras starting to offer 4K video, this is something to keep in mind. Before I forget to mention it, this program can output a ‘sharpened’ photo, but I don’t like the result (totally over-sharpened, with haloes). I use the un-sharpened output and post-process it with my favorite photo editor instead.

Finished stack result, after applying an un-sharp mask.
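Before diving into the interface, here's the core statistical idea in code form. This is a minimal Python sketch (file names are placeholders), not AutoStakkert's actual algorithm; the real program scores quality locally around alignment points and warps each frame before combining, while this sketch only ranks whole frames by a Laplacian sharpness score and averages the best half:

```python
import glob

import cv2
import numpy as np

# Rank frames by a simple sharpness score and average the best 50%.
# This skips the alignment and local warping that AutoStakkert performs.
files = sorted(glob.glob("moon_*.tif"))  # placeholder 16-bit TIF frames
frames = [cv2.imread(f, cv2.IMREAD_UNCHANGED).astype(np.float64) for f in files]

def sharpness(img):
    gray = img.mean(axis=2) if img.ndim == 3 else img
    return cv2.Laplacian(gray, cv2.CV_64F).var()  # variance of the Laplacian

ranked = sorted(frames, key=sharpness, reverse=True)
best = ranked[: max(1, len(ranked) // 2)]         # keep the sharpest half

stacked = np.mean(best, axis=0)
cv2.imwrite("stacked.tif", stacked.astype(np.uint16))
```

Everything the real program adds beyond this (anchors, alignment points, drizzle) exists to make that final averaging step legitimate for frames that shimmered.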
Using the Program

Run the program “AutoStakkert.exe” as an Administrator (right-mouse click on the file to do this). I believe the program author is from the Netherlands, hence the unusual program name. This program doesn’t like raw format, so you’ll need to convert your photos into any of a variety of image formats (I use 16-bit TIF). For my moon shots, I don’t bother to re-center the moon in the frame to counteract the Earth’s rotation. The software takes care of that when you choose the “Planet (COG)” Image Stabilization option. If you’re shooting distant landscapes, where your subject isn’t moving, you need to use the “Surface” Image Stabilization option instead. If you don’t use a tripod for this, then you might as well stop reading the article at this point. For the other “Image Stabilization” options, I used the “Dynamic Background”, but I honestly don’t understand its impact on the results.

Leave the defaults in the “Quality Estimator” section. These are “Laplace” delta, “Noise Robust” 4, and “Local”. The “Noise Robust” value should be increased for noisier or dimmer subjects and decreased for higher-quality input photos. For really high-quality shots, a Noise Robust value of 2 is suggested. I leave the “Expand” option alone (it will change to “Crop” if you click it). This will leave the output large if you keep it as “Expand”. The “Local” setting uses each alignment point to further assess each frame’s quality, versus “Global”, which uses the entirety of each frame.

Click the “1) Open” button, and browse to the folder with your (TIF, JPG, etc.) multiple shots to process. Use the “control” or “shift” keys to select the desired photos to process as a stack. After clicking “1) Open” and selecting the 16-bit TIF photos, I press the “Play” button to see if the automatic rough alignment was successful. This rough alignment counteracts the rotation of the Earth between the shots, assuming you don’t bother to realign the moon in your viewfinder. The “Play” button starts a slide show running. Image quality grading numbers get displayed next to the “F#” (frame number) in the upper left of the photo-display dialog. You can click in the “Frames” progress bar to manually step through the image stack, too. This lets you easily compare how sharp each shot is, relative to the others. Click “Stop” to halt the slide show.

If you have selected “Planet (COG)”, the stack of photos should already be roughly aligned with each other. If you’re trying to stack a landscape and selected the “Surface” radio button instead of “Planet”, you might want to alter the “Image stabilization anchor” location and window size (green X with green rectangle). While in the right-hand dialog showing one of your photos, you should probably press the “9” key to get the largest “anchor point” area (a green rectangle). The smallest anchor rectangle uses a value of “1”; smaller numbers decrease the anchor rectangle size. Hold the control key and click on the desired anchor center, which should include a detail that exists in every shot of your (landscape surface) stack. If your rough alignment doesn’t succeed, then unfortunately further stacking operations will likely fail as well. You can delete any shots where the subject moved too far, and then try again.

Screen shot after photo stack analysis, before clicking “Place AP grid”.

Click “2) Analyse”. This will perform an initial quality assessment of the selected pictures, and then decide which are the best ones.
It generates a plot of the shot quality as well. The program will place your shots in order of decreasing sharpness. The gray line in the plot is in the same order as the input file stack, and the green line is the sorted order of the frames. Click on the “Frames” button to switch between sorted and original input frame order, and use the slider to move from frame to frame (or else type in the desired shot number). The “Frames” button turns green when this feature is available. If you hover the mouse pointer in the slider area, the tool-tip text will indicate the active sorting order (“The frames are now sorted by quality”).

“Frame” slider/input box to view stack images and their quality rating

Note the “F#” below the slider, such as “F#2 [9/24]”, which indicates that this is the second shot (file) in the stack and the ninth sharpest of the 24 frames. This example frame is in the “top 34.8%” of the entire stack, and has a quality rating of “Q 59.9%”. You generally want a photo quality rating of 50% or better in your final stack. There is a zoom slider, plus horizontal/vertical sliders, to magnify and shift the view of the selected photo in the stack. This is an under-appreciated program feature. You might have hundreds of photos, and it would be a terrible chore to manually figure out which ones are the sharpest. This feature automatically finds them and sorts them. You’ll get an error (!#@Anchor) if your shots aren’t aligned well enough for analysis. You’d probably get this error if you did a whole-moon shot but selected “Surface” instead of “Planet (COG)”, and the moon was in a different location in each shot. I presume “!#@Anchor” is some form of Dutch swearing.

If the analysis looks good (check the graph for a nice continuous plot showing a gradual decrease in image quality across the sorted shots), you’re ready to select the final alignment points. For quality input images, select a “small” alignment point size (AP Size) of 24. For lesser-quality images, select a larger number. I have experienced alignment mistakes when using larger alignment point sizes. I’d suggest you use automatic alignment point creation, which will put many points on your image (see the little blue rectangles with red dots in the image below). Lots of points are needed for quality alignment of the shots in the stack. There’s a manual placement option (the “Manual Draw” checkbox), although I haven’t had good success with it. After analysis, there will be a red rectangle over your displayed photo. If you want to try placing manual alignment points, don’t put any points outside of this rectangle, since some of your shot details fall outside of it.

Click “Place AP grid”. This is the automatic way to get the alignment point grid added to your displayed photo. This is fast, easy, and lazy, which I’m all for. It will put a grid of points over the entirety of your subject, but avoids the black background (if you’re shooting moon shots). There’s an “Alignment Points” “Clear” button if you decide you’re unhappy with your detail selections (and you want to start over). You can try changing the alignment point size, if you wish to experiment with that option.

In the left-hand dialog above, I have a value of “30” (green box) for the “Frame percentage to stack” in the section labeled “Stack Options”. This will cause the program to use only the best 30% of the shots in the final processed result, throwing out the worst (most blurred) shots.
Use the “Quality Graph” and “Play” results to help you decide on the percentage of sharp shots you want to retain for the final stacking process. The “Normalize Stack” option will enforce a consistent brightness level for each shot, and isn’t typically needed unless you have a non-black sky with your moon. The “Drizzle” option was originally developed for the Hubble telescope. It is intended to take under-sampled data and improve the resolution of the final image. This option doesn’t seem to help my shots any, and it will really slow down the stack crunching if you select it.

I selected “TIF” for the output format of the final processed shot (under “Stack Options”), which in this case will be placed into a folder next to your input photos called “AS_P50”. This folder name indicates it was created by AutoStakkert, with the results of selecting “50 Percent” of the input shots. I left “Sharpened” un-selected and “Save in Folders” selected. I’m not a fan of the sharpened results from this program, but they can still be a useful evaluation tool, even if they're not good “art”. You’ll get an extra output file with “_conv” added to its name if you select “Sharpened”.

Autostakkert after “Analyse” and “Place AP grid” are done

Notice in the screen shot shown above that the program automatically added 1002 alignment points onto the photo after clicking “Place AP grid”, and added the text “1002 APs”. When I have used fewer than 300 points, I have noticed occasional alignment errors in the final results.

Now, click on “3) Stack”. And wait. Then wait some more. You’ll get progress messages with little green check marks showing how much time each step took as it completes. Expect several minutes to elapse before the stacking is complete. The finished output files will be in TIF format if you matched my TIF output selection. The result pictures include an unsharpened image and also a sharpened image (with “_conv” at the end of the file name). As I mentioned, I don’t like how this program does sharpening, so I post-process the unsharpened stacking result in another photo editor. The finished result (TIF) file has “_lapl4” and “_ap1002” as part of the file name, because in this example I used the “Laplace” delta, Noise Robust 4, and created 1002 alignment points.

Stacking has completed.

Note in the shot above that you can see green checkmarks with timing measurements. This section gets filled in as the program progresses. Finished results (TIF files here) go into the “AS_P50” folder, since 50 percent was selected for the “Frame percentage”. If you had chosen 70 percent, you’d have an “AS_P70” folder instead. You’ll find that the program is smart enough not only to shift your photos for accurate alignment, but also to apply rotation correction! Impressive.

Single (sharpened) shot example detail. NOT a stacked photo.

The picture above is the best single-shot photo I had to work with, post-processed. It is actually missing some subtle details and also has some ‘false’ details, all due to (minor) atmospheric shimmer. It’s pretty good as-is, but can still stand some improvement. The un-cratered “mare” are particularly noisy and contain some misleading ‘false’ detail. I shot this picture with the moon higher in the sky to minimize atmospheric effects. Cold air and higher elevation would have helped, too.

Autostakkert final processed shot detail, no sharpening.

The shot above shows the result of using the best 50% of my stack of 24 original shots.
Autostakkert final processed shot detail, no sharpening.

The shot above shows the result of using the best 50% of my stack of 24 original shots. It still needs post-processing (a contrast adjustment and an unsharp mask). If I had shot many more photos for the stack, the quality would improve even more.

Autostakkert final processed shot detail, sharpened

Shot detail using Registax wavelet processing

If you compare the details between the “single shot” and the finished AutoStakkert stacked (and sharpened) result, you can see several extra details that show up in the stacked picture. Note that the smooth surfaces are starting to show subtle shading, which is missing in any of the single shots. The Registax program with layered wavelet sharpening can enhance details slightly further, although it starts to look artificial to me. I added that shot just for fun; I don’t think the Registax results look enough like “art” to be useful to me.

Autostakkert really does work. I’m certainly not an expert at using this program, but it’s clear to me that stacking photos can absolutely increase the level of detail that moon (and general landscape) shots contain. It’s almost like getting a better lens than you really have. You could, if you’re inclined to do so, switch to Live View and even shoot a movie (4K or 8K, please) of your subject (converted to AVI), and Autostakkert can use that as input, too.

Landscapes

If you photograph a distant subject, especially on a warm day, heat shimmer can be severe. Using the “Surface” option (instead of “Planet”), you can dramatically improve subject detail if you use a tripod and take at least a few dozen shots for stacking.

Distant landscape “Surface”, with many alignment points

The screen shot above shows the selected options for processing a stack of distant (about ½ mile!) landscape shots. Unlike moon shots, you must keep your subject framed exactly the same shot-to-shot for “Surface” processing. If you look carefully, you’ll notice that the auto-alignment grid shows about 27,000 points (!). Just like moon shots, you can “Play” the stack of frames to evaluate sharpness and alignment. Try to stack only the frames that have a quality rating of 50% or better, and discard any frames that don’t align well relative to their neighboring frames.

My best single shot in the stack, sharpened, 100% magnification, 600mm

The shot above shows more dramatic heat shimmer, due to the extreme distance. This is actually the best of the many frames I shot. Fine branch details are obliterated.

Stacked result detail, sharpened, 100% magnification

Comparing the above pair of detail shots, you’ll notice that the stacked result brings out really fine details that no single shot can deliver. This example used 10 shots of the stack; more would have been better. If there had been more atmospheric shimmer, the differences between single shots and the stacked result would have been even more substantial. You’ll need to crop the edges of your finished stack result, much like when you do macro focus-stacking; keep this in mind when framing your landscape shots.

Two miles away, 600mm

Sharpest single shot detail. LOTS of heat shimmer at 2 miles

Stack of sharpest 40% from 106 total shots. HUGE difference!

Conclusion

If you’ve got the time and motivation to get the very best out of your gear, then give this program a try. You might just find Autostakkert becoming a welcome part of your tool kit. Don’t hold your breath for Photoshop or Lightroom to include features like these. If you’d like to read more explanations of this software, here’s a handy link.
The moon photos in this article were made using a Sigma 150-600mm Contemporary at 600mm, f/8.0, 1/500s, ISO 3200 (VR off), on a Nikon D500 with Electronic Front Curtain shutter. I converted the raw shots into 16-bit TIF, with noise reduction, for Autostakkert to use. I’ll bet you didn’t think this lens was as good as it is, did you? Once again, photos and science make a perfect blend for your art. #howto

  • Stack Star Shots with CombineZP

    How can you make one of those cool star field shots, without making the stars turn into streaks? Is there a way to take these pictures without having to buy special hardware? Yes.

Star shot made from multiple photos, using CombineZP.

There are a few things you will need to make good star field pictures. Not surprisingly, the better (and larger) your camera sensor is, the better chance you’ll have to produce quality results. A stable tripod is a must. A lens with a wide aperture will really help. Get a remote release (or a cell phone app) to trigger your shutter. Finally, you’ll need software to align and combine multiple exposures. What you won’t need is a motorized mount that rotates your camera to track the stars; that’s what the software is for.

There are many programs that “align” multiple exposures via a simple shift, but the list gets pretty short when you add the constraint of fixing rotation. The Earth rotates, causing the stars to appear to move in an arc. I have been using a (free) program called CombineZP that can fix rotation, scale, and shift changes when combining pictures.

The CombineZP program was written by Alan Hadley; he’s a really smart guy, but a little bit challenged by grammar and spelling (to say the least). In case you’re interested, the program name refers to stacking/combining photos in the ‘Z’ direction, and the “P” is short for “pyramid”. He uses a “pyramid” algorithm for some of his photo stacking operations, which is really great for solving many issues involving overlapping hairs on bug close-ups when doing focus-stacking. Intense stuff. His program’s Help system explains this and many other things. Alan’s program can do much more than bug shot stacks, as I’ll show you. Here’s a link to his free program.

The CombineZP program works with Windows 10 and many earlier versions of Windows; I use it in Windows 10 x64, although it’s a 32-bit program.

I almost forgot to mention that you also need a really dark sky. City lights and the moon will generally ruin your results. The higher the altitude and the lower the humidity, the better. The kind of photography I’m talking about here doesn’t work for night landscapes (with a horizon), because you can’t mix a fixed horizon with moving stars. This article is about pure star shots.

When I photograph stars, I typically use my Nikon D610, which has a really, really good full-frame (FX) sensor. My go-to lens is my Tokina 11-16mm f/2.8 (DX), even though it’s not supposed to work on a full-frame camera. It works just fine at 16mm, although I typically crop the edges a bit to remove some vignetting and frame-edge astigmatism/coma. If I owned something as snazzy as the Nikkor 14-24 f/2.8, then I’d definitely use that instead.

To get my star photographs, I typically set my camera on manual exposure, ISO 3200 (or less), f/2.8, and a shutter speed of 10 seconds for 16mm shots. Shutter speeds longer than 10 seconds at 16mm will result in star streaks. This kind of photography requires manual focus on infinity (it’s smart to pre-focus while it’s still daylight). These shots will be under-exposed, but CombineZP (and some other post-processing software) will brighten things up in the final picture. If you choose a longer focal length lens, then you’ll need to use shorter shutter speeds to avoid getting streaks instead of points of star light. Take a test shot and zoom in on it to view how much streaking you see. I’d recommend you take a minimum of 4 shots to combine. The more shots you have, the better results you can get.
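You can sanity-check the 10-second guideline with a little arithmetic: stars drift at the sidereal rate of roughly 15 arc-seconds per second of time (worst case, near the celestial equator), and the streak length on the sensor follows from the focal length. A quick sketch, using the D610’s roughly 5.95-micron pixel pitch; the figures are approximate:

import math

# The sky turns 360 degrees per sidereal day (~86164 s), i.e. about
# 15.04 arcsec per second; worst case is near the celestial equator.
SIDEREAL_ARCSEC_PER_SEC = 360 * 3600 / 86164.0

def streak_pixels(focal_mm, exposure_s, pixel_pitch_um):
    """Approximate star streak length on the sensor, in pixels."""
    drift_rad = math.radians(SIDEREAL_ARCSEC_PER_SEC * exposure_s / 3600.0)
    return focal_mm * drift_rad * 1000.0 / pixel_pitch_um

print(round(streak_pixels(16, 10, 5.95), 1))   # ~2 px: acceptably tight
print(round(streak_pixels(16, 30, 5.95), 1))   # ~6 px: visible streaks

A couple of pixels of drift is hard to see; triple the exposure and the streaks become obvious, which matches the 10-second advice above.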
Don’t wait too long between shots.

Manual method using CombineZP for stacking shots:

1. Convert your raw star shots into 8-bit TIFF, LZW compression, with an image editor of your choice. CombineZP won’t accept raw format or 16-bit files.
2. Start CombineZP.exe.
3. Click the “Enable Menu” icon to see the menu system.
4. Click File | New.
5. Select the TIFF photos (in as-shot order), then wait until each shot is loaded into the stack.
6. Select Stack | Size and Alignment | Auto (Shift + Rotate + Scale), OK. (This will align and replace each shot in the stack with the aligned shots.) Your screen will probably look black after the alignment is done, but that’s normal.
7. Select Stack | Enhanced Average to Out.
   For “Lowlight Gain (0=none)”, enter a value between 0 and 50. Press OK.
   For “Highlight Attenuation (0-1000, 0=none)”, enter 0. Press OK.
   For “Brighten (1000=stay same)”, enter 2000 (for 1 stop of brightening, 3000 for 2 stops, etc.). Press OK.
   The “Enhanced Average” step lets you tune the exposure adjustment of the photos, and then combines them (reducing noise by averaging the shots).
8. When processing is done, mouse-drag a rectangle around what you want saved. In this case, it’s as if you’re using a crop tool.
9. Click File | Save Rectangle As | myStarShot.png. (You can choose an output format from jpg, tif, bmp (24 or 32 bit), gif, or png.)

Now, your “stacked” shot is ready for final adjustment in your favorite photo editor. You will probably want to do additional noise reduction, Levels and Curves adjustments, a white balance adjustment, and apply an unsharp mask.
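If you’re curious what those menu steps are doing, here’s a rough code equivalent in Python, assuming a folder of the 8-bit TIFFs from step 1: a shift-plus-rotation alignment of each frame to the first (OpenCV’s ECC with MOTION_EUCLIDEAN; CombineZP can correct scale as well), then a plain average with a one-stop brighten, which is how I read the “2000” Brighten value. The folder and file names are illustrative:

import glob
import cv2          # pip install opencv-python
import numpy as np

paths = sorted(glob.glob("stars/*.tif"))    # hypothetical input folder
ref = cv2.imread(paths[0], cv2.IMREAD_GRAYSCALE).astype(np.float32)
h, w = ref.shape
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)

aligned = [cv2.imread(paths[0]).astype(np.float32)]
for p in paths[1:]:
    gray = cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)   # start from the identity
    # Estimate the shift + rotation mapping this frame onto the reference.
    _, warp = cv2.findTransformECC(ref, gray, warp, cv2.MOTION_EUCLIDEAN,
                                   criteria, None, 5)
    color = cv2.imread(p).astype(np.float32)
    aligned.append(cv2.warpAffine(color, warp, (w, h),
                                  flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP))

# Average the aligned frames (noise reduction), then brighten one stop.
result = np.clip(np.mean(aligned, axis=0) * 2.0, 0, 255)
cv2.imwrite("star_stack.png", result.astype(np.uint8))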
Create a Macro to Automate Star Stacks

If you’re a little more ambitious, you can create a macro to do your star stacks, once you settle in on a recipe you like. Alan explains how to make macros for his program, but here’s a Cliff’s Notes version if you want to try it out.

The CombineZP program has several collections of macros, saved in files that have the .CZM extension. Inside these collections, you can have up to 10 macros. Macro names that look like “_Macro4”, “_Macro5”, etc. are place-holder (inactive) macros without commands in them (unless you put some there). Since the default macro set has 10 active entries, you’ll need to either make your own macro set or alter an existing one.

1. Find an appropriate macro set (a .czm file) that has an available macro via Macro | Load Macro Set (I’ll choose “Enhancer.czm”).
2. Replace a place-holder name in the set with your new macro: Macro | Edit | Macros. Click on “_Macro 3” to alter it (if you used Enhancer.czm). Note that your new macro name cannot begin with an underscore character, or it won’t be runnable via a user click.

The Macro Editor, before any changes.

3. Rename an unused macro (one that starts with “_Macro”) to a name without an underscore. Here, we’ll call the new macro “Star Stack”.
4. Add steps, along with any parameters each needs, by selecting a “Command” in the drop-down list. For the first command, we want to align the already-loaded stack of photos:

Align the stack

5. Click “Update/Paste” to add the Align command to the macro. This command will replace each original star shot in the stack of loaded images with aligned ones (without touching your original .tif files). You now have a new “stack” of images to perform further operations upon.
6. Next, we want to take the “average” of the shots in the stack, to remove noise and atmospheric interference effects. We also want to enhance the light in each shot while combining it with the others.

“Enhanced Average to Out” command with (3) parameters

7. Click the “Update/Paste” button to save the averaging command. The “Enhanced Average to Out” command expects to operate on a stack of images (with any number of images in the stack). It will then place the result into the “Out” location, which is visible on the screen.
8. Click the “Save Macro” button, once all of the steps are added.
9. Click “Ok/Update” to exit the Macro Editor.

The finished Star Stack macro

The new macro stack

10. Click on the “X” to close the “Edit Macro” dialog.
11. For use in the future, save this macro set into a new file: click Macro | Edit | Save Macro Set As | StarStacker.czm.

Try out your new macro:

1. File | Empty Stack (to clear out everything).
2. File | *New (select the original .TIF files of the star shots).
3. Macro | Star Stack (it should now run and do both the alignment and averaging).
4. Do the usual “save rectangle as” to save your results.

After you’re done running the new macro, you may want to restore the system with the default macro set (for focus-stacking): click Macro | Restore Standard Macros. The program will now look like it did when you first started running it, and you can do regular focus-stacking operations. To get back to your new star macro, do this: Macro | Load Macro Set | StarStacker.czm.

This particular example isn’t very sophisticated, but it shows you the way into the world of CombineZP automation. There are a great many more macro sets provided with the program to explore. You can use the Help system to research the commands in the sample macros to learn more. Now, get out there and shoot the stars. #howto

  • Nikkor 300mm f/4.5 pre-AI Review: A Blast From the Past

    Back in the olden days, before computers were generally available, Nikon was making the nicest lenses you could get. How do these antiques stack up to modern lenses? I thought I’d take a look.

The Nikkor 300mm f/4.5 was my very first “good” telephoto. This thing even pre-dates “auto indexing”, although I later got a kit and converted it to AI (AI, or auto-indexing, was introduced in 1977). It does have Nikon’s NIC (Nikon Integrated Coating) multi-coating. Auto-focus hadn’t been invented yet (Nikon entered that game in 1986). Internal-focusing lenses were about a year away. Nikon’s “ED” (extra-low dispersion) glass hadn’t quite been introduced yet (it arrived in the next-generation 300mm lens). We’re talking 1975.

To even the playing field a bit, I have picked my Sigma Contemporary 150-600mm lens for a comparison, which I’ll zoom to 300mm. This Sigma is definitely not the best lens out there, but I think it’s representative of what is widely available today (and it’s actually cheaper in today’s dollars than the Nikkor was in 1975). Back in the day, no self-respecting photographer would stoop to using a zoom lens; they were complete crap then.

This 300mm Nikkor lens was produced from 1975-1977. The aperture is 6-bladed, which is not very nice for “sunstars” or lights at night. It has a rotating, locking, non-removable lens collar that is excellent for balancing on a tripod. It has a wonderful permanent telescoping lens shade, which I sorely miss on today’s lenses. This lens looks, feels, and acts like it’s brand-new; I expect it to last well beyond my own lifetime. I can’t sufficiently describe how excellent this lens is for manual focusing. It has precisely the right damping, rotation range, and smoothness. The ‘feel’ of the focusing hasn’t changed at all over the life of the lens. Nikon built this metal lens to the highest possible mechanical standards.

Don’t get me wrong, though. Manual focus on a long lens is generally a real pain. Ever since Nikon abandoned the “split-screen” focusing screens, precise and fast manual focus is a thing of the past. You can still get accurate manual focus on a long lens, but it pretty much requires the use of a tripod or really stopping down the aperture. It’s possible to buy focus screen replacements, but I heard Katzeye is out of business, and other makers’ screens cause really dark viewfinders.

I configure my cameras with the “Non-CPU lens” menu setting and shoot in aperture-priority (or manual) mode, so auto-exposure isn’t any different from modern lenses (except you turn the aperture ring instead of a wheel). If you haven’t used an AI lens before, note that you still get to focus and shoot with a wide-open aperture. Your camera does need to have an aperture-coupling lever, however (I heard they abandoned this on the D7500).

Even though such things are totally correctable in post-processing anyway, I thought I’d mention that vignetting, distortion, and chromatic aberration are minimal on this lens. Oh, I forgot to mention that it has a 72mm filter thread size. Also, the lens only focuses down to 13 feet.

Nikkor 300mm f/4.5 AI-converted on Nikon D610 with lens shade extended

Resolution Testing

I haven’t ever seen any resolution analysis of this lens, so that’s what I’m going to concentrate on in this article. I used my Nikon D610 (24 MP, 5.95 micron pixels). I’m only showing the Sigma results at f/5.6 (where the Sigma resolution is at its worst).
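The results below are quoted both as MTF50 in line pairs per mm (lp/mm) and as lines per picture height (LPH). The conversion is simple: two lines per line pair, times the sensor height (24 mm for the D610’s FX frame). A quick sketch:

def lp_mm_to_lph(lp_mm, sensor_height_mm=24.0):
    """MTF50 in line pairs/mm -> lines per picture height (2 lines per pair)."""
    return lp_mm * 2 * sensor_height_mm

print(lp_mm_to_lph(48.5))   # Sigma at 300mm f/5.6: 2328, ~the 2329 LPH quoted below
print(lp_mm_to_lph(40.2))   # Nikkor at f/8.0 (sagittal): ~1930 LPH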
The MTFMapper software I used for resolution analysis produces charts showing “smoothed” measurements. It’s possible to get at individual resolution measurements, however, in both the meridional and sagittal directions. I did my testing at 10 meters, which is a realistic shooting distance for 300 mm. Beware of published measurements that shoot a lens of this focal length at maybe 4 or 5 meters.

Sigma at 300mm f/5.6 (worst aperture) resolution chart detail, D610

Nikkor 300mm f/8.0 (best aperture) resolution chart detail, D610

Peak Resolution Results

The Sigma, at 300mm f/5.6, had peak resolution measurements of 48.5 MTF50 lp/mm, or 2329 lines per picture height. Again, this is at the Sigma’s worst aperture!

The Nikkor had the following peak resolution measurements:

f/4.5: MTF50 lp/mm = 25.1 (meridional and sagittal)
f/5.6: MTF50 lp/mm = 25.1 (meridional and sagittal)
f/8.0: MTF50 lp/mm = 40.2 (sagittal), 38.5 (meridional)
f/11.0: MTF50 lp/mm = 36.8 (sagittal)
f/16.0: MTF50 lp/mm = 33.5 (sagittal)

The Sigma totally smokes the Nikkor when comparing measurements at the same aperture. The Nikkor at f/8.0 and beyond, though, is quite respectable. Since I’m generally against trying to give a single number that represents resolution, the following section shows you the overall lens results.

Full-sensor Resolution Measurements

First, I’ll show the Sigma at 300mm and f/5.6, and then we’ll take a look at the Nikkor.

Sigma 150-600 MTF50 lp/mm resolution at 300mm and f/5.6

Sigma 150-600 MTF10/MTF30 contrast at 300mm and f/5.6

Now, here are the Nikkor 300mm results. I stopped measuring after f/16.0, although the lens stops down to f/22 (where diffraction is really kicking in to spoil the resolution).

Nikkor 300mm MTF50 lp/mm (smoothed) resolution at f/4.5

Definitely not up to present-day resolution standards.

Nikkor 300mm MTF10/MTF30 contrast at f/4.5

Nikkor 300mm MTF50 lp/mm (smoothed) resolution at f/5.6

Nikkor 300mm MTF10/MTF30 contrast at f/5.6

Nikkor 300mm MTF50 lp/mm (smoothed) resolution at f/8.0

Nikkor 300mm MTF10/MTF30 contrast at f/8.0

Nikkor 300mm MTF50 lp/mm (smoothed) resolution at f/11.0

Nikkor 300mm MTF10/MTF30 contrast at f/11.0

Nikkor 300mm MTF50 lp/mm (smoothed) resolution at f/16.0

Nikkor 300mm MTF10/MTF30 contrast at f/16.0

Sample picture

Full picture sample

Crop from near the picture center

Conclusion

In the right hands, this Nikkor 300mm is capable of making beautiful photographs. The level of effort, skill, and patience required for an old manual-focus telephoto lens isn’t for everyone. And forget about birds in flight. And avoid placing your subject in the frame corners. I suppose I’m just sentimental, but I have no plans for ever letting go of mine. I think of it as a real collector’s item. #review

  • Reverse that Lens for Extreme Close-ups

    When you do close-up photography, there’s a whole new set of rules for getting quality results. I’m talking really close up. Believe it or not, your lens will perform better when it’s mounted in reverse. It will also magnify the image more. When you get this close, you’re also going to have to learn about focus-stacking. I have an article on my close-up hardware that is located here. An article on stacking software is located here. A related program I also use is called “CombineZP”, which has similar stacking features, plus a few more. There are many programs that feature focus stacking; I try to stick with recommending stuff that is free.

Some lenses that aren’t meant for macro photography can become quite useful when they’re mounted in reverse. My favorite bellows close-up lens has 52 mm filter threads, which fits my “BR-2” lens reverse ring. For my lenses with larger filter threads, I use “step-down” rings to step from the larger thread diameter down to the 52 mm thread size. I haven’t seen any vignetting from doing this, so don’t worry about it being a problem.

I’ll be talking about Nikon lenses here. All of their newer lenses have the “G” designation, which means they have the “feature” of no aperture ring. Believe me, you’re going to need their older macro lenses if you want to get into the larger-than-life game. If you reverse and/or mount a lens on a bellows, you’re going to lose the electronic connections with your camera, and therefore electronic aperture control. With the Nikon auto-focus lenses that have an aperture ring (mostly the “D” lenses), you can unlock their minimum-aperture setting and have full use of their aperture. For even older manual-focus lenses, their aperture rings “just work” as-is. You’ll always want to stop down the lens (typically to f/8) for best quality.

At high magnifications, the depth of field becomes too shallow to be useful, which is where the focus-stacking software comes into play. Most of my macro shots are stacks of typically 20 to 80 shots. I move the lens on the bellows rack by about 0.2 to 0.5 mm per shot, until I’ve photographed my subject from front to back in slices. I also use a ring light mounted on the (now front-facing) rear of the lens, which I slip over the BR-3 ring that’s mounted to the lens rear. A ring light vastly simplifies lighting and also helps with focus. There are flash and continuous-light ring lights; I prefer the continuous light, but vibrations can be a challenge. Stacking photos obviously means that it’s limited to static subjects, such as deceased bugs. Please don’t kill anything just to photograph it; very uncool.

60 mm Micro Nikkor AF-D reverse-mounted. A bee is checking it out.

The shot above shows the 60 mm f/2.8 Micro-Nikkor AF-D lens with step-down rings to attach its 62 mm filter threads to the 52 mm BR-2 lens reverse ring. The LED ring light shown slips over the BR-3 ring mounted on the rear of the lens. I use the PB-4 bellows. You can find modern equivalents of this gear on the web, or maybe locate the original equipment on eBay. I normally use my 60 mm Micro-Nikkor mounted directly on my camera and stick with magnifications of life-sized or less, plus electronic flash. I just wanted to point out that the AF-D lenses have fully functional apertures when reverse-mounted on a bellows, but you need step-down rings to make this combination work.
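As a rough sanity check on those 0.2 to 0.5 mm focus steps, you can estimate total depth of field at high magnification with the common close-up approximation DOF ≈ 2·N·c·(m+1)/m². This is a standard textbook formula, not one from my articles, and the 0.02 mm circle of confusion is my assumption:

def total_dof_mm(f_number, magnification, coc_mm=0.02):
    """Approximate total close-up depth of field:
    DOF ~ 2 * N * c * (m + 1) / m^2, with marked f-number N,
    circle of confusion c (assumed 0.02 mm), and magnification m."""
    m = magnification
    return 2 * f_number * coc_mm * (m + 1) / (m * m)

for m in (1.0, 2.0, 4.0):
    print(f"{m:.0f}x at f/8: DOF ~ {total_dof_mm(8, m):.2f} mm")
# 1x: ~0.64 mm, 2x: ~0.24 mm, 4x: ~0.10 mm -- so 0.2-0.5 mm steps fit
# low-to-moderate magnification, with finer steps (and more frames)
# needed as the magnification climbs.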
Nikkor 105 mm f/2.5 Reverse-mounted, including lens shade

The photo above shows my 105 mm f/2.5 Nikkor (pre-AI!) reverse-mounted on the PB-4 bellows. This lens allows a magnification range from 0.28X through 1.6X on the bellows. For lower magnifications, the working distance is as large as 16 inches (which even allows use of the lens hood). At maximum magnification, the working distance is reduced to about 115 mm. I keep the lens parked at its infinity setting. This lens isn’t as optically good as the modern 105 mm f/2.8 G Micro Nikkor, but at least it has a working aperture ring for the bellows.

When you want to try really, really magnified subjects, you can try mounting a short-focal-length lens. I have tried my 20 mm lens, but I don’t like the image quality. My favorite lens on the PB-4 bellows is my old 55 mm f/3.5 Micro-Nikkor. I have many close-up shots on my gallery page taken with it. I can get magnifications anywhere from 1.68X through 4.3X when it’s reverse-mounted. The quality is simply sublime. It has a near-constant working distance of 75 mm at any magnification setting, which works fine with my LED ring light. This is a bit too close for most live bugs, however, since they’re too skittish for it. The LED light also cuts into the working distance range, so I only use it for static subjects.

105 mm f/2.5 Nikkor (pre-AI) reversed, 1.5X focus stack

While it isn’t optically stunning for macro work, the quality of this 105 mm is very good when reversed.

55 mm f/3.5 Micro Nikkor reversed, 4.2X

It can be fun to try going way beyond life-size with a bellows. Did you know that a light bulb filament has coils within coils? You’d never know it if you weren’t able to see beyond life-sized. The blue coils are made of tungsten; they can withstand the extreme temperatures inside a light bulb.

Beware that vibrations can get outrageous at these high magnifications. I use the “mirror-up” or live-view mode when using continuous lighting. If your camera supports it, you should also enable electronic front-curtain shutter mode. I always use either a wired or wireless remote shutter release. Electronic flash will, of course, freeze the subject motion.

I find extreme close-up photography very rewarding, yet challenging. You get to explore things that are otherwise invisible. If you aren’t the patient type, then this pursuit isn’t for you. This is yet another example of how science (focus-stacking software and modern computers) enables a whole new area of art. It’s a great time to be alive. #howto
