- Sharper Moon Shots with AutoStakkert
When you want to get to the next level in getting really sharp distant object photos, like the moon, what do you do? Do you really need to get that $16,000-plus 800mm Nikkor? There’s an enemy that keeps you from your sharpness goal, no matter how much you spend on gear. It’s called the atmosphere. So how do you minimize atmospheric “shimmer”? Here’s where software (and science) can come to the rescue.

Shooting the moon can be frustrating, for many reasons. After you get your big lens and really stable tripod, you quickly find you’re not done quite yet. You flip up the camera mirror, use a remote release, and even invoke the Electronic Front Curtain shutter. Even at a motion-freezing high shutter speed, you still aren’t getting satisfactory resolution. Evidently, eliminating subject motion and vibration still isn’t enough. Your next step to sharpness is based on image stacking. You might think that you need a motor-driven “equatorial mount” to counteract the Earth’s rotation to successfully combine your multiple shots, but actually you don’t. The software can fix that. The software I’m going to discuss isn’t limited to the moon or the planets. It can also help with any distant terrestrial landscape shots, as long as your subject holds still.

The key to sharpness is based on statistics. Most of the time, details of your subject are in the same location, but with a shimmering atmosphere, sometimes they move a bit. If you take several shots of the same subject and look for details that are “usually” present in each of the photos, you can combine these shots into a single sharper picture.

Your camera’s focusing system is another sharpness culprit. As soon as your focus system thinks the focus is “good enough”, it stops trying to focus further. As a result, you’ll find that some shots are sharper than others. The software recognizes this too, and is capable of automatically selecting only the “best” shots it locates in a series (a ‘stack’).
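The statistical idea can be sketched in a few lines of Python (a toy illustration with NumPy, not AutoStakkert’s actual algorithm): if you average N aligned frames, detail present in every frame survives, while the random shimmer and noise component shrinks roughly as 1/√N.

```python
import numpy as np

rng = np.random.default_rng(42)

# A pretend "true" subject: a sharp-edged 64x64 test pattern.
truth = np.zeros((64, 64))
truth[:, 32:] = 1.0

# Simulate 24 exposures, each corrupted by random noise ("shimmer").
frames = [truth + rng.normal(0.0, 0.2, truth.shape) for _ in range(24)]

# Stack by simple averaging: detail common to every frame survives,
# while the random component averages toward zero.
stacked = np.mean(frames, axis=0)

single_err = np.abs(frames[0] - truth).mean()
stacked_err = np.abs(stacked - truth).mean()
# stacked_err comes out far smaller than single_err
```

This is why more frames always help: with 24 frames the residual error is roughly one fifth of a single exposure's.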
The program I’m going to describe is called “AutoStakkert”, version 3.0.14 for 64-bit Windows. I’m using it on Windows 10. It’s available on other operating systems. This free program can be located here. The program author is Emil Kraaikamp. There are other astro-stack programs available, of course. Learning their usage nuances can be really time-consuming, so I can in no way claim that AutoStakkert is the best one. I just know that it is capable of doing what I want it to.

I converted my raw photos into 16-bit TIF files to use the program, but it accepts a variety of image formats. It doesn’t accept raw formats, though. There are many, many options available with this program, but I’ll describe a couple of recipes that work for me. Keep in mind that the intended users of this program are astronomers, not photographers. I have had best success when using at least 20 pictures in a stack. I’ve seen extreme examples where users have processed more than 10,000 shots in a stack (frames from a video) with this program! The more atmospheric shimmer, the more shots you’ll need to counteract that shimmer. With newer cameras starting to offer 4K video, this is something to keep in mind. Before I forget to mention it, this program can output a ‘sharpened’ photo, but I don’t like the result (totally over-sharpened with haloes). I use the un-sharpened output and post-process it with my favorite photo editor instead.

Finished stack result, after applying an un-sharp mask.

Using the Program

Run the program “AutoStakkert.exe” as an Administrator (right-mouse click on the file to do this). I believe the program author is from the Netherlands, hence the unusual program name. This program doesn’t like raw format, so you’ll need to convert your photos into any of a variety of image formats (I use 16-bit TIF). For my moon shots, I don’t bother to re-center the moon in the frame to counteract the Earth’s rotation.
The software takes care of that, when you choose the “Planet (COG)” Image Stabilization option. If you’re shooting distant landscapes, where your subject isn’t moving, you need to use the “Surface” Image Stabilization option instead. If you don’t use a tripod for this, then you might as well stop reading the article at this point. For the other “Image Stabilization” options, I used the “Dynamic Background”, but I honestly don’t understand its impact on the results.

Leave the defaults in the “Quality Estimator” section. These are “Laplace” delta, “Noise Robust” 4, and “Local”. The “Noise Robust” value should be increased for noisier or dimmer subjects and decreased for higher-quality input photos. For really high quality shots, a Noise Robust value of 2 is suggested. I leave the “Expand” option alone (it will change to “Crop” if you click it); “Expand” keeps the output at full size. The “Local” setting uses each alignment point to further assess each frame’s quality, versus “Global”, which uses the entirety of each frame.

Click the “1) Open” button, and browse to the folder with your (TIF, JPG, etc.) multiple shots to process. Use the “control” or “shift” keys to select the desired photos to process as a stack. After clicking on “1) Open” and selecting the 16-bit TIF photos, I press the “Play” button to see if the automatic rough alignment was successful. This rough alignment counteracts the rotation of the Earth between the shots, assuming you don’t bother to realign the moon in your viewfinder. The “Play” button starts a slide show running. Image quality grading numbers get displayed next to the “F#” (frame number) on the upper left side of the photo-display dialog. You can click in the “Frames” progress bar to manually step through the image stack, too. This lets you easily compare how sharp each shot is, relative to each of the other shots. Click “Stop” to halt the slide show.
If you have selected “Planet (COG)”, the stack of photos should already be roughly aligned with each other. If you’re trying to stack a landscape and selected the “Surface” radio button instead of “Planet”, you might want to alter the “Image stabilization anchor” location and window size (green X with green rectangle). While in the right-hand dialog showing you one of your photos, you should probably press the “9” key to get the largest “anchor point” area (a green rectangle) The smallest anchor rectangle uses a value of “1”. Smaller number selections will decrease the anchor rectangle selection size. Hold the control button and click on the desired anchor center, which should include a detail that exists in every shot of your (landscape surface) stack. If your rough alignment doesn’t succeed, then unfortunately further stacking operations will likely fail as well. You can delete any shots where the subject moved too far and then try again. Screen shot after photo stack analysis, before clicking “Place AP grid”. Click “2) Analyse”. This will perform an initial quality assessment of the selected pictures, and then decide which are the best ones. It generates a plot of the shot quality as well. The program will place your shots in order of decreasing sharpness. The gray line in the plot is in the same order as the input file stack, and the green line is the sorted order of the frames. Click on the “Frames” button to switch between sorted or original input frame order, and use the slider to switch from frame-to-frame (or else type in the desired shot number). The “Frames” button turns green when this feature is available. If you hover the mouse pointer in the slider area, the tool-tip text will indicate the active sorting order (“The frames are now sorted by quality”). 
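As an aside, the kind of quality grading that “Analyse” performs can be approximated with a Laplacian-based sharpness score (a hedged Python sketch; AutoStakkert’s real “Laplace” estimator is more sophisticated than this): sharp frames carry more high-frequency energy, so ranking by the variance of a Laplacian response sorts frames from sharpest to softest.

```python
import numpy as np

def laplacian_sharpness(img):
    """Crude sharpness score: variance of a 4-neighbour Laplacian.
    Sharper frames contain more high-frequency energy and score higher."""
    img = img.astype(np.float64)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def rank_frames(frames):
    """Return frame indices sorted sharpest-first, like the sorted F# order."""
    scores = [laplacian_sharpness(f) for f in frames]
    return sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)
```

A frame with a crisp crater edge scores higher than the same frame softened by shimmer, so sorting on this score is all the "best frames first" ordering requires.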
“Frame” slider/input box to view stack images and their quality rating

Note the “F#” below the slider, such as “F#2 [9/24]”, which indicates that the second shot (file) in the stack is the ninth sharpest of the 24 frames. This example frame is in the “top 34.8%” of the entire stack, and has a quality rating of “Q 59.9%”. You generally want a photo quality rating of 50% or better in your final stack. There is a zoom slider and horizontal/vertical sliders to magnify and shift the view of the selected photo in the stack. This is an under-appreciated program feature. You might have hundreds of photos, and it would be a terrible chore to manually figure out which ones are the sharpest. This feature automatically finds them and sorts them.

You’ll get an error (!#@Anchor) if your shots aren’t aligned well enough for analysis. You’d probably get this error if you did a whole-moon shot but selected “Surface” instead of “Planet (COG)”, and the moon was in a different location in each shot. I presume “!#@Anchor” is some form of Dutch swearing.

If the Analysis looks good (view the graph for a nice continuous plot showing a gradual decrease in image quality of the sorted shots), you’re ready to select the final alignment points. For quality input images, select a “small” alignment point size (AP Size) of 24. For lesser quality images, select a larger number. I have experienced alignment mistakes when using larger alignment point sizes. I’d suggest you use the automatic alignment point creation, which will put many points on your image (see the little blue rectangles with red dots in the image below). Lots of points are needed for quality alignment of the shots in the stack. There’s a manual placement option (“Manual Draw” checkbox), although I haven’t had good success with it. After Analysis, there will be a red rectangle over your displayed photo.
If you want to try placing manual alignment points, don’t put any points outside of this rectangle, since details out there won’t be present in every shot. Click “Place AP grid”. This is the automatic way to get the alignment point grid added to your displayed photo. This is fast, easy, and lazy, which I’m all for. It will put a grid of points over the entirety of your subject, but avoids the black background (if you’re shooting moon shots). There’s an “Alignment Points” “Clear” button, if you decide you’re unhappy with your detail selections (and you want to start over). You can try changing the alignment point size, if you wish to experiment with that option.

In the left-hand dialog above, I have a value of “30” (green box) for the “Frame percentage to stack” in the section labeled “Stack Options”. This will cause the program to only use the best 30% of the shots in the final processed shot, and it will throw out the worst (most blurred) shots. Use the “Quality Graph” and “Play” results to help you decide on the percentage of sharp shots you want to retain for the final stacking process. The “Normalize Stack” option will enforce a consistent brightness level for each shot, and isn’t typically needed unless you have a non-black sky with your moon. The “Drizzle” option was originally developed for the Hubble telescope. It is intended to take under-sampled data and improve the resolution of the final image. This option doesn’t seem to help my shots any. It will really slow down the stack crunching if you select it.

I selected “TIF” for the output format of the final processed shot (under “Stack Options”), which in this case will be placed into a folder next to your input photos, called “AS_P50”. This folder name indicates it was created by AutoStakkert, and has the results of selecting “50 Percent” of the input shots. I left “Sharpened” un-selected and “Save in Folders” selected.
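Back to the alignment grid for a moment: the “Place AP grid” behavior (a grid of points over the subject that skips the black sky) can be imitated in a few lines. This is a toy Python sketch of the idea, not the program’s actual placement logic:

```python
import numpy as np

def place_ap_grid(img, ap_size=24, min_brightness=0.05):
    """Drop alignment points on a regular grid spaced ap_size apart,
    skipping any point whose surrounding patch is essentially black sky."""
    points = []
    half = ap_size // 2
    for y in range(half, img.shape[0] - half, ap_size):
        for x in range(half, img.shape[1] - half, ap_size):
            patch = img[y - half:y + half, x - half:x + half]
            if patch.mean() > min_brightness:  # keep only lit detail
                points.append((y, x))
    return points
```

Run against a frame with a bright moon disk on black sky, this keeps the points on the disk and drops the ones floating in empty background, which is exactly what you see the program do.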
I’m not a fan of the sharpened results from this program, but it can still be a useful evaluation tool, even if it’s not good “art”. You’ll get an extra output file with “_conv” added to its name if you select “Sharpened”.

Autostakkert after “Analyse” and “Place AP grid” is done

Notice in the screen shot shown above that the program automatically added 1002 alignment points onto the photo after clicking the “Place AP grid”, and added the text “1002 APs”. When I have used fewer than 300 points, I have noticed occasional alignment errors in the final results. Now, click on “3) Stack”. And wait. Then, wait some more. You’ll get some progress messages with little green check marks and how much time each step took as it completes. Expect several minutes to elapse before the stacking is complete. The finished output files will be in TIF format if you matched my TIF output format selection. The result pictures include an unsharpened image and also a sharpened image (with “_conv” at the end of the file name). As I mentioned, I don’t like how this program does sharpening, so I post-process the unsharpened stacking result in another photo editor. The finished result (TIF) file has “_lapl4” and “_ap1002” as a part of the file name, because in this example I used the “Laplace” delta, noise robust 4, and created 1002 alignment points.

Stacking has completed.

Note in the shot above that you can see green checkmarks with timing measurements. This section gets filled in as the program progresses. Finished results (TIF files here) go into the “AS_P50” folder, since 50 percent was selected for the “Frame percentage”. If you had chosen 70 percent, you’d have an “AS_P70” folder instead. You’ll find that the program is smart enough to not only shift your photos for accurate alignment, but it also applies rotation correction! Impressive.

Single (sharpened) shot example detail. NOT a stacked photo.
The picture above is the best single-shot photo I had to work with, which has been post-processed. It is actually missing some subtle details and also has some ‘false’ details, all due to (minor) atmospheric shimmer. It’s pretty good as-is, but can still stand some improvement. The un-cratered “mare” are particularly noisy and contain some misleading ‘false’ detail. I shot this picture with the moon higher in the sky to avoid atmospheric effects. Cold air and higher elevation would have helped, too.

Autostakkert final processed shot detail, no sharpening.

The shot above shows the result of using the best 50% of my stack of 24 original shots. It still needs post-processing (a contrast adjustment and an unsharp mask). If I had shot many more photos for the stack, the quality would improve even more.

Autostakkert final processed shot detail, sharpened

Shot detail using Registax wavelet processing

If you compare the details between the “single shot” and the finished AutoStakkert stacked (and sharpened) result, you can see several extra details that show up in the stacked picture. Note the smooth surfaces are starting to show subtle shading, which is missing in any of the single shots. The Registax program with layered wavelet sharpening can enhance details slightly better as well, although it starts to look artificial to me. I added this shot just for fun; I don't think the Registax results look enough like "art" to be useful to me.

Autostakkert really does work. I’m certainly not an expert at using this program, but it’s clear to me that stacking photos can absolutely increase the level of detail that moon (and general landscape) shots contain. It’s almost like getting a better lens than you really have. You could, if you’re inclined to do so, switch to Live View and even shoot a movie (4K or 8K, please) of your subject (converted to AVI) and Autostakkert can use that as input, too.
Landscapes

If you photograph a distant subject, especially on a warm day, heat shimmer can be severe. Using the “Surface” option (instead of “Planet”), you can dramatically improve subject detail if you use a tripod and take at least a few dozen shots for stacking.

Distant landscape “Surface”, with many alignment points

The screen shot above shows the selected options for processing a stack of distant (about ½ mile!) landscape shots. Unlike moon shots, you must keep your subject framed exactly the same shot-to-shot for “Surface” processing. If you look carefully, you’ll notice that the auto-alignment grid shows about 27,000 points (!). Just like moon shots, you can “Play” the stack of frames to evaluate sharpness and alignment. Try to stack only the frames that have a quality rating of 50% or better, and discard any frames that don’t align well relative to their neighboring frames.

My best single shot in the stack, sharpened, 100% magnification, 600mm

The shot above shows more dramatic heat shimmer, due to the extreme distance. This is actually the best of many frames I shot. Fine branch details are obliterated.

Stacked result detail, sharpened, 100% magnification

Comparing the above pair of detail shots, you’ll notice that the stacked result brings out really fine details that no single shot can deliver. This example used 10 shots of the stack; more would have been better. If there had been more atmospheric shimmer, the differences between single shots and the stacked result would have been more substantial. You'll need to crop the edges of your finished stack result, much like when you do macro focus-stacking. Keep this in mind when framing your landscape shots.

Two miles away, 600mm

Sharpest single shot detail. LOTS of heat shimmer at 2 miles

Stack of sharpest 40% from 106 total shots. HUGE difference!

Conclusion

If you’ve got the time and motivation to get the very best out of your gear, then give this program a try.
You might just find Autostakkert becoming a welcome part of your tool kit. Don’t hold your breath for Photoshop or Lightroom to include features like these. If you’d like to read more explanations of this software, here’s a handy link. The moon photos in this article were made using a Sigma 150-600mm Contemporary at 600mm f/8.0 1/500s ISO 3200 (VR off) using a Nikon D500 with Electronic Front Curtain shutter. I converted the raw shots into 16-bit TIF, with noise reduction, for Autostakkert to use. I’ll bet you didn’t think this lens was as good as it is, did you? Once again, photos and science make a perfect blend for your art. #howto
- Stack Star Shots with CombineZP
How can you make one of those cool star field shots, without making the stars turn into streaks? Is there a way to take these pictures without having to buy special hardware? Yes.

Star shot made from multiple photos, using CombineZP.

There are a few things you will need to make good star field pictures. Not surprisingly, the better (and larger) your camera sensor is, the better chance you’ll have to produce quality results. A stable tripod is a must. A lens with a wide aperture will really help. Get a remote release (or a cell phone app) to trigger your shutter. Finally, you’ll need software to align and combine multiple exposures. What you won’t need is a motorized mount that rotates your camera to track the stars; that’s what the software is for.

There are many programs that “align” multiple exposures via a simple shift, but the list gets pretty short when you add the constraint to fix rotation. The Earth rotates, causing the stars to appear to move in an arc. I have been using a (free) program called CombineZP that can fix rotation, scale, and shift changes when combining pictures. The CombineZP program was written by Alan Hadley; he’s a really smart guy, but is a little bit challenged by grammar and spelling (to say the least). In case you’re interested, the program name refers to “stacking/combining photos in the ‘Z’ direction” and the “P” is short for “pyramid”. He uses a “pyramid” algorithm for some of his photo stacking operations, which is really great for solving many issues involving overlapping hairs on bug close-ups when doing focus-stacking. Intense stuff. His program’s Help system explains this and many other things. Alan’s program can do much more than bug shot stacks, as I’ll show you. Here's a link to his free program. The CombineZP program works with Windows 10 and many earlier versions of Windows; I use it in Windows 10 x64, although it’s a 32-bit program. I almost forgot to mention that you also need a really dark sky.
City lights and the moon will generally ruin your results. The higher altitude and lower humidity you can get, the better. The kind of photography I’m talking about here doesn’t work for night landscapes (with a horizon), because you can’t mix a fixed horizon with moving stars. This article is about pure star shots.

When I photograph stars, I will typically use my Nikon D610, which has a really, really good full-frame (FX) sensor. My go-to lens is my Tokina 11-16mm f/2.8 (DX), even though it’s not supposed to work on a full-frame camera. It works just fine at 16mm, although I typically crop the edges a bit to get rid of some vignetting and frame-edge astigmatism/coma. If I owned something as snazzy as the Nikkor 14-24 f/2.8, then I’d definitely use that instead.

To get my star photographs, I will typically set my camera on manual exposure, ISO 3200 (or less), f/2.8, and a shutter speed of 10 seconds for 16mm shots. Shutter speeds longer than 10 seconds at 16mm will result in star streaks. This kind of photography requires manual focus on infinity (it’s smart to pre-focus while it’s still daylight). These shots will be under-exposed, but CombineZP (and some other post-processing software) will brighten things up in the final picture. If you choose a longer focal length lens, then you’ll need to use shorter shutter speeds to avoid getting streaks instead of points of star light. Take a test shot and zoom in on it to view how much streaking you see. I’d recommend you take a minimum of 4 shots to combine. The more shots you have, the better results you can get. Don’t wait too long between shots.

Manual method using CombineZP for stacking shots:
- Convert your Raw star shots into 8-bit TIFF, LZW compression, with an image editor of your choice. CombineZP won’t accept Raw format or 16-bit.
- Start CombineZP.exe
- Click the “Enable Menu” icon to see the menu system.
- Click File | New
- Select the TIFF photos (in as-shot order), then wait until each shot is loaded into the stack.
- Select Stack | Size and Alignment | Auto (Shift + Rotate + Scale), then OK. (This will align and replace each shot in the stack with the aligned shots.) Your screen will probably look black after the alignment is done, but that’s normal.
- Select Stack | Enhanced Average to Out. For “Lowlight Gain (0=none)”, enter a value between 0 and 50, then press OK. For “Highlight Attenuation (0-1000, 0=none)”, enter 0, then press OK. For “Brighten (1000=stay same)”, enter 2000 (for 1 stop of brightening, 3000 for 2 stops, etc.), then press OK.

The “Enhanced Average” step lets you tune the exposure adjustment of the photos, and then combines them (and reduces noise via averaging the shots). When processing is done, mouse-drag a rectangle around what you want saved. In this case, it’s as if you’re using a crop tool. Click File | Save Rectangle As | myStarShot.png (You can choose an output format from jpg, tif, bmp (24 or 32 bit), gif, png.)

Now, your “stacked” shot is ready for final adjustment in your favorite photo editor. You will probably want to do additional noise-reduction, Levels and Curves adjustment, white balance adjustment, and apply an un-sharp mask.

Create a Macro to Automate Star Stacks

If you’re a little more ambitious, you can create a macro to do your star stacks, once you settle in on a recipe you like. Alan explains how to make macros for his program, but here’s a Cliff’s Notes version if you want to try it out. The CombineZP program has several collections of macros, saved in files that have the .CZM extension. Inside these collections, you can have up to 10 macros. Macro names that look like “_Macro4”, “_Macro5” etc. are place-holder (inactive) macros without commands in them (unless you put some there). Since the default macro set has 10 active entries, you’ll need to either make your own macro set or alter an existing macro set.
Find an appropriate “macro set” (.czm file) that has an available macro via Macro | Load Macro Set (I will choose “Enhancer.czm”). You’ll want to replace a place-holder name in the set with your new macro: Macro | Edit | Macros. Click on “_Macro 3” to alter it (if you used Enhancer.czm). Note that your new macro name cannot begin with an underscore character, or it won’t be runnable via a user click.

The Macro Editor, before any changes.

Rename an unused macro (one that starts with “_Macro”) to a name without an underscore. Here, we’ll call the new macro “Star Stack”. Add steps, along with any parameters each needs, by selecting a “Command” in the drop-down list. For the first command, we want to align the already-loaded stack of photos: Align the stack. Click “Update/Paste” to add the Align command to the macro. This command will replace each original star shot in the stack of loaded images with aligned ones (not touching your original .tif files). You now have a new “stack” of images to perform further operations upon.

Next, we want to get the “average” of each shot in the stack, to get rid of noise and atmospheric interference effects. We also want to enhance the light in each shot while combining it with the others.

“Enhanced Average to Out” command with (3) parameters

Click the “Update/Paste” button to save the averaging command. The “Enhanced Average to Out” command expects to operate on a stack of images (with any number of images in the stack). It will then place the results into the “Out” location, which is visible on the screen. Click the “Save Macro” button, once all of the steps are added. Click “Ok/Update” to exit the Macro Editor.

The finished Star Stack macro

The new macro stack

Click on the “X” to close the “Edit Macro” dialog. For use in the future, you will want to save this macro set into a new file.
Click Macro | Edit | Save Macro Set As | StarStacker.czm

Try out your new macro:
- File | Empty Stack (to clear out everything)
- File | New (select the original .TIF files of the star shots)
- Macro | Star Stack (it should now run and do both the alignment and averaging)
- Do the usual “save rectangle as” to save your results.

After you’re done running the new macro, you may want to restore the system to the default macro set (for focus-stacking). Click Macro | Restore Standard Macros. The program now looks like it did when you first started running it. You can now do regular focus-stacking operations. To get back to your new star macro, do this: Macro | Load Macro Set | StarStacker.czm

This particular example isn’t very sophisticated, but it shows you the way into the world of CombineZP automation. There are a great many more macro sets to explore that are provided with the program. You can use the help system to research the commands in the sample macros to learn more. Now, get out there and shoot the stars. #howto
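As a closing footnote to the exposure advice above (shutter speeds longer than 10 seconds at 16mm produce streaks), the rule of thumb checks out with a little arithmetic: the sky drifts about 15 arcseconds per second of time, and a short Python sketch converts that drift into pixels on the sensor. This ignores declination (stars near the celestial equator drift fastest), and the 5.95 micron pixel pitch is an assumed D610-class value.

```python
SIDEREAL_RATE_ARCSEC = 360.0 * 3600.0 / 86164.1  # sky drift: ~15.04 arcsec per second
ARCSEC_PER_RADIAN = 206264.8

def streak_pixels(focal_mm, exposure_s, pixel_pitch_um):
    """Approximate star-trail length in pixels for a camera on a fixed tripod."""
    drift_rad = SIDEREAL_RATE_ARCSEC * exposure_s / ARCSEC_PER_RADIAN
    streak_mm = focal_mm * drift_rad  # small-angle approximation
    return streak_mm * 1000.0 / pixel_pitch_um

# A 16mm lens, 10 s exposure, 5.95 micron pixels: a trail of only about 2 pixels,
# which is why 10 seconds is a comfortable limit at that focal length.
```

Double the focal length and the trail doubles too, which is why longer lenses force shorter exposures.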
- Nikkor 300mm f/4.5 pre-AI Review: A Blast From the Past
Back in the olden days, before computers were generally available, Nikon was making the nicest lenses you could get. How do these antiques stack up to modern lenses? I thought I’d take a look.

The Nikkor 300mm f/4.5 was my very first “good” telephoto. This thing even pre-dates “auto indexing”, although later I got a kit and converted it to AI (AI, or auto-indexing, was introduced in 1977). It does have Nikon’s NIC (Nikon Integrated Coating) multi-coating. Auto-focus hadn’t been invented yet (Nikon got into that game in 1986). Internal-focusing lenses were about a year away. Nikon’s “ED” (extra-low dispersion) glass hadn’t quite been introduced yet (it arrived in the next generation of this 300mm lens). We’re talking 1975.

To even the playing field a bit, I have picked my Sigma Contemporary 150-600mm lens for a comparison, which I’ll zoom to 300mm. This Sigma is definitely not the best lens out there, but I think it’s representative of what is widely available today (and it’s actually cheaper in today’s dollars than the Nikkor was in 1975). Back in the day, no self-respecting photographer would stoop to use a zoom lens; they were complete crap then.

This 300mm Nikkor lens was produced from 1975-1977. The aperture is 6-bladed, which is not very nice for “sunstars” or lights at night. It has a rotating, locking, non-removable lens collar that is excellent for balancing on a tripod. It has a wonderful permanent telescoping lens shade, which I sorely miss on today’s lenses. This lens looks, feels, and acts like it’s brand-new; I expect it to last well beyond my own lifetime. I can’t sufficiently describe how excellent this lens is for manual focusing. It has precisely the right damping, rotation range, and smoothness. The ‘feel’ of the focusing hasn’t changed at all over the life of the lens. Nikon built this metal lens to the highest possible mechanical standards. Don’t get me wrong, though. Manual focus on a long lens is generally a real pain.
Ever since Nikon abandoned the “split-screen” focusing screens, precise and fast manual focus has been a thing of the past. You can still get accurate manual focus on a long lens, but it pretty much requires the use of a tripod or really stopping down the aperture. It’s possible to buy focus screen replacements, but I heard Katzeye is out of business, and other makers’ screens cause really dark viewfinders. I configure my cameras with the “Non-CPU lens” menu setting, and shoot with aperture-priority (or manual) mode, so auto-exposure isn’t any different from modern lenses (except you turn the aperture ring instead of a wheel). If you haven’t used an AI lens before, note that you still get to focus and shoot with a wide-open aperture. Your camera does need to have an aperture-coupling lever, however (I heard they abandoned this on the D7500). Even though such things are totally correctable in post-processing anyway, I thought I’d mention that vignetting, distortion, and chromatic aberration are minimal on this lens. Oh, I forgot to mention that it has a 72mm filter thread size. Also, the lens only focuses down to 13 feet.

Nikkor 300mm f/4.5 AI-converted on Nikon D610 with lens shade extended

Resolution Testing

I haven’t ever seen any resolution analysis of this lens, so that’s what I’m going to concentrate on in this article. I used my Nikon D610 (24 MP, 5.95 micron pixels). I’m only showing the Sigma results at f/5.6 (where the Sigma resolution is at its worst). The MTFMapper software I used for resolution analysis produces charts showing “smoothed” measurements. It’s possible to get at individual resolution measurements, however, in both the meridional and sagittal directions. I did my testing at 10 meters, which is a realistic shooting distance for 300mm. Beware of measurements where they shoot a lens of this focal length at maybe 4 or 5 meters.
Sigma at 300mm f/5.6 (worst aperture) resolution chart detail, D610

Nikkor 300mm f/8.0 (best aperture) resolution chart detail, D610

Peak Resolution Results

The Sigma, at 300mm f/5.6, had peak resolution measurements of 48.5 MTF50 lp/mm, or 2329 lines per picture height. Again, this is at the Sigma’s worst aperture! The Nikkor had the following peak resolution measurements:

- f/4.5: MTF50 lp/mm = 25.1 (meridional and sagittal)
- f/5.6: MTF50 lp/mm = 25.1 (meridional and sagittal)
- f/8.0: MTF50 lp/mm = 40.2 (sagittal), 38.5 (meridional)
- f/11.0: MTF50 lp/mm = 36.8 (sagittal)
- f/16.0: MTF50 lp/mm = 33.5 (sagittal)

The Sigma totally smokes the Nikkor when comparing the same aperture measurements. The Nikkor at f/8.0 and beyond, though, is quite respectable. Since I’m generally against trying to give a single number that represents resolution, the following section shows you the overall lens results.

Full-sensor Resolution Measurements

First, I’ll show the Sigma at 300mm and f/5.6, and then we'll take a look at the Nikkor.

Sigma 150-600 MTF50 lp/mm resolution at 300mm and f/5.6

Sigma 150-600 MTF10/MTF30 contrast at 300mm and f/5.6

Now, here are the Nikkor 300mm results. I stopped measuring after f/16.0, although the lens stops down to f/22 (and diffraction is really kicking in to spoil the resolution).

Nikkor 300mm MTF50 lp/mm (smoothed) resolution at f/4.5

Definitely not up to present-day resolution standards.
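The two resolution units quoted above convert directly: a line pair is two lines, so lines per picture height is just MTF50 (lp/mm) times 2 times the sensor height, using the 24mm height of the D610's FX sensor. A quick Python check:

```python
def lines_per_picture_height(mtf50_lpmm, sensor_height_mm=24.0):
    """Convert MTF50 in line pairs per mm to lines per picture height
    (two lines per line pair, over the full sensor height)."""
    return mtf50_lpmm * 2.0 * sensor_height_mm

# The Sigma's 48.5 lp/mm works out to about 2328 lines per picture height,
# matching the ~2329 figure quoted above (a little rounding aside).
```

The same conversion puts the Nikkor's f/8.0 peak of 40.2 lp/mm at roughly 1930 lines per picture height.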
Nikkor 300mm MTF10/MTF30 contrast at f/4.5

Nikkor 300mm MTF50 lp/mm (smoothed) resolution at f/5.6

Nikkor 300mm MTF10/MTF30 contrast at f/5.6

Nikkor 300mm MTF50 lp/mm (smoothed) resolution at f/8.0

Nikkor 300mm MTF10/MTF30 contrast at f/8.0

Nikkor 300mm MTF50 lp/mm (smoothed) resolution at f/11.0

Nikkor 300mm MTF10/MTF30 contrast at f/11.0

Nikkor 300mm MTF50 lp/mm (smoothed) resolution at f/16.0

Nikkor 300mm MTF10/MTF30 contrast at f/16.0

Sample picture

Full picture sample

Crop from near the picture center

Conclusion

In the right hands, this Nikkor 300mm is capable of making beautiful photographs. The level of effort, skill, and patience required for an old manual-focus telephoto lens isn’t for everyone. And forget about birds in flight. And avoid placing your subject in the frame corners. I suppose I’m just sentimental, but I have no plans for ever letting go of mine. I think of it as a real collector’s item. #review
- Reverse that Lens for Extreme Close-ups
When you do close-up photography, there’s a whole new set of rules to get quality results. I’m talking really close up. Believe it or not, your lens will perform better when it’s mounted in reverse. It will also magnify the image more. When you get this close, you’re also going to have to learn about focus-stacking. I have an article on my close-up hardware that is located here. An article on stacking software is located here. A related program I also use is called “CombineZP”, which has similar stacking features, plus a few more. There are many programs that feature focus stacking; I try to stick with recommending stuff that is free. Some lenses that aren’t meant for macro photography can become quite useful when they’re mounted in reverse. My favorite bellows close-up lens has 52 mm filter threads, which fits my “BR-2” lens reverse ring. For my lenses with larger filter threads, I use “step-down” rings to step from the larger thread diameter down to the 52 mm thread size. I haven’t seen any vignetting by doing this, so don’t worry about that being a problem. I’ll be talking about Nikon lenses here. All of their newer lenses have the “G” designation, which means they have the “feature” of no aperture ring. Believe me, you’re going to need their older macro lenses if you want in on the larger-than-life game. If you reverse and/or mount a lens on a bellows, you’re going to lose electronic connections with your camera and therefore electronic aperture control. With the Nikon auto-focus lenses that have an aperture ring (mostly the “D” lenses), you can unlock their minimum-aperture setting and have full use of their aperture. For even older manual-focus lenses, their aperture rings “just work” as-is. You’ll always want to stop down the lens (typically to f/8) for best quality. At high magnifications, the depth of field becomes too shallow to be useful, which is where the focus-stacking software comes into play. 
Most of my macro shots are stacks of typically 20 to 80 shots. I move the lens on the bellows rack by about 0.2 to 0.5 mm per shot, until I’ve photographed my subject from front to back in slices. I also use a ring light mounted on the (now front-facing) rear of the lens, which I slip over my BR-3 ring that’s mounted to the lens rear. A ring light vastly simplifies lighting and also helps with focus. There are flash and continuous-light ring lights; I prefer the continuous light, but vibrations can be a challenge. Stacking photos obviously means that the technique is limited to static subjects, such as deceased bugs. Please don’t kill anything just to photograph it; very uncool.

60 mm Micro Nikkor AF-D reverse-mounted. A bee is checking it out.

The shot above shows the 60 mm f/2.8 Micro-Nikkor AF-D lens with step-down rings to attach its 62 mm filter threads to the 52 mm BR-2 lens reverse ring. The LED ring light shown slips over the BR-3 ring mounted on the rear of the lens. I use the PB-4 bellows. You can find modern equivalents of this gear on the web, or maybe locate the original equipment on eBay. I normally use my 60 mm Micro-Nikkor mounted directly on my camera and stick with magnifications of life-sized or less, plus electronic flash. I just wanted to point out that the AF-D lenses have fully functional apertures when reverse-mounted on a bellows, but you need to get step-down rings for this combination.

Nikkor 105 mm f/2.5 Reverse-mounted, including lens shade

The photo above shows my 105 mm f/2.5 Nikkor (pre-AI!) reverse-mounted on the PB-4 bellows. This lens allows a magnification range from 0.28X through 1.6X on the bellows. For lower magnifications, the working distance is as large as 16 inches (and therefore allows use of the lens hood). At maximum magnification, the working distance is reduced to about 115 mm. I keep the lens parked at its infinity setting. 
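If you like to estimate the slice step and shot count before you start, the standard close-up depth-of-field approximation gets you in the ballpark. This Python sketch is my own rule of thumb, not a formula from any stacking program; the 0.03 mm circle of confusion and the 50% slice overlap are assumptions you can tune:

```python
import math

def stack_plan(subject_depth_mm, magnification, f_number, coc_mm=0.03):
    # Approximate depth of field at close-up magnifications:
    #   DOF ~ 2 * c * N * (m + 1) / m^2   (c = circle of confusion, assumed)
    dof = 2 * coc_mm * f_number * (magnification + 1) / magnification ** 2
    step = dof / 2                       # 50% slice overlap, for safety
    shots = math.ceil(subject_depth_mm / step)
    return dof, step, shots

# A 10 mm deep subject at 2X and f/8 (illustrative numbers):
dof, step, shots = stack_plan(subject_depth_mm=10, magnification=2.0, f_number=8)
print(f"DOF {dof:.2f} mm, step {step:.2f} mm, {shots} shots")
```

Reassuringly, the numbers it spits out land right in the 0.2 to 0.5 mm steps and 20 to 80 shots I actually use.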
This lens isn’t as optically good as the modern 105 mm f/2.8 G Micro Nikkor, but at least it has a working aperture ring on the bellows. When you want to try really, really magnified subjects, you can try mounting a short-focal-length lens. I have tried my 20 mm lens, but I don’t like the image quality. My favorite lens on the PB-4 bellows is my old 55 mm f/3.5 Micro-Nikkor. I have many close-up shots in my gallery page taken with it. I can get magnifications anywhere from 1.68X through 4.3X when it’s reverse-mounted. The quality is simply sublime. It has a near-constant working distance of 75 mm at any magnification setting, which works fine with my LED ring light. This is a bit too close for most live bugs, however, since they’re too skittish for this. The LED light also cuts into the working distance range, so I only use it for static subjects.

105 mm f/2.5 Nikkor (pre-AI) reversed 1.5X focus stack

While it isn’t optically stunning for macro, the quality of this 105 mm is very good when reversed.

55 mm f/3.5 Micro Nikkor reversed 4.2X

It can be fun to try going way beyond life-size with a bellows. Did you know that a light bulb filament has coils within coils? You’d never know it, if you weren’t able to see beyond life-sized. The blue coils are made of tungsten; they can withstand the extreme temperatures inside a light bulb. Beware that vibrations can get outrageous at these high magnifications. I use the “mirror-up” or live-view mode when using continuous lighting. If your camera supports it, then you should also enable electronic-front-curtain shutter mode. I always use either a wired or wireless remote shutter release. Electronic flash will of course freeze the subject motion. I find extreme close-up photography very rewarding, yet challenging. You get to explore things that are otherwise invisible. If you aren't the patient type, then this pursuit isn't for you. 
This is yet another example of how science (focus-stacking software and modern computers) enables a whole new area of art. It's a great time to be alive. #howto
- Panoramas Using Raw Format with Lightroom and HDR Efex Pro 2
To get the best quality panoramas, there’s more to consider than just how well your pictures are stitched together. You want to stick with RAW format for as many of your editing steps as possible. If you use Lightroom 6 or newer, and you install the (still free) Nik HDR Efex Pro 2 plug-in, you can make maximum-quality panoramas and also have a large tool set for creativity.

Update 8-8-2020: The plug-ins from Nik aren't free anymore. You can get the latest plug-ins from DXO.

When shooting the pictures, try to keep about a 50% overlap between shots. Don’t forget to try vertical format shooting for a slightly taller panorama. Lightroom is also capable of multi-row panorama stitching. It’s best to shoot your pictures with a single manual exposure setting, so that the frames will match up better. If you’re careful, you can even get by without using a tripod (I use viewfinder grid lines as alignment guides).

HDR panorama using Lightroom and HDR Efex Pro 2

Select the RAW photos to stitch into your panorama

Before beginning panorama creation, you may wish to perform any “lens correction” steps on the individual shots, since Lightroom can still recognize the lens data at this point. The first panorama creation step, after you import your pictures into Lightroom, is to select the range of pictures to stitch together (click the first, then use the Shift key and select the last shot in the range).

Merge your shots into a panorama

Next, click Photo | Photo Merge | Panorama… You’ll get a dialog box that lets you decide between Spherical, Cylindrical, or Perspective projection. Click each selection to decide which projection type looks best for your panorama. Select “Auto Crop” to clean up the frame edges. Click on “Merge” when you’re satisfied with the projection type.

Select your new panorama

After the panorama is stitched, you’ll need to select it. Note the panorama is saved in DNG raw format, which lets you have maximum flexibility for further editing enhancements. 
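If you like to plan ahead, the number of frames needed for a given panorama width follows from your lens’s horizontal angle of view and the overlap percentage. A quick back-of-the-envelope Python sketch (the 40-degree lens view and 180-degree target are just example values, not from any particular shoot):

```python
import math

def pano_shots(target_deg, lens_hfov_deg, overlap=0.5):
    # Each shot past the first only adds the non-overlapped slice of its view.
    new_per_shot = lens_hfov_deg * (1 - overlap)
    return 1 + math.ceil(max(0, target_deg - lens_hfov_deg) / new_per_shot)

print(pano_shots(180, 40))        # 50% overlap -> 8 shots
print(pano_shots(180, 40, 0.25))  # 25% overlap -> 6 shots
```

More overlap means more frames, but it gives the stitcher much more to work with at the seams.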
“DNG” is the Adobe “digital negative” format that is very close to “raw”. The light range and color range (bit depth) are maintained, allowing for maximum highlight recovery, shadow recovery, and “tone mapping”. I tend to lump HDR and tone mapping together, but many photographers consider single-shot manipulations to be “tone mapping”, while multiple overlapping shots at different exposures are required to be considered HDR (high dynamic range). Now would be a good time to do the usual sharpening, noise reduction, color balancing, highlight and shadow manipulations, etc. Because the panorama is in DNG format, you have the maximum flexibility for editing at this stage. For many panoramas, you might be ready to simply export at this point into the finished file format, such as jpeg. Or maybe you’re ready to try the Nik HDR Efex Pro 2 plug-in. Assuming you want to try HDR and you’ve installed the Nik plug-in collection from Google, you’ll next need to select File | Export with Preset | HDR Efex Pro 2. Show a little patience here; it will take a while before Nik is ready. As an aside, Google no longer supports the Nik plug-in collection. As of this writing, though, it’s still available for free download from their web site here. I use the Nik plug-ins in Photoshop, Lightroom, and Zoner Pro.

Try out some HDR selections in HDR Efex Pro 2

Use Zoom and Navigator to inspect HDR shot details

HDR Efex Pro 2 output file format selection

Try out the various canned options and fine-tune controls in HDR Efex Pro 2. Don’t forget to use the Zoom/Navigator controls to inspect details. Most people either love or hate HDR. I fall into the love category. When you’re happy with the HDR effect you want, click on Save to return to Lightroom. HDR Efex Pro 2 won’t allow you to save your picture in raw format, so you should have finished your other editing steps in Lightroom before the conversion to HDR. Only jpg or tiff formats are available for output. #howto
- The Brenizer Method: Thin Depth of Focus
Here’s a very specialized kind of post-processing to simulate a lens with an impossibly thin depth of focus, yet a very wide angle. The photographer credited with developing the technique is Ryan Brenizer, who uses it mainly for wedding portraits. This kind of photo is intended to isolate the main subject, and makes it look as if you used something like a 20 mm f/0.2 lens wide open. This look is achieved by making a multi-row panorama while using a fast lens at a wide aperture. The effect might even remind you of something that a “LensBaby” might produce. This technique is for when you want a wide shot, yet you want to isolate the subject. I’ll show you the results of using an 85 mm lens at f/1.4. You’re supposed to use a tripod to enable good control for aligning each overlapped shot (overlapped by both row and column) in the whole grid of photos. On purpose, I tried to see what would happen if I instead hand-held the camera (a sort of worst-case scenario). As with any panorama, it’s best to overlap the shots by 30 to 50 percent. A multi-row panorama requires that the shots overlap not just side-to-side, but also above and below. You don’t have to take the shots in any particular sequence; you can start with your main subject and expand out from there, too. You might want to count your shots for each row, so that you get more predictable results when they get stitched together. Try to shoot all of the pictures containing your main subject before it (or they) can move. The shots that are out of focus aren’t as critical as far as minor movement is concerned. If your panorama shooting is going to be time-consuming, then do your people subjects a favor and get their shots in the sequence done first. I used Lightroom 6.14 to create my multi-row panoramas. You use the same menu options for creating a multi-row panorama that you’d use for a single row; Lightroom just figures out what to do. You can, of course, use any software that handles multi-row panorama stitching. 
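A rough rule of thumb (my own approximation, not Brenizer’s math): the stitched frame behaves like a lens whose focal length and f-number are both divided by the factor by which the panorama widens the field of view. A quick Python sketch with illustrative numbers:

```python
# "Equivalent lens" for a stitched wide-aperture panorama. The fov_ratio
# of 4.0 below is an illustrative value, not taken from the article.

def equivalent_lens(focal_mm, f_number, fov_ratio):
    # fov_ratio: how many times wider the stitched frame is than one shot.
    eq_focal = focal_mm / fov_ratio
    eq_fnum = f_number / fov_ratio
    return eq_focal, eq_fnum

# 85 mm f/1.4, stitched until the view is 4x wider than a single frame:
focal, fnum = equivalent_lens(85, 1.4, 4.0)
print(f"~{focal:.0f} mm f/{fnum:.2f}")
```

Stitch wide enough, and the numbers head toward the absurd f/0.2 territory described above; no real lens can give you that look in one frame.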
A word to the wise: re-sample your input photos to a lower resolution. When you create a multi-row panorama, the finished picture can be huge and take a really, really long time to stitch. You’ll thank me for this tip. As with any panorama, it’s best to stick with both manual exposure and manual focus. Don’t change either while shooting the collection of pictures you’re going to use. To physically create the panorama, start by selecting the range of pictures and then click on: Photo | Photo Merge | Panorama… If Lightroom is unable to stitch the pictures, you might try one of the other “projection” options before giving up. I have found that “spherical” is the most forgiving. You might find that one projection option gives you very different picture height-to-width proportions compared to the others. All it will cost you is some time to explore various projection effects. It’s often the case that a little re-touching is required, à la the “healing brush”, in the stitched panorama. This will be especially true if subjects are moving a little during the shooting operation. I'm getting quite fond of using Lightroom for stitching, because I usually have very little cleanup work to do. You might also need to perform a little “distortion removal”, if you are close to objects or your panoramas are very wide. Avoid backgrounds that have straight lines, if you want to save yourself a little work. I feel compelled to mention that extreme perspective distortion might be exactly the effect you want; some rules beg to be broken.

Lightroom panorama stitching via the ‘Develop’ module

I found that Lightroom cropped off several of my hand-held shots, primarily from having ragged row widths. I would have had much better success if I had used a tripod to control shot-to-shot alignment. Not too surprising. The shot above is composed of roughly 3 rows of 5 shots per row, taken in vertical format. 
I'm glad I tried this hand-held experiment to convince myself that I don't have to avoid attempting this technique just because I'm somewhere without access to a tripod.

Razor-thin depth of focus over a wide angle

Conclusion

This type of photographic technique probably won’t find itself being useful on a daily basis, but it might be just the ticket for a special portrait. It’s just one more tool that you should be aware of. If you want your shots to stand out from the crowd, give the Brenizer Method a try. #howto
- Create Your Own Planet
Here’s a real power trip: make your own worlds! You’ll need a photo editor that lets you map your photo into ‘polar coordinates’. I use Photoshop to get this fun effect, because it has a distortion filter to project the picture pixels into polar coordinates, which is called “mapping”. The ‘planet’ effect won’t work for all subjects; you need to preplan what you shoot to help yourself out. I will typically start with shots that I stitch into a panorama, although the effect can work with a single photograph, too. Try to find a subject that has similar characteristics on both the left and right sides, such as the same height of sky and foreground. You will be doing yourself a favor if you use manual exposure for stitched photos, so the light will balance best between the opposite sides of the final merged photo.

A really crowded planet

Let’s begin with a panorama that has roughly matching left and right sides. While I shot the original sequence, I was trying to visualize how well the opposite sides of the view would wrap around and touch each other.

Start with a balanced shot

I was careful to avoid allowing the main subject matter to extend to the top of the frame, because it would cause an ugly effect that would be difficult to blend after mapping the shot into polar coordinates. I arranged for both the sky and the water in the shot above to be reasonably easy to blend together, making it a good candidate for the planet effect. The left and right sides of the panorama have about the same height of sky and water, and their brightness is similar, too.

Make the image into a square

Before you can convert into polar coordinates, the picture needs to be in a square format, so you need to make the image width match the image height. Make sure “Constrain Proportions” isn’t selected.

Turn the image upside-down

You will also need to rotate the square image 180 degrees prior to conversion into polar coordinates. 
If you skipped this step, you would end up with a “tunnel” effect instead of a “planet” effect. You should try leaving the shot un-rotated sometime to see what happens; there may be subjects where a tunnel effect looks good.

Map the upside-down photo into polar coordinates

Now you can convert your picture into polar coordinates (from rectangular coordinates). Click Filter | Distort | Polar Coordinates. Make sure the image scale is small enough to preview the conversion effect (click the little “minus” button).

Smooth the seams and picture frame edges

Now, you’ll need to use the “healing brush” to get the seams and edges of the frame to blend well. There are always some edge spokes you need to tame. You might use the clone stamp here, too. This is where the real effort takes place. It can take a fair amount of finesse to blend the seams to get the shot to look good. Pre-planning your original photos will help minimize how much time and trouble there is to blend the final picture.

Rotate 180 degrees to get right-side-up again

Now rotate the picture to get it back to the original orientation. You can of course rotate any amount and then crop the shot to your desired proportions, too. Truth be told, I did a little extra 'healing brush' work after what's shown above using Zoner Pro. I like its more sophisticated healing brush tool a bit more than the one in Photoshop. I don't think any single photo editor is the best at everything. There you have it! Your very own planet. Go easy on this technique; a few planet shots are fun, but turning everything into a planet is a bit much. #howto
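For the curious, the flip-then-polar recipe can be written out directly as an inverse pixel mapping. This NumPy sketch is my own illustration of the math (not how Photoshop implements its Polar Coordinates filter), run on a tiny synthetic “panorama” so it’s self-contained:

```python
import numpy as np

def little_planet(img, out_size=400):
    # img: 2-D array, sky in the top rows, ground in the bottom rows.
    h, w = img.shape
    flipped = img[::-1, ::-1]            # the 180-degree rotation step
    c = (out_size - 1) / 2.0
    yy, xx = np.mgrid[0:out_size, 0:out_size]
    r = np.hypot(xx - c, yy - c)         # radius from the output center
    theta = np.arctan2(yy - c, xx - c)   # angle, -pi..pi
    # Radius 0 maps to the top row of the rotated image (the ground),
    # so the ground wraps into the planet's center and sky ends up outside.
    rows = np.clip((r / c) * (h - 1), 0, h - 1).astype(int)
    cols = (((theta + np.pi) / (2 * np.pi)) * (w - 1)).astype(int)
    return flipped[rows, cols]

# Tiny synthetic "panorama": bright sky on top, dark ground below.
pano = np.vstack([np.full((10, 100), 200), np.full((10, 100), 30)]).astype(np.uint8)
planet = little_planet(pano, out_size=101)
print(planet[50, 50])   # center pixel comes from the ground -> 30
print(planet[0, 0])     # corner pixel comes from the sky -> 200
```

Seam blending is exactly where this naive version falls down, of course; the left and right edges of the source meet along one radial line, which is why the healing-brush step above is where the real work happens.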
- Nikon D500: Multiple Buttons, Multiple Focus Modes
The newer high-end Nikons, including the D500, let you assign different focus modes to different buttons. Why would you want such a thing? It’s all about fast reactions. Many camera models and camera generations have of course allowed you to set the “focus mode selector” switch to auto-focus and then press the “AF-mode” button and spin the main command dial to AF-C for continuous auto-focus. Similarly, many models have the “AF-ON” button, or buttons that can be assigned this focus-on-demand feature. That’s only the beginning. It’s silly to ever select AF-S mode instead of AF-C mode, since all you have to do is stop pressing the AF-ON button (while in AF-C mode) to stop focusing. A much more subtle focus requirement is to do something like ignore objects near your desired subject, or to ignore a branch in front of your subject. As soon as you figure out how to select the desired number of focus points or how to set near-subject focus priority, something changes to spoil your shot. Now, you need to start all over again, because your camera insists on focusing on a near branch, or maybe you can’t keep that single focus point (single-point AF area mode) on your erratically-moving target. The point is, the focus requirements never seem to stop changing and you just can’t keep up. You’re tired of missing those shots. What to do? The newer high-end Nikons let you at least triple your chances of getting the shot. Now, you can assign multiple buttons with auto-focus, and each button can have a totally different focus mode assignment. The Nikon D500, for instance, will let you assign the “Pv”, “Fn1”, “Sub-selector” (joy stick), “AF-ON”, and your battery grip “AF-ON” buttons with different focus modes on each one of them! On my D500, I presently have the following button assignments:

AF-ON = D25, thumb control, for ‘general-purpose’ focus.
Pv = Group Area, middle finger control, for near-subject priority.
Sub-selector = Single-point, thumb control, for precision focus. 
Grip AF-ON = “=AF-ON”, which copies whatever the camera AF-ON has.

I don’t assign the “Fn1” button for focus, because I think it’s awkward to press it while my index finger is on the shutter release. More acrobatic users may not have this same issue. For me, I only want to use either my thumb or my middle finger to activate focus. Unfortunately, the Sub-selector button is squirrelly, and I have to use the focus-selector lock lever to prevent the joy-stick from moving to different focus points instead of acting like an AF-ON button. To assign these buttons on the D500, you go to the “Custom Settings” (pencil) menu, “f Controls”, and then “Custom Control Assignment”. For each of the desired buttons, you select “AF-area mode + AF-ON”. Each button sub-menu under this option lets you select “Single-Point”, “Dynamic-area AF” (D25, D72, or D153), “Group-Area AF”, or “Auto-Area AF”.

Auto-focus options for button assignment

Note that not every auto-focus option is available to these button assignments (e.g. 3-D tracking isn’t there). Because this is Nikon, different camera models offer a different set of AF assignment options. The Nikon D5, for instance, has “D9” available, but the D500 starts at “D25”. When you press your assigned button, the viewfinder will instantly change to show you the corresponding active focus-point pattern. This way, you get visual confirmation that you are using the focus mode that you intended.

Single-Point AF viewfinder view (center point selected)

25-Point Dynamic Area AF viewfinder view

Group-Area AF viewfinder view

The point I want to make here is that your choices aren’t set in stone. You can experiment with different modes assigned to different buttons until you feel comfortable with them. Don’t go overboard with changing the assignments all the time, however; it will totally mess up your muscle-memory. Once you get used to using different buttons to get different focus modes, you’ll wonder how you ever got by without them. 
You can now react nearly instantly to changing conditions and get those shots that you used to miss. This is one of my absolute favorite things about using my D500. #howto
- High-speed Lens Focus Shift Explained
I really love shooting with high-speed lenses, like my Nikkor 85 mm f/1.4 AF-S. In some ways, these lenses are like finicky race horses; they aren’t always as well-behaved as you’d like. The Nikkor 85 mm f/1.4 is legendary for its beautiful out-of-focus rendering (bokeh), combined with being sharp at the focus plane. This beautiful bokeh is achieved by a lens design that avoids the use of any aspherical lens elements. The price paid for this bokeh is an effect called “focus shift”, caused by spherical aberration. The outer portions of the lens will focus the rays of light a little differently from the inner portions. If you stop the lens aperture down, those outer light rays are cut off, and don’t contribute to the image. The spherical aberration effect results in the best-focus plane being located at what’s called the “circle of least confusion”. As you stop down the lens, the “circle of least confusion” shifts, until it stops shifting at typically f/4 or so. If Nikon engineers had used aspherical lens elements in their design, they could have virtually eliminated any spherical aberration. This would have given the lens even higher resolution at large lens apertures (with virtually no improvements at smaller apertures). All designs involve trade-offs, though. Lenses with aspherical lens elements tend to have worse bokeh, all else being equal. You start to notice that out-of-focus blobs look like sliced onions, with concentric rings of light-dark patterns. Lights that are visible in the background at night really emphasize this effect. The outer edges of light blobs should gradually melt into the background; they shouldn’t show a ring of light around the edge of the blob. This aspherical tendency is only a generalization, however. As computer modeling gets better, lens bokeh is getting better with lenses having aspherical elements. 
Sigma, for instance, has a single aspherical element in their 85 mm f/1.4 Art lens; its bokeh can’t compete with the Nikkor, in my opinion, but I can’t say it has ugly bokeh, either.

Circle of least confusion with spherical aberration

The diagram above lets you visualize what happens as you change the lens aperture. The plane of best focus is located at the “circle of least confusion”, where the light rays get focused into the narrowest bundle. This light bundle always has a non-zero diameter, but is narrowest at around f/4.0 on the Nikkor 85 mm f/1.4 AF-S. I got the diagram above (I added the labels and arrows) from this site. Many thanks to this organization for making a great graphic depiction of the “circle of least confusion”. The circle of least confusion travels from left-to-right in the diagram as you stop the lens down. With a small aperture, the light rays near the outer portions of the lens get cut off, and the remaining rays (which are consistently focused at a point) now predominate. At a small-enough aperture, focus shift stops. If you keep stopping the lens down, then diffraction starts to take over. The light ray bundle starts to expand again, although it no longer shifts. Resolution starts to degrade in proportion to the expansion of the light bundle. Note that spherical aberration is a result of lens design, and doesn’t correspond to manufacturing variation. You won’t find a lens copy that eliminates spherical aberration, so you can stop looking for one.

Nikkor 85mm f/1.4 AF-S lens elements

The picture above is from the official Nikon web site, showing the lens elements, which are pure “spherical” shapes. Spherical lens elements have a constant radius on each surface, which makes them much easier to grind than an aspherical surface. The constant radius translates into smooth out-of-focus backgrounds. When I do focus fine-tuning on my camera, I have to note the aperture that corresponds to each fine-tune value (from wide-open until about f/4.0). 
Unless you shoot using contrast-detect (live view), you’ll need to change the fine-tune value to match the aperture, or else your pictures will be slightly out of focus. Phase-detect auto-focus uses the lens at its widest aperture, which is why you get the focus error. Contrast-detect uses the shooting aperture, which is why you don’t get any focus error in that mode. Some sample calibrations for my cameras look like this:

D7000: f/1.4 = tune +1, f/4.0 = tune -4
D7100: f/1.4 = tune +12, f/4.0 = tune +8
D500: f/1.4 = tune +3, f/4.0 = tune 0
D610: f/1.4 = tune +7, f/4.0 = tune +2

The focus shift consistently moves away from the camera as the aperture closes, so the fine-tune value needs to be decreased to compensate as the lens is stopped down. After f/4.0, the focus shift is no longer noticeable.

Focus calibration chart, f/1.4, left side rotated further away

Focus chart, f/4.0 showing focus shifts further to the left (away from camera)

You can see in the focus charts above how the plane of focus shifted to the left (away from the camera) when stopping down. To fix this, the focus fine-tune would need to change from [+7 at f/1.4] to [+2 at f/4.0] for this camera (to shift focus toward the camera). The focus chart was rotated to 45 degrees, with its left side further away from the camera. I used the MTF Mapper program to analyze the photos of the focus chart. It makes it really easy to locate where the focus plane is. It also lets me know how much sharper the lens is when I stop down the aperture! In case you thought of this, I tried to press the “depth of field preview” (Pv) button while I focused. The theory here is that the lens would be stopped down for the phase detect, to eliminate focus error. Unfortunately, the camera refuses to focus while the “Pv” button is pressed. Oh, well. I try to pay as much attention to the backgrounds as I give to the main subject in photographs. 
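For apertures between the two calibrated endpoints, one simple approach is to interpolate. This Python sketch is my own scheme (linear in stops between the wide-open and f/4.0 calibrations), not anything the camera does for you, and the D610 numbers plugged in are the calibration values listed above:

```python
import math

def finetune_for_aperture(f_number, cal_wide, cal_f4, wide_open=1.4):
    # Linearly interpolate the AF fine-tune value between the calibrated
    # wide-open and f/4.0 apertures, working in stops (log2 of N squared).
    # Past f/4.0 the focus shift has stopped, so clamp to the f/4.0 value.
    stops = lambda n: math.log2(n ** 2)
    t = (stops(min(f_number, 4.0)) - stops(wide_open)) / (stops(4.0) - stops(wide_open))
    t = max(0.0, min(1.0, t))
    return round(cal_wide + t * (cal_f4 - cal_wide))

# D610 calibration: +7 at f/1.4, +2 at f/4.0
print(finetune_for_aperture(1.4, 7, 2))   # 7
print(finetune_for_aperture(2.8, 7, 2))   # 4
print(finetune_for_aperture(4.0, 7, 2))   # 2
```

Whether the shift is actually linear in stops on a given lens is something you’d have to verify against your own focus-chart tests; treat the in-between values as starting points.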
Bokeh is really, really important to me. That’s why I love my 85 mm lens so much that I’m willing to put up with its annoying focus shift. If only this lens had vibration reduction… #howto
- Coolpix B500 40X Super-Zoom Camera and Lens Review
I was recently lent a Nikon Coolpix B500, which has a 22.5 mm to 900 mm zoom (35 mm format-equivalent focal length). The lens is actually 4 mm f/3.0 to 160 mm f/6.5 and has a macro mode, as well. I haven’t ever seen a detailed resolution analysis of a super-zoom like this, so I thought I’d take up the job myself. I won’t dwell on the camera features too much, but I can’t help myself from at least making a few observations about the camera body as well.

Camera Body Highlights

The camera itself is purely amateur; it doesn’t support raw format (just jpeg) or even manual exposure control. Its 16-megapixel sensor is breathtakingly small: 4.62 mm X 6.16 mm, with 1.34-micron pixels. The pixel count is 4612 X 3468. A modern smart phone sensor actually has bigger pixels than this (typically 1.4 micron). But the camera costs around $300.00, which is cheaper than those smart phones. I’m a big fan of lenses that provide great out-of-focus backgrounds (bokeh). Because of the tiny sensor in this camera, depth of focus is huge until you get to long focal lengths. If you can manage to get the background out of focus, the results are actually quite pleasing. For those of you that are interested, it has both Wi-fi and Bluetooth, and supports SnapBridge. You can do the usual remote control from your smart phone, if you wish. It uses 4 AA batteries (I used rechargeables). The camera, incredibly, is capable of shooting 7.7 frames per second in “sports” mode at full resolution. It can only shoot 7 frames at this rate, however, so its buffer is tiny. Given the lack of a viewfinder, however, you’re pretty much out of luck tracking any action if you zoom in to any significant degree. You can shoot HD video, of course. I’m personally not very interested in video, so I won’t discuss it any further. The ISO range goes from 80 to 3200, but image quality would hover near zero at ISO 3200. The shutter range is from 1 second to 1/5000 second (faster than my D610!). 
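The FX-equivalent focal lengths follow directly from the crop factor, which is just the ratio of sensor diagonals. A quick Python check using the sensor dimensions above:

```python
import math

def crop_factor(sensor_w_mm, sensor_h_mm):
    # Ratio of the FX (36 x 24 mm) diagonal to this sensor's diagonal.
    fx_diag = math.hypot(36, 24)
    return fx_diag / math.hypot(sensor_w_mm, sensor_h_mm)

cf = crop_factor(6.16, 4.62)
print(f"crop factor {cf:.2f}")                        # ~5.62
print(f"4 mm -> {4 * cf:.0f} mm FX-equivalent")       # ~22 mm
print(f"160 mm -> {160 * cf:.0f} mm FX-equivalent")   # ~899 mm
```

That lines up with the 22.5 mm to 900 mm equivalents Nikon quotes, within rounding.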
This camera is very, very poor at focusing in dim light and low contrast; it gets ridiculously bad if you get anywhere near maximum zoom. This is probably my biggest gripe about the camera. The camera grip is really, really nice; it’s deep and has a very tactile rubberized surface. The camera is very light; a bit too light for my taste. I’m used to the weight of ‘real’ cameras, like the D500 and D610, and I even use battery grips with those. I prefer the inertia and balance that DX and FX camera bodies provide (but I might change my tune at the end of a ten-mile hike). The camera’s lens cannot be manually focused or zoomed; you focus with the traditional half-press of the shutter button, and you zoom either by rotating the shutter collar or pressing a lever on the side of the lens. There is no viewfinder, either; only a 3-inch, 921k LCD screen. Good luck finding and tracking a subject in the sunshine. Its big brother, the B700, costs about 50% more; it sports a viewfinder and the usual PSAM controls, plus a 60X zoom. For about $90.00 you can get a Hoodman Loupe to cover and view the LCD screen in sun (and you can use it on your other camera screens in Live View). Not the most convenient solution, but it works.

B500 at 160 mm zoom

B500 top view. Really deep grip.

B500 articulating 921k 3-inch LCD. No viewfinder.

The Lens

The lens has a minimum focus distance of 12 inches at 4 mm and a minimum focus of about 11 feet at 160 mm. In macro mode (only at 4 mm) it will focus down to about 0.4 inches! The lens has vibration-reduction (which can be turned off for tripod use), and it works amazingly well. There are no filter threads, and there is no lens hood, either. You’ll need to shade the lens with your hand. It’s theoretically easier to design a lens that only has a small image circle, and this camera sensor only needs a really small lens image circle to cover it. 
I think you'll agree that this design theory is borne out with this camera/lens combination when you see the finished results. The focus speed is pretty lazy. The zoom is painfully slow, mushy, and approximate. Once the focus and zoom get there, though, the shot usually turns out just fine. I realize that I have been spoiled with high-performance cameras and lenses, but my patience was sorely tested at longer focal lengths and in anything other than bright sunlight. I just have to keep telling myself how inexpensive this rig is.

Close Focus

A quarter with the macro setting (4 mm)

They weren't kidding about the 0.4 inches lens-to-subject distance in macro mode. Lighting at this distance is truly a nightmare. I had to direct a light beam in at a ridiculously steep angle for the above shot. You can get as snarky as you want about the lighting here; I just wanted to see how close I could get. I'd have to call macro mode largely a hoax. If you can't illuminate it, you can't photograph it. And it's not really magnified that much, either. A bug with any sense of self-preservation would be long gone.

Flare resistance is pretty good. 5 mm f/6.4

Flare and Chromatic Aberration

Take a look at this sample photo. The lens showed remarkable resistance to flare, even though I pointed it right into the sun. Impressive.

160 mm (900 mm FX) f/6.5 chromatic aberration

Since I can't shoot raw format, I can't tell you whether the low level of color fringing is due to a great lens design or to in-camera processing. There's a little purple fringing in the bottom left corner, but not too bad. Shots like this really emphasize any lateral chromatic aberration.

People really obsess about the 900 mm FX-equivalent maximum zoom, but I was more impressed with the 4 mm (22.5 mm FX-equivalent) end of the zoom. This lens goes really wide (77.3 degrees horizontal), compared to typical kit zooms that only reach 27 mm FX-equivalent (67.4 degrees horizontal).
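Those angle-of-view figures follow directly from the FX-equivalent focal length and the 36 mm width of an FX frame. A quick Python sanity check, using only values quoted above:

```python
import math

def horizontal_fov(fx_equiv_mm, frame_width_mm=36.0):
    """Horizontal angle of view for an FX-equivalent focal length
    (thin-lens geometry, focused at infinity)."""
    return 2 * math.degrees(math.atan(frame_width_mm / 2 / fx_equiv_mm))

print(f"{horizontal_fov(22.5):.1f} degrees")   # B500 wide end: 77.3
print(f"{horizontal_fov(27.0):.1f} degrees")   # typical kit zoom: 67.4
```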
The more expensive B700 lens doesn't go as wide as this lens does (73.7 degrees horizontal); I'd much rather have this wider-angle ability than a longer zoom. Throughout the entire zoom range, there is essentially zero distortion! The pictures below demonstrate this. The main weakness of this lens is meridional-direction resolution, which approaches what I'd describe as shameful at longer focal lengths; it's actually not too bad at the wide end.

That 40X Zoom

Wide 4 mm. You can barely see the buildings

Telephoto 160 mm

I added an arrow in the 4 mm shot to show where I zoomed in. There was a fair amount of atmospheric haze, so don't mistake that for a flare or lens-contrast problem. Now that's a zoom.

Lens Resolution

The following resolution analysis was done using jpeg files with default sharpening. Had it been available, I would have shot raw and performed the analysis without any sharpening. I use the (free) MTFmapper program, which I describe here. I have a 41" X 60" resolution chart to analyze the lens, which I used at every focal length except 160 mm. I would have had to be nearly 200 feet away from the chart at that 900 mm-equivalent focal length to photograph the whole thing; instead I used a small chart that's only 7" X 10", at about 35 feet.

41" X 60" resolution chart, 4 mm (22.5 mm FX equivalent)

Note in the shot of the resolution chart that there isn't any observable distortion, judging by the chart edges. I didn't note distortion (or perceptible vignetting) at any focal length.

Please be aware in the following measurements that my usual results, measured in "MTF50 line pairs per millimeter", can be highly misleading. I'm accustomed to seeing a good lens peak at an MTF50 of 40 to 50 lp/mm; seeing measurements around 300 seems astonishing. But here's the deal: the camera sensor has a lot fewer millimeters in it, so the numbers are much less impressive than they sound at first blush.
A better unit of measurement for resolution in this case is "line pairs per picture height", which gives you the total available resolution in the picture. I have decided to provide 4 different resolution measurement units, so that you can take your pick: "cycles per pixel", "MTF50 lp/mm", "line pairs per picture height", and "lines per picture height". Besides the 2-D plots across the camera sensor, I looked at the low-level data to provide the peak center and corner resolution measurements.

Another fact to note: the "MTF10/30" contrast plots I have included are based on jpeg files with in-camera processing. These plots should really be based upon un-sharpened raw files to be directly comparable to other lenses; since I can't shoot raw with this camera, I wasn't given a choice. The contrast numbers look too good to be true, and they are. The plots are at least useful for comparing center-to-edge differences and meridional-versus-sagittal performance.

In the MTF50 resolution plots that follow, "red" is good and "blue" is bad. Unlike most other web sites, these plots show nearly 100% of the camera sensor's field of view. This lets you evaluate lens resolution across the whole field of view instead of just a slice or a single point. Recall that this camera sensor is about 4.6 mm X 6 mm. The resolution measurements were made with the lens wide open. Stopping down the aperture will make all of the readings even better. I turned the lens vibration reduction off for these shots, and I of course used a large tripod.

MTF50 results at 4 mm f/3.0

The MTF50 lp/mm resolution measurements at 4 mm show how much better the lens is in the sagittal direction (think spokes of a wheel) than the meridional direction.

4 mm     cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
Center   0.49           368           1699    3399
Corner   0.32           240           1110    2220

4 mm f/3.0 MTF10/30 chart

The MTF10/30 chart is misleading, since it's based upon a jpeg image with default sharpening.
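For readers who want to convert between the four units in these tables, the relationships are simple: divide cycles/pixel by the pixel pitch to get lp/mm, multiply by the sensor height for lp/ph, and double that for l/ph. A minimal Python sketch (sensor figures from earlier in the review; the small differences from the table values come from rounding):

```python
# Relationships between the four resolution units used in the tables.
# Sensor figures are the B500 numbers quoted earlier in the review.

SENSOR_W_MM = 6.16
SENSOR_H_MM = 4.62
PIXELS_W = 4612

PITCH_MM = SENSOR_W_MM / PIXELS_W      # ~0.00134 mm per pixel

def expand_units(cycles_per_pixel):
    """Convert MTF50 in cycles/pixel into lp/mm, lp/ph, and l/ph."""
    lp_mm = cycles_per_pixel / PITCH_MM        # line pairs per millimeter
    lp_ph = lp_mm * SENSOR_H_MM                # line pairs per picture height
    l_ph = 2 * lp_ph                           # lines per picture height
    return round(lp_mm), round(lp_ph), round(l_ph)

# Center figure from the 4 mm table: 0.49 cycles/pixel
print(expand_units(0.49))   # (367, 1695, 3390); the table says 368/1699/3399
```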
Again, this makes the resolution and contrast look better than they really are when compared to raw, unsharpened shots. I don't have a choice here, however.

The lens meridional direction is consistently worse than the sagittal direction at all focal lengths. When the sagittal and meridional resolution differ by a significant amount, you get astigmatism. You will note that astigmatism starts to become a problem at about 2/3 of the way from the lens center when zoomed to 4 mm.

4 mm corner detail (cycles per pixel on each edge)

You can see a huge quality difference when comparing the edges that point toward the image center (sagittal) with the meridional edges. This is why it's important to analyze the edge directions separately. Notice in the picture above the minimal chromatic aberration, which gets emphasized in a high-contrast shot like this.

MTF50 results at 17.6 mm (99 mm equivalent) f/4.6

17.6 mm   cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
Center    0.38           285           1318    2636
Corner    0.33           248           1144    2289

17.6 mm f/4.6 MTF10/30 chart

At 17.6 mm (99 mm equivalent FX) the resolution is just plain spectacular. Astigmatism is very well controlled.

MTF50 results at 35.9 mm (202 mm equivalent) f/5.4

35.9 mm   cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
Center    0.41           308           1422    2844
Corner    0.34           255           1179    2358

35.9 mm f/5.4 MTF10/30 chart

Very, very good resolution at 35.9 mm.

MTF50 results at 52 mm (294 mm equivalent) f/5.7

52 mm    cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
Center   0.41           308           1422    2844
Corner   0.34           255           1179    2358

52.2 mm f/5.7 MTF10/30 chart

Meridional resolution is taking a nosedive in the corners here, but sagittal resolution is excellent.

MTF50 results at 70 mm (394 mm equivalent) f/5.9

70 mm    cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
Center   0.40           300           1387    2774
Corner   0.31           233           1075    2150

70.0 mm f/5.9 MTF10/30 chart

Again, meridional performance in the corners is pretty bad, but the sagittal performance is great.
MTF50 results at 160 mm f/6.5

160 mm   cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
Center   0.27           203           936     1873
Corner   0.21           158           728     1457

160 mm f/6.5 MTF10/30 chart

Performance takes a giant hit at maximum zoom. Be that as it may, check out the shot below of the moon at 160 mm; it may not compete with 'pro' monster lenses, but it still looks pretty good.

Samples

160 mm (900 mm equivalent) f/6.5 1/1000s ISO 125, un-cropped

Rufous hummer detail crop

The default in-camera noise reduction slightly smears fine details, even at low ISO and in bright light. The majority of the 'smearing', however, is probably the meridional-direction weakness in the lens.

160 mm f/6.5 1/250s ISO 125, slightly cropped.

The moon shot was hand-held at maximum zoom. Kudos to the VR system in this lens! Considering the maximum zoom being used and that I hand-held the camera, these results are nothing short of fantastic.

5 mm f/3.2 1/125s ISO 125

Summary

There are certainly limitations with the B500 camera, but it is capable of very high quality photographs. Keep in mind that this camera/lens combination costs less than many DSLR kit lenses! It's hard to draw any blanket conclusion on this camera. It has a weird mix of really nice and really irritating features. Personally, I require more direct control (P, A, S, M) than this camera provides, and I really missed raw-format files. The lens, however, exceeded my expectations; with a bit of coaxing, it's possible to take really nice pictures with the B500.

#review
- Remote Camera Control Using digiCamControl
When you control your camera via a cable to your computer, it's called tethered capture. The two most popular communications cables are Ethernet and USB. This article reviews a tethering program called digiCamControl, which uses USB for remote control. I was also going to review tethered capture using Lightroom, but that program is so pathetic at remote control that I decided not to bother.

Most people would use a program like this in a studio environment, but it can also be very useful for situations such as shooting from a blind. It's possible, using USB hubs, to 'daisy chain' USB cables together and extend the cable length. I want to concentrate on this program's ability to perform automatic focus-stacking, which I'll cover in some detail.

The (free) digiCamControl program has a full complement of camera controls, live view, automatic photo transfer to your computer, exposure bracketing, time lapse (movies), web-server remote control (e.g. from your smartphone), focus-stacking, multiple screen control, and motion detection. This is a Windows program, and it supports most Canon, Sony, and Nikon DSLR/mirrorless cameras (about 100 models). I'm using it with a Nikon D500 and Windows 10. It can be downloaded from here:

There is some online documentation, but I don't consider it very thorough.

digiCamControl startup screen after turning camera on

When you first start up digiCamControl and turn on your camera, you'll see something like the screen above. You could start taking pictures by clicking the "aperture" icon in the top-left, but you don't necessarily know if the camera is in focus yet. The program seems pretty forgiving about turning your camera on before or after starting digiCamControl. The Nikon camera LCD panel shows "PC" to indicate the camera is connected to the computer (via the USB port).
digiCamControl main screen after "Capture" click

The main reason to use this program is to see a big, beautiful live image on your computer screen, so I go straight to the "Live View" screen by clicking the "Lv" button. Don't invoke live view from your camera. You can still use your camera's shutter release even while connected to the computer. You have full control over exposure, white balance, ISO, and focus from the "Live View" screen on your computer.

Download from camera memory card

You can separately download pictures from your camera memory card (the main screen "download" button), which will give you thumbnail views of what's on your camera.

Live View screen in digiCamControl

The camera LCD doesn't display its live view, so it doesn't get hot or drain the batteries showing you what's already on your computer screen. Your "live view" appears on your computer's screen, where you want it, after you click on the "Lv" button. You get a histogram while in live view, too. Click with the left mouse button on the live view screen where you want the focus point located, and then click on the "Autofocus" button to focus there. You should see a green square around the focus point location.

By default, your pictures will go directly to your computer, to a folder such as C:\Users\Ed\Pictures\digiCamControl\Session1. You can assign the session name where you want the photos to go. Since they go to your computer, you're essentially unlimited for storage. There is pretty significant battery drain while remotely controlling your camera; it's handy that the battery level is displayed on the main screen. Use a battery grip to make battery drain less of an issue.

Live View Exposure Controls

The screen shot above shows how you can set your camera controls from the Live View "Control" dialog.
Motion Detection and Intervalometer

Motion trigger and intervalometer

You can trigger your camera via motion detection from Live View, as well as take a series of timed shots (intervalometer).

Focus Stacking

When you want to stack photos to increase depth of field (even in a landscape shot), you may just fall in love with this program feature. digiCamControl gives you great control over how to configure the near limit, far limit, and number of shots in between for stacking photos.

Session setup

Start by clicking the "Session" menu option from the main screen, then "Add new session". Fill in a session name and browse to the folder where you want to save the focus stack photos. A session lets you organize your shots into logical groups.

Once you click the "Lv" button from the main menu, you enter "Live View". You may need to "maximize" the live-view screen to see it. Click on the live view screen with your mouse where you want the "near focus" focus point to be (a green square). Click on the "Auto focus" button to focus on that spot.

Advanced Focus Stack dialog

Click the (screen top) "Preview" button and then use the mouse wheel to zoom in on the focused spot to verify critical "near" focus. Click on the Preview screen "X" to close the "preview" dialog and return to live view. Refocus if necessary. Use the bottom-center buttons to rough-focus (the "<<<", "<<", "<", ">", ">>", ">>>" buttons), where more arrows mean larger focus steps. Click on the left-hand "lock" button at the bottom of the screen to prevent the near focus from changing any more. Use the same arrow controls to obtain the "far" focus desired, and then click on the right-hand "lock" button at the bottom of the screen to prevent the far focus from changing any more. Now that the focus range is locked in, make sure that you expand the "Focus Stacking Advanced" section on the left edge of the Live View screen.
Enter the desired number of photos, the focus step size (start with around '30'), and the wait time between photos. Click the "Preview" button, just below the "Focus Stacking Advanced" controls. This will let you automatically run through all of the focus steps before actually taking the photos (near-to-far), showing the count as it steps through your requested number of shots. If it looks good, then click on "Start" to let the photo sequence get captured to your computer.

Although the digiCamControl program includes the "enfuse" plugin and can discover other plugins (my CombineZP, for instance), I got errors when I tried to use the plugins. Personally, I use the free stand-alone CombineZP program directly for my focus stacking.

The screen shot above shows the Live View screen while setting up the "Focus Stacking Advanced" controls. The screen shot was taken after locking in the "far focus" position; you can see how the near focus looks very fuzzy.

After letting digiCamControl drive the camera to take the 9 requested shots and save them to my requested computer folder, I ran the CombineZP program (very similar to the older CombineZM) to stack them into a single shot, as shown below. If you're interested, I wrote an article on focus-stacking here:

Stacked result from 9 shots: CombineZP "soft stack"

I might mention that if you haven't used focus-stacking software before, you need to make your photo framing a little wider than what you want for the finished shot. The photo edges will show some unwanted artifacts that are related to the shifting focus in each shot. Note that the bottom of the photo above shows a "mirror image" that should get cropped off in your editing software.

Conclusion

There is plenty to explore in this tethering program. Many of its features, such as exposure bracketing, can be done more easily in-camera. Focus-stacking, however, is what I consider a real forte of this program.

#howto
- How to Measure Lens Vignetting
How dark are those corners in your photos, by the numbers? How much do you have to stop down a lens to lighten the corners? You can get the answers for yourself using a variety of image editors. The only special equipment you probably need is a grey card or a subject with neutral tones and even illumination. If you want to explore how vignetting changes from close-focus to infinity, you can photograph the clear blue sky as a target (unless your lens is a super-wide). I'll show you how to measure RGB values in three different image editors.

Capture NX-D example to get RGB values (resolution chart photo)

In Capture NX-D, the RGB values under the mouse cursor are shown along the bottom edge of the window.

Zoner Photo Studio Pro shows RGB and cursor X,Y

The Editor in Zoner Photo Studio Pro displays both the cursor location and the RGB values while using the "magnifying glass" cursor, for instance.

Photoshop example using a grey card target

Shown above, you can use Photoshop to sample locations of interest from a photo of a grey card. Here, I selected a point near the center and a point in a corner, and used the "color sampler" to get the RGB values. Make sure you have correct white balance, so that the R, G, and B values match (or are at least close) when using your grey card.

The selected central point in the example has an average RGB of 158, while the corner point has an average of 79. You might assume that values half as big mean a one-stop difference between the center and the corner. But life isn't quite that simple: the RGB values are non-linear with respect to brightness.

The use of a grey card makes viewing the lighting distribution across a photo much simpler. Since the relative RGB values should be pretty close to the same at any selected location on a neutral grey card, the overall evaluation of vignetting is just easier (R=G=B in daylight with proper white balance).
My resolution charts also work well for this purpose. The grey card analysis above was done using the Nikkor 18-140 f/3.5-5.6 wide open at f/5.6 and 140 mm; this is pretty much the lens at its worst, and it happens to be the worst lens for vignetting that I own. Stopping down quickly minimizes whatever vignetting there is, by the way.

So, how do we use these RGB values to get F-stop values? As I mentioned above, the RGB values don't relate in a very straightforward way to F-stops. One way to solve the problem is to set your camera on 'manual' exposure and take a set of raw-format pictures of a grey card. Start with a photo that's about 3 stops over-exposed, and then change your exposure (aperture or shutter) by a third of a stop for each following shot. Keep this up until your last shot is at least 3 stops under-exposed (a total of 19 shots). For Ansel Adams fans, this covers Zone VIII through Zone II.

In your photo editor, you can then read the RGB values of each shot to note the progression. If any shot has a "255" RGB reading, then you've got a blown-out photo and you won't be able to use it. Few lenses have more than 3 stops of vignetting, so this range should cover you. With this exposure shot collection, you should be able to use these RGB values for more than just vignette analysis: your library of 1/3-stop photos and their RGB values will let you later analyze any photo where you want to examine brightness and contrast ranges in terms of F-stops.

For my own tests, I changed the shutter by third-stop values all the way from +3 stops through -5 stops (Zone VIII through Zone 0). I used the "ExifTool" program explained here to get the "Light Value" (or "EV") of each shot, giving a list of decimal numbers for easy math with the "stops". I then used a photo editor to get the RGB for each shot (it varies a little bit within each shot, so I noted a typical value, where R=G=B). Notice that the Zone VIII shot is close to the 255 maximum.
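Once the calibration list is in hand, converting any RGB reading back to EV is just a linear interpolation between the two neighboring entries. A minimal Python sketch using my measured (RGB, EV) pairs reproduces the center/corner example:

```python
# Interpolate EV from an RGB reading using the measured (RGB, EV)
# calibration pairs, then express the center-versus-corner vignetting
# in stops. The pairs below are my third-stop grey-card measurements.

TABLE = [(250, 2.7), (239, 3.0), (234, 3.3), (225, 3.6), (215, 4.0),
         (200, 4.3), (180, 4.6), (160, 5.0), (140, 5.3), (125, 5.7),
         (105, 5.9), (85, 6.3), (70, 6.6), (60, 6.9), (44, 7.3),
         (40, 7.6), (37, 7.9), (33, 8.3), (28, 8.6), (24, 9.0),
         (22, 9.3), (21, 9.6), (19, 10.0), (15, 10.3), (13, 10.6)]

def rgb_to_ev(rgb):
    """Linearly interpolate EV between the two bracketing table rows."""
    for (r_hi, ev_hi), (r_lo, ev_lo) in zip(TABLE, TABLE[1:]):
        if r_lo <= rgb <= r_hi:
            frac = (r_hi - rgb) / (r_hi - r_lo)
            return ev_hi + frac * (ev_lo - ev_hi)
    raise ValueError("RGB reading outside the calibrated range")

# Grey-card readings from the Photoshop example: center 158, corner 79
stops = rgb_to_ev(79) - rgb_to_ev(158)
print(f"vignetting: {stops:.1f} stops")   # prints "vignetting: 1.4 stops"
```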
Stops   Zone   EV     RGB
+3.0    VIII   2.7    250
+2.7           3.0    239
+2.3           3.3    234
+2.0    VII    3.6    225
+1.7           4.0    215
+1.3           4.3    200
+1.0    VI     4.6    180
+0.7           5.0    160
+0.3           5.3    140
 0      V      5.7    125
-0.3           5.9    105
-0.7           6.3    85
-1.0    IV     6.6    70
-1.3           6.9    60
-1.7           7.3    44
-2.0    III    7.6    40
-2.3           7.9    37
-2.7           8.3    33
-3.0    II     8.6    28
-3.3           9.0    24
-3.7           9.3    22
-4.0    I      9.6    21
-4.3           10.0   19
-4.7           10.3   15
-5.0    0      10.6   13

Given the EV-RGB list above, let's get back to the lens vignetting problem. The lens center measured about 158 (R), and the corner was about 79. This relates to roughly EV 5.0 and EV 6.4, for a difference of about 1.4 stops. I noticed that the DxOMark site rated this lens's vignetting at "1.2 stops". Pretty close.

Exposure Value versus RGB value

You can see how non-linear the RGB values are compared to the EV. It's well known that the numeric separation is large in bright areas and small in dark areas.

Conclusion

I tried the experiment above on a couple of different computers and in three different image editors; the results were in close agreement each time. You could probably just use the results of my experiment directly for your own lens vignette analysis.

#howto