
  • Panoramas Using Raw Format with Lightroom and HDR Efex Pro 2

    To get the best quality panoramas, there’s more to consider than just how well your pictures are stitched together. You want to stick with RAW format for as many of your editing steps as possible. If you use Lightroom 6 or newer and install the Nik HDR Efex Pro 2 plug-in, you can make maximum-quality panoramas and also have a large tool set for creativity.

    Update 8-8-2020: The plug-ins from Nik aren’t free anymore. You can get the latest plug-ins from DxO.

    When shooting the pictures, try to keep about a 50% overlap between shots. Don’t forget to try vertical format shooting for a slightly taller panorama. Lightroom is also capable of multi-row panorama stitching. It’s best to shoot your pictures with a single manual exposure setting, so that the frames will match up better. If you’re careful, you can even get by without using a tripod (I use the viewfinder grid lines as alignment guides).

    HDR panorama using Lightroom and HDR Efex Pro 2

    Select the RAW photos to stitch into your panorama

    Before beginning panorama creation, you may wish to perform any “lens correction” steps on the individual shots, since Lightroom can still recognize the lens data at this point. The first panorama creation step, after you import your pictures into Lightroom, is to select the range of pictures to stitch together (click the first shot, then Shift-click the last shot in the range).

    Merge your shots into a panorama

    Next, click Photo | Photo Merge | Panorama… You’ll get a dialog box that lets you choose between Spherical, Cylindrical, or Perspective projection. Click each selection to decide which projection type looks best for your panorama. Select “Auto Crop” to clean up the frame edges. Click “Merge” when you’re satisfied with the projection type.

    Select your new panorama

    After the panorama is stitched, you’ll need to select it. Note that the panorama is saved in DNG raw format, which gives you maximum flexibility for further editing enhancements. “DNG” is the Adobe “digital negative” format that is very close to raw. The light range and color range (bit depth) are maintained, allowing for maximum highlight recovery, shadow recovery, and “tone mapping”. I tend to lump HDR and tone mapping together, but many photographers consider single-shot manipulations to be “tone mapping”, while multiple overlapping shots at different exposures are required to be considered HDR (high dynamic range).

    Now would be a good time to do the usual sharpening, noise reduction, color balancing, highlight and shadow manipulations, etc. Because the panorama is in DNG format, you have maximum flexibility for editing at this stage. For many panoramas, you might be ready to simply export at this point into the finished file format, such as jpeg. Or maybe you’re ready to try the Nik HDR Efex Pro 2 plug-in.

    Assuming you want to try HDR and you’ve installed the Nik plug-in collection from Google, you’ll next need to select File | Export with Preset | HDR Efex Pro 2. Show a little patience here; it will take a while before Nik is ready. As an aside, Google no longer supports the Nik plug-in collection. As of this writing, though, it’s still available for free download from their web site here. I use the Nik plug-ins in Photoshop, Lightroom, and Zoner Pro.

    Try out some HDR selections in HDR Efex Pro 2

    Use Zoom and Navigator to inspect HDR shot details

    HDR Efex Pro 2 output file format selection

    Try out the various canned options and fine-tune controls in HDR Efex Pro 2.
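    As an aside on the tone-mapping idea mentioned above: if you’re curious what a global tone-mapping operator actually does to pixel values, here is a minimal sketch of Reinhard’s classic L/(1+L) curve. This is only an illustration of the concept, not what HDR Efex Pro 2 does internally; it assumes numpy and imageio are installed, and "pano.tif" is a placeholder file name.

        import imageio.v3 as iio
        import numpy as np

        # Load an image and normalize to 0..1
        img = iio.imread("pano.tif").astype(np.float32)
        img /= img.max()

        # Reinhard global operator: compresses highlights, lifts mid-tones
        luminance = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
        mapped = luminance / (1.0 + luminance)
        scale = np.where(luminance > 0, mapped / luminance, 0.0)

        out = np.clip(img * scale[..., None], 0.0, 1.0)
        iio.imwrite("pano_tonemapped.jpg", (out * 255).astype(np.uint8))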
    Back in HDR Efex Pro 2, don’t forget to use the Zoom/Navigator controls to inspect details. Most people either love or hate HDR; I fall into the love category. When you’re happy with the HDR effect, click Save to return to Lightroom. HDR Efex Pro 2 won’t let you save your picture in raw format, so you should finish your other editing steps in Lightroom before the conversion to HDR. Only jpg or tiff formats are available for output. #howto

  • The Brenizer Method: Thin Depth of Focus

    Here’s a very specialized kind of post-processing that simulates a lens with an impossibly thin depth of focus, yet a very wide angle. The photographer credited with developing the technique is Ryan Brenizer, who uses it mainly for wedding portraits. This kind of photo is intended to isolate the main subject, and makes it look as if you used something like a 20 mm f/0.2 lens wide open. The look is achieved by making a multi-row panorama while using a fast lens at a wide aperture. The effect might even remind you of something a “LensBaby” might produce. This technique is for when you want a wide shot, yet you want to isolate the subject. I’ll show you the results of using an 85 mm lens at f/1.4.

    You’re supposed to use a tripod to enable good control for aligning each overlapped shot (overlapped by both row and column) in the whole grid of photos. On purpose, I tried to see what would happen if I instead hand-held the camera (a sort of worst-case scenario). As with any panorama, it’s best to overlap the shots by 30 to 50 percent. A multi-row panorama requires that the shots overlap not just side-to-side, but also above and below. You don’t have to take the shots in any particular sequence; you can start with your main subject and expand out from there, too. You might want to count your shots for each row, so that you get more predictable results when they get stitched together. Try to shoot all of the pictures containing your main subject before it (or they) can move. The shots that are out of focus aren’t as critical as far as minor movement is concerned. If your panorama shooting is going to be time-consuming, then do your people subjects a favor and get their shots in the sequence done first.

    I used Lightroom 6.14 to create my multi-row panoramas. You use the same menu options for creating a multi-row panorama that you’d use for a single row; Lightroom just figures out what to do. You can, of course, use any software that handles multi-row panorama stitching. A word to the wise: re-sample your input photos to a lower resolution. When you create a multi-row panorama, the finished picture can be huge and take a really, really long time to stitch. You’ll thank me for this tip. As with any panorama, it’s best to stick with both manual exposure and manual focus; don’t change either while shooting the collection of pictures you’re going to use.

    To physically create the panorama, start by selecting the range of pictures and then click on Photo | Photo Merge | Panorama… If Lightroom is unable to stitch the pictures, you might try one of the other “projection” options before giving up. I have found that “spherical” is the most forgiving. You might find that one projection option gives you very different picture height-to-width proportions compared to the others. All it will cost you is some time to explore the various projection effects.

    It’s often the case that a little re-touching is required, à la the “healing brush”, in the stitched panorama. This is especially true if subjects move a little during the shooting. I'm getting quite fond of using Lightroom for stitching, because I usually have very little cleanup work to do. You might also need to perform a little “distortion removal” if you are close to objects or your panoramas are very wide. Avoid backgrounds that have straight lines, if you want to save yourself a little work.
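    A back-of-the-envelope aside (my own arithmetic, not anything from Brenizer’s write-ups): stitching widens the field of view, which divides the equivalent focal length and the equivalent f-number by roughly the factor by which the panorama widens the frame, since the physical aperture diameter stays the same. A sketch:

        def brenizer_equivalent(focal_mm, f_number, widen_factor):
            """Equivalent focal length and f-number after stitching a panorama
            covering widen_factor times the single-frame field of view
            (small-angle approximation; the entrance pupil stays the same)."""
            return focal_mm / widen_factor, f_number / widen_factor

        # An 85 mm f/1.4 frame widened about 4x by a multi-row grid with overlap
        print(brenizer_equivalent(85, 1.4, 4))   # -> (21.25, 0.35)

    This is where the “20 mm f/0.2 look” claim above comes from: the stitched result frames like an ultra-wide but keeps the blur of the fast 85 mm.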
    I feel compelled to mention that extreme perspective distortion might be exactly the effect you want; some rules beg to be broken.

    Lightroom panorama stitching via the ‘Develop’ module

    I found that Lightroom cropped off several of my hand-held shots, primarily from having ragged row widths. I would have had much better success if I had used a tripod to control shot-to-shot alignment. Not too surprising. The shot above is composed of roughly 3 rows of 5 shots per row, taken in vertical format. I'm glad I tried this hand-held experiment; it convinced me that I don't have to avoid attempting this technique just because I'm somewhere without access to a tripod.

    Razor-thin depth of focus over a wide angle

    Conclusion

    This type of photographic technique probably won’t be useful on a daily basis, but it might be just the ticket for a special portrait. It’s just one more tool that you should be aware of. If you want your shots to stand out from the crowd, give the Brenizer Method a try. #howto

  • Create Your Own Planet

    Here’s a real power trip: make your own worlds! You’ll need a photo editor that lets you map your photo into ‘polar coordinates’. I use Photoshop to get this fun effect, because it has a distortion filter to project the picture pixels into polar coordinates, which is called “mapping”. The ‘planet’ effect won’t work for all subjects; you need to pre-plan what you shoot to help yourself out. I will typically start with shots that I stitch into a panorama, although the effect can work with a single photograph, too. Try to find a subject that has similar characteristics on both the left and right sides, such as the same height of sky and foreground. You will be doing yourself a favor if you use manual exposure for stitched photos, so the light will balance best between the opposite sides of the final merged photo.

    A really crowded planet

    Let’s begin with a panorama that has roughly matching left and right sides. While I shot the original sequence, I was trying to visualize how well the opposite sides of the view would wrap around and touch each other.

    Start with a balanced shot

    I was careful to keep the main subject matter from extending to the top of the frame, because that causes an ugly effect that is difficult to blend after mapping the shot into polar coordinates. I arranged for both the sky and the water in the shot above to be reasonably easy to blend together, making it a good candidate for the planet effect. The left and right sides of the panorama have about the same height of sky and water, and their brightness is similar, too.

    Make the image into a square

    Before you can convert into polar coordinates, the picture needs to be in a square format, so make the image width match the image height. Make sure “Constrain Proportions” isn’t selected.

    Turn the image upside-down

    You will also need to rotate the square image 180 degrees prior to conversion into polar coordinates. If you skipped this step, you would end up with a “tunnel” effect instead of a “planet” effect. You should try leaving the shot un-rotated sometime to see what happens; there may be subjects where a tunnel effect looks good.

    Map the upside-down photo into polar coordinates

    Now you can convert your picture into polar coordinates (from rectangular coordinates). Click Filter | Distort | Polar Coordinates. Make sure the image scale is small enough to preview the conversion effect (click the little “minus” button).

    Smooth the seams and picture frame edges

    Now you’ll need to use the “healing brush” to get the seams and edges of the frame to blend well. There are always some edge spokes you need to tame. You might use the clone stamp here, too. This is where the real effort takes place. It can take a fair amount of finesse to blend the seams to get the shot to look good. Pre-planning your original photos will help minimize how much time and trouble it takes to blend the final picture.

    Rotate 180 degrees to get right-side-up again

    Now rotate the picture to get it back to the original orientation. You can, of course, rotate by any amount and then crop the shot to your desired proportions, too. Truth be told, I did a little extra 'healing brush' work after what's shown above using Zoner Pro. I like its healing brush tool a bit more than the one in Photoshop; I don't think any single photo editor is the best at everything.

    There you have it! Your very own planet. Go easy on this technique; a few planet shots are fun, but turning everything into a planet is a bit much. #howto
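    If you’d rather script the warp than use Photoshop, the sketch below reproduces the same three steps (square resize, 180-degree rotation, rectangular-to-polar mapping) in Python. It assumes numpy and Pillow are installed, uses nearest-neighbor sampling for brevity, and "panorama.jpg" is a placeholder file name.

        import numpy as np
        from PIL import Image

        def little_planet(path, size=2048):
            """Square-resize, rotate 180 degrees, then map rectangular
            to polar coordinates (nearest-neighbor sampling)."""
            img = Image.open(path).resize((size, size))  # square, proportions unconstrained
            img = img.rotate(180)                        # upside-down: ground wraps to the center
            src = np.asarray(img)

            y, x = np.mgrid[0:size, 0:size]              # output pixel grid
            dx, dy = x - size / 2, y - size / 2
            theta = (np.arctan2(dy, dx) + np.pi) / (2 * np.pi)  # angle around center, 0..1
            radius = np.sqrt(dx**2 + dy**2) / (size / 2)        # 0 at center, 1 at edge

            src_x = np.clip((theta * (size - 1)).astype(int), 0, size - 1)
            src_y = np.clip((radius * (size - 1)).astype(int), 0, size - 1)
            return Image.fromarray(src[src_y, src_x])

        little_planet("panorama.jpg").save("planet.jpg")

    Skipping the rotate(180) line gives you the “tunnel” variant described above.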

  • Nikon D500: Multiple Buttons, Multiple Focus Modes

    The newer high-end Nikons, including the D500, let you assign different focus modes to different buttons. Why would you want such a thing? It’s all about fast reactions.

    Many camera models and generations have, of course, allowed you to set the “focus mode selector” switch to auto-focus and then press the “AF-mode” button and spin the main command dial to AF-C for continuous auto-focus. Similarly, many models have the “AF-ON” button, or buttons that can be assigned this focus-on-demand feature. That’s only the beginning. It’s silly to ever select AF-S mode instead of AF-C mode, since all you have to do is stop pressing the AF-ON button (while in AF-C mode) to stop focusing.

    A much more subtle focus requirement is to do something like ignore objects near your desired subject, or ignore a branch in front of your subject. As soon as you figure out how to select the desired number of focus points or how to set near-subject focus priority, something changes to spoil your shot. Now you need to start all over again, because your camera insists on focusing on a near branch, or maybe you can’t keep that single focus point (single-point AF area mode) on your erratically-moving target. The point is, the focus requirements never seem to stop changing, and you just can’t keep up. You’re tired of missing those shots. What to do?

    The newer high-end Nikons let you at least triple your chances of getting the shot. You can assign auto-focus to multiple buttons, and each button can have a totally different focus mode assignment. The Nikon D500, for instance, will let you assign the “Pv”, “Fn1”, “Sub-selector” (joystick), “AF-ON”, and your battery grip “AF-ON” buttons with a different focus mode on each one of them! On my D500, I presently have the following button assignments:

    AF-ON = D25, thumb control, for ‘general-purpose’ focus.
    Pv = Group Area, middle finger control, for near-subject priority.
    Sub-selector = Single-point, thumb control, for precision focus.
    Grip AF-ON = “=AF-ON”, which copies whatever the camera AF-ON has.

    I don’t assign the “Fn1” button for focus, because I think it’s awkward to press it while my index finger is on the shutter release. More acrobatic users may not have this issue. For me, I only want to use either my thumb or my middle finger to activate focus. Unfortunately, the Sub-selector button is squirrelly, and I have to use the focus-selector lock lever to keep the joystick from moving to different focus points instead of acting like an AF-ON button.

    To assign these buttons on the D500, go to the “Custom Settings” (pencil) menu, “f Controls”, and then “Custom Control Assignment”. For each of the desired buttons, select “AF-area mode + AF-ON”. Each button sub-menu under this option lets you select “Single-point AF”, “Dynamic-area AF” (25, 72, or 153 points), “Group-area AF”, or “Auto-area AF”.

    Auto-focus options for button assignment

    Note that not every auto-focus option is available to these button assignments (e.g. 3-D tracking isn’t there). Because this is Nikon, different camera models offer a different set of AF assignment options. The Nikon D5, for instance, has “D9” available, but the D500 starts at “D25”. When you press your assigned button, the viewfinder will instantly change to show you the corresponding active focus-point pattern. This way, you get visual confirmation that you are using the focus mode you intended.
    Single-Point AF viewfinder view (center point selected)

    25-Point Dynamic-Area AF viewfinder view

    Group-Area AF viewfinder view

    The point I want to make here is that your choices aren’t set in stone. You can experiment with different modes assigned to different buttons until you feel comfortable with them. Don’t go overboard with changing the assignments all the time, however; it will totally mess up your muscle memory. Once you get used to using different buttons to get different focus modes, you’ll wonder how you ever got by without them. You can now react nearly instantly to changing conditions and get those shots you used to miss. This is one of my absolute favorite things about my D500. #howto

  • High-speed Lens Focus Shift Explained

    I really love shooting with high-speed lenses, like my Nikkor 85 mm f/1.4 AF-S. In some ways, these lenses are like finicky race horses; they aren’t always as well-behaved as you’d like.

    The Nikkor 85 mm f/1.4 is legendary for its beautiful out-of-focus rendering (bokeh), combined with being sharp at the focus plane. This beautiful bokeh is achieved by a lens design that avoids the use of any aspherical lens elements. The price paid for this bokeh is an effect called “focus shift”, caused by spherical aberration. The outer portions of the lens focus the rays of light a little differently from the inner portions. If you stop the lens aperture down, those outer light rays are cut off and don’t contribute to the image. Spherical aberration results in the best-focus plane being located at what’s called the “circle of least confusion”. As you stop down the lens, the circle of least confusion shifts, until it stops shifting, typically at f/4 or so.

    If Nikon engineers had used aspherical lens elements in their design, they could have virtually eliminated spherical aberration. This would have given the lens even higher resolution at large apertures (with virtually no improvement at smaller apertures). All designs involve trade-offs, though. Aspherical lens elements translate into worse bokeh, all else being equal. You start to notice that out-of-focus blobs look like sliced onions, with concentric rings of light-dark patterns. Lights visible in the background at night really emphasize this effect. The outer edges of light blobs should gradually melt into the background; they shouldn’t show a ring of light around the edge of the blob. This aspherical tendency is only a generalization, however. As computer modeling gets better, the bokeh of lenses with aspherical elements is getting better. Sigma, for instance, has a single aspherical element in its 85 mm f/1.4 Art lens; its bokeh can’t compete with the Nikkor, in my opinion, but I can’t say it has ugly bokeh, either.

    Circle of least confusion with spherical aberration

    The diagram above lets you visualize what happens as you change the lens aperture. The plane of best focus is located at the “circle of least confusion”, where the light rays are focused into the narrowest bundle. This light bundle always has a non-zero diameter, but it is smallest at around f/4.0 on the Nikkor 85 mm f/1.4 AF-S. I got the diagram above (I added the labels and arrows) from this site. Many thanks to this organization for making a great graphic depiction of the “circle of least confusion”.

    The circle of least confusion travels from left to right in the diagram as you stop the lens down. With a small aperture, the light rays near the outer portions of the lens get cut off, and the remaining rays (which are consistently focused at a point) now predominate. At a small enough aperture, focus shift stops. If you keep stopping the lens down, then diffraction starts to take over: the light ray bundle starts to expand again, although it no longer shifts, and resolution degrades in proportion to the expansion of the light bundle.

    Note that spherical aberration is a result of lens design, and doesn’t correspond to manufacturing variation. You won’t find a lens copy that eliminates spherical aberration, so you can stop looking for one.

    Nikkor 85mm f/1.4 AF-S lens elements

    The picture above is from the official Nikon web site, showing the lens elements, which are purely “spherical” shapes.
    Spherical lens elements have a constant radius on each surface, which makes them much easier to grind than an aspherical surface. That constant radius translates into smooth out-of-focus backgrounds.

    When I do focus fine-tuning on my camera, I have to note the aperture that corresponds to each fine-tune value (from wide open until about f/4.0). Unless you shoot using contrast-detect (live view), you’ll need to change the fine-tune value to match the aperture, or else your pictures will be slightly out of focus. Phase-detect auto-focus uses the lens at its widest aperture, which is why you get the focus error. Contrast-detect uses the shooting aperture, which is why you don’t get any focus error in that mode. Some sample calibrations for my cameras look like this:

    D7000: f/1.4 = tune +1, f/4.0 = tune -4
    D7100: f/1.4 = tune +12, f/4.0 = tune +8
    D500: f/1.4 = tune +3, f/4.0 = tune 0
    D610: f/1.4 = tune +7, f/4.0 = tune +2

    The focus shift consistently moves away from the camera as the aperture closes, so the fine-tune value needs to decrease to compensate as the lens is stopped down. After f/4.0, the focus shift is no longer noticeable.

    Focus calibration chart, f/1.4, left side rotated further away

    Focus chart, f/4.0, showing focus shifted further to the left (away from camera)

    You can see in the focus charts above how the plane of focus shifted to the left (away from the camera) when stopping down. To fix this, the focus fine-tune would need to change from +7 at f/1.4 to +2 at f/4.0 for this camera (to shift focus back toward the camera). The focus chart was rotated to 45 degrees, with its left side further from the camera. I used the MTF Mapper program to analyze the photos of the focus chart. It makes it really easy to locate where the focus plane is. It also lets me know how much sharper the lens is when I stop down the aperture!

    In case you thought of this: I tried pressing the “depth of field preview” (Pv) button while focusing. The theory is that the lens would be stopped down during phase detect, eliminating the focus error. Unfortunately, the camera refuses to focus while the “Pv” button is pressed. Oh, well.

    I try to pay as much attention to the backgrounds as I do to the main subject in photographs. Bokeh is really, really important to me. That’s why I love my 85 mm lens so much that I’m willing to put up with its annoying focus shift. If only this lens had vibration reduction… #howto
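    If you want a fine-tune value for an aperture between your two calibration points, a linear interpolation on a stops-of-aperture scale is a reasonable first guess. That linearity is my own assumption, not anything Nikon documents, so verify against a focus chart. A sketch using the D610 values above:

        import math

        def fine_tune(f_number, cal_wide=(1.4, 7), cal_stopped=(4.0, 2)):
            """Linearly interpolate an AF fine-tune value between two
            (f-number, tune) calibration points on a stops-of-aperture scale."""
            n0, t0 = cal_wide
            n1, t1 = cal_stopped
            stops_down = 2 * math.log2(f_number / n0)   # stops from the wide point
            total = 2 * math.log2(n1 / n0)              # stops between the two points
            return round(t0 + (t1 - t0) * stops_down / total)

        print(fine_tune(2.0))   # D610 estimate at f/2: about +5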

  • Coolpix B500 40X Super-Zoom Camera and Lens Review

    I was recently lent a Nikon Coolpix B500, which has a 22.5 mm to 900 mm zoom (35 mm format-equivalent focal length). The lens is actually 4 mm f/3.0 to 160 mm f/6.5, and it has a macro mode as well. I haven’t ever seen a detailed resolution analysis of a super-zoom like this, so I thought I’d take up the job myself. I won’t dwell on the camera features too much, but I can’t help making at least a few observations about the camera body.

    Camera Body Highlights

    The camera itself is purely amateur; it doesn’t support raw format (just jpeg) or even manual exposure control. Its 16-megapixel sensor is breathtakingly small: 4.62 mm X 6.16 mm, with 1.34-micron pixels. The pixel count is 4612 X 3468. A modern smart phone sensor actually has bigger pixels than this (typically 1.4 micron). But the camera costs around $300.00, which is cheaper than those smart phones.

    I’m a big fan of lenses that provide great out-of-focus backgrounds (bokeh). Because of the tiny sensor in this camera, depth of focus is huge until you get to long focal lengths. If you can manage to get the background out of focus, the results are actually quite pleasing.

    For those of you who are interested, it has both Wi-Fi and Bluetooth, and supports SnapBridge. You can do the usual remote control from your smart phone, if you wish. It uses 4 AA batteries (I used rechargeables).

    The camera, incredibly, is capable of shooting 7.7 frames per second in “sports” mode at full resolution. It can only shoot 7 frames at this rate, however, so its buffer is tiny. Given the lack of a viewfinder, you’re pretty much out of luck tracking any action if you zoom in to any significant degree. You can shoot HD video, of course. I’m personally not very interested in video, so I won’t discuss it any further.

    The ISO range goes from 80 to 3200, but image quality hovers near zero at ISO 3200. The shutter range is from 1 second to 1/5000 second (faster than my D610!). This camera is very, very poor at focusing in dim light and low contrast; it gets ridiculously bad anywhere near maximum zoom. This is probably my biggest gripe about the camera.

    The camera grip is really, really nice; it’s deep and has a very tactile rubberized surface. The camera is very light; a bit too light for my taste. I’m used to the weight of ‘real’ cameras, like the D500 and D610, and I even use battery grips with those. I prefer the inertia and balance that DX and FX camera bodies provide (but I might change my tune at the end of a ten-mile hike).

    The camera’s lens cannot be manually focused or zoomed; you focus with the traditional half-press of the shutter button, and you zoom either by rotating the shutter collar or pressing a lever on the side of the lens. There is no viewfinder, either; only a 3-inch, 921k-dot LCD screen. Good luck finding and tracking a subject in the sunshine. Its big brother, the B700, costs about 50% more; it sports a viewfinder and the usual PSAM controls, plus a 60X zoom. For about $90.00 you can get a Hoodman Loupe to cover and view the LCD screen in sun (and you can use it on your other camera screens in Live View). Not the most convenient solution, but it works.

    B500 at 160 mm zoom

    B500 top view. Really deep grip.

    B500 articulating 921k-dot 3-inch LCD. No viewfinder.

    The Lens

    The lens has a minimum focus distance of 12 inches at 4 mm and about 11 feet at 160 mm. In macro mode (only at 4 mm) it will focus down to about 0.4 inches!
    The lens has vibration reduction (which can be turned off for tripod use), and it works amazingly well. There are no filter threads, and there is no lens hood, either; you’ll need to shade the lens with your hand.

    It’s theoretically easier to design a lens that only has a small image circle, and this camera sensor only needs a really small image circle to cover it. I think you’ll agree that this design theory is borne out with this camera/lens combination when you see the finished results.

    The focus speed is pretty lazy. The zoom is painfully slow, mushy, and approximate. Once the focus and zoom get there, though, the shot usually turns out just fine. I realize that I have been spoiled with high-performance cameras and lenses, but my patience was sorely tested at longer focal lengths and in anything other than bright sunlight. I just have to keep telling myself how inexpensive this rig is.

    Close Focus

    A quarter with the macro setting (4 mm)

    They weren’t kidding about the 0.4 inches lens-to-subject distance in macro mode. Lighting at this distance is truly a nightmare. I had to direct a light beam in at a ridiculously steep angle for the above shot. You can get as snarky as you want about the lighting here; I just wanted to see how close I could get. I’d have to call macro mode largely a hoax: if you can’t illuminate it, you can’t photograph it. And it’s not really magnified that much, either. A bug with any sense of self-preservation would be long gone.

    Flare resistance is pretty good. 5 mm f/6.4

    Flare and Chromatic Aberration

    Take a look at the sample photo above. The lens showed remarkable resistance to flare, even though I pointed it right into the sun. Impressive.

    160 mm (900 mm FX) f/6.5 chromatic aberration

    Since I can’t shoot raw format, I can’t tell you if the low level of color fringing is due to a great lens design or to in-camera processing. There’s a little purple fringing in the bottom left corner, but not too bad. Shots like this really emphasize any lateral chromatic aberration.

    People really obsess about the 900 mm FX-equivalent maximum zoom, but I was more impressed with the 4 mm (22.5 mm FX-equivalent) end of the zoom. This lens goes really wide (77.3 degrees horizontal), compared to typical kit zooms that only go to 27 mm FX-equivalent (67.4 degrees horizontal). The more expensive B700’s lens doesn’t go as wide as this lens does (73.7 degrees horizontal); I’d much rather have this wider angle than a longer zoom. Throughout the entire zoom range, there is essentially zero distortion! The pictures below will demonstrate this. The main weakness of this lens is meridional-direction resolution, which approaches what I’d describe as shameful at longer focal lengths; it’s actually not too bad at the wide end.

    That 40X Zoom

    Wide 4 mm. You can barely see the buildings

    Telephoto 160 mm

    I added an arrow in the 4 mm shot to show where I zoomed in. There was a fair amount of atmospheric haze, so don’t mistake that for flare or a lens contrast problem. Now that’s a zoom.

    Lens Resolution

    The following resolution analysis was done using jpeg with default sharpening. If it were available, I would have shot in raw and performed the analysis without any sharpening. I use the (free) MTFmapper program, which I describe here. I have a 41” X 60” resolution chart to analyze the lens, except at 160 mm.
    I would have had to be nearly 200 feet away from the chart at that 900 mm-equivalent focal length to photograph the whole thing; instead I used a small chart that’s only 7” X 10”, at about 35 feet.

    41” X 60” resolution chart, 4 mm (22.5 mm FX equivalent)

    Note in the shot of the resolution chart that there isn’t any observable distortion, judging by the chart edges. I didn’t note distortion (or perceptible vignetting) at any focal length.

    Please be aware in the following measurements that my usual results, measured in “MTF50 line pairs per millimeter”, are highly misleading here. I’m accustomed to seeing a good “MTF50 lp/mm” lens measurement peak at 40 to 50; seeing measurements around 300 seems astonishing. But here’s the deal: the camera sensor has a lot fewer millimeters in it, so it’s much less impressive than it sounds at first blush. A better unit of measurement for resolution in this case is “line pairs per picture height”, which gives you the total available resolution in the picture. I have decided to provide 4 different resolution measurement units, so that you can take your pick: “cycles per pixel”, “MTF50 lp/mm”, “line pairs per picture height”, and “lines per picture height”. Besides the 2-D plots across the camera sensor, I looked at the low-level data to provide the peak center and corner resolution measurements.

    Another little fact to note: the “MTF10/30” contrast plots I have included are based on jpeg with in-camera processing. These plots should really be based upon un-sharpened raw files to be directly comparable to other lenses. Since I can’t shoot raw with this camera, I wasn’t given a choice. The contrast numbers look too good to be true, and they are. The plots are at least useful for comparing center-to-edge differences and meridional-versus-sagittal performance.

    In the MTF50 resolution plots that follow, red is good and blue is bad. Unlike most other web sites, these plots show nearly 100% of the camera sensor’s field of view. This lets you evaluate lens resolution across the whole field of view instead of just a slice or a single point. Recall that this camera sensor is about 4.6 mm X 6 mm. The resolution measurements were made with the lens wide open; stopping down the aperture will make all of the readings even better. I turned the lens vibration reduction off for these shots, and I of course used a large tripod.

    MTF50 results at 4 mm f/3.0

    The MTF50 lp/mm resolution measurements at 4 mm show how much better the lens is in the sagittal direction (think spokes of a wheel) than the meridional direction.

    4 mm      cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
    Center    0.49           368           1699    3399
    Corner    0.32           240           1110    2220

    4 mm f/3.0 MTF10/30 chart

    The MTF10/30 chart is misleading, since it’s based upon a jpeg image with default sharpening. Again, this makes the resolution and contrast look better than they really are, compared to raw, un-sharpened shots. I don’t have a choice here, however. The lens meridional direction is consistently worse than the sagittal direction at all focal lengths. When the sagittal and meridional resolution differ by a significant amount, you get astigmatism. You will note that astigmatism starts to become a problem at about 2/3 of the way from the lens center when zoomed to 4 mm.

    4 mm corner detail (cycles per pixel on each edge)

    You can see a huge quality difference when comparing the edges that point toward the image center (sagittal) with the meridional edges. This is why it’s important to analyze the edge directions separately.
    Notice in the picture above the minimal chromatic aberration, which gets emphasized in a high-contrast shot like this.

    MTF50 results at 17.6 mm (99 mm equivalent) f/4.6

    17.6 mm   cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
    Center    0.38           285           1318    2636
    Corner    0.33           248           1144    2289

    17.6 mm f/4.6 MTF10/30 chart

    At 17.6 mm (99 mm FX equivalent) the resolution is just plain spectacular. Astigmatism is very well controlled.

    MTF50 results at 35.9 mm (202 mm equivalent) f/5.4

    35.9 mm   cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
    Center    0.41           308           1422    2844
    Corner    0.34           255           1179    2358

    35.9 mm f/5.4 MTF10/30 chart

    Very, very good resolution at 35.9 mm.

    MTF50 results at 52 mm (294 mm equivalent) f/5.7

    52 mm     cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
    Center    0.41           308           1422    2844
    Corner    0.34           255           1179    2358

    52.2 mm f/5.7 MTF10/30 chart

    Meridional resolution takes a nosedive in the corners here, but sagittal resolution is excellent.

    MTF50 results at 70 mm (394 mm equivalent) f/5.9

    70 mm     cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
    Center    0.40           300           1387    2774
    Corner    0.31           233           1075    2150

    70.0 mm f/5.9 MTF10/30 chart

    Again, meridional performance in the corners is pretty bad, but the sagittal performance is great.

    MTF50 results at 160 mm f/6.5

    160 mm    cycles/pixel   MTF50 lp/mm   lp/ph   l/ph
    Center    0.27           203           936     1873
    Corner    0.21           158           728     1457

    160 mm f/6.5 MTF10/30 chart

    Performance takes a giant hit at maximum zoom. Be that as it may, check out the shot below of the moon at 160 mm; it may not compete with ‘pro’ monster lenses, but it still looks pretty good.

    Samples

    160 mm (900 mm equivalent) f/6.5 1/1000s ISO 125, un-cropped

    Rufous hummer detail crop

    The default in-camera noise reduction slightly smears fine details, even at low ISO and in bright light. The majority of the ‘smearing’ is probably the meridional-direction weakness in the lens, however.

    160 mm f/6.5 1/250s ISO 125, slightly cropped.

    The moon shot was hand-held at maximum zoom. Kudos to the VR system in this lens! Considering the maximum zoom being used and that I hand-held the camera, these results are nothing short of fantastic.

    5 mm f/3.2 1/125s ISO 125

    Summary

    There are certainly limitations with the B500 camera, but it is capable of very high quality photographs. Keep in mind that this camera/lens combination costs less than many DSLR kit lenses! It’s hard to draw any blanket conclusion on this camera; it has a weird mix of really nice and really irritating features. Personally, I require more direct control (P, A, S, M) than this camera provides, and I really missed raw-format files. The lens, however, exceeded my expectations; with a bit of coaxing, it’s possible to take really nice pictures with the B500. #review
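    For reference, the four resolution units in the tables above are related by sensor geometry alone. A small sketch (plain Python, using this camera’s 1.34-micron pixels and 4.62 mm sensor height) that roughly reproduces the 4 mm center row:

        def resolution_units(cycles_per_pixel, pixel_um=1.34, height_mm=4.62):
            """Convert cycles/pixel into lp/mm, lp/picture-height, and lines/picture-height."""
            lp_mm = cycles_per_pixel / (pixel_um / 1000.0)   # line pairs per millimeter
            lp_ph = lp_mm * height_mm                        # line pairs per picture height
            return lp_mm, lp_ph, 2 * lp_ph                   # lines = 2 x line pairs

        print(resolution_units(0.49))   # about (366, 1689, 3379); the table's
                                        # 368/1699/3399 differs only by pixel-pitch rounding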

  • Remote Camera Control Using digiCamControl

    When you control your camera via a cable to your computer, it’s called tethered capture. The two most popular communications cables are Ethernet and USB. This article reviews a tethering program called digiCamControl, which uses USB for remote control. I was also going to review tethered capture using Lightroom, but that program is so pathetic at remote control that I decided not to bother. Most people would use a program like this in a studio environment, but it can also be very useful for situations such as shooting from a blind. It's possible, using USB hubs, to 'daisy chain' USB cables together and extend the cable length. I want to concentrate on this program’s ability to perform automatic focus stacking, which I’ll cover in some detail.

    The (free) digiCamControl program has a full complement of camera controls, live view, automatic photo transfer to your computer, exposure bracketing, time lapse (movies), web-server remote control (e.g. from your smartphone), focus stacking, multiple screen control, and motion detection. This is a Windows program, and it supports most Canon, Sony, and Nikon DSLR/mirrorless cameras (about 100 models). I’m using it with a Nikon D500 and Windows 10. It can be downloaded from here: There is some online documentation, but I don’t consider it very thorough.

    digiCamControl startup screen after turning the camera on

    When you first start up digiCamControl and turn on your camera, you’ll see something like the screen above. You could start taking pictures by clicking the “aperture” icon in the top-left, but you don’t necessarily know if the camera is in focus yet. The program seems pretty forgiving about turning your camera on before or after starting digiCamControl. The Nikon camera LCD panel shows “PC” to indicate the camera is connected to the computer (via the USB port).

    digiCamControl main screen after “Capture” click

    The main reason to use this program is to see a big, beautiful live image on your computer screen, so I go straight to the “Live View” screen by clicking the “Lv” button. Don’t invoke live view from your camera. You can still use your camera’s shutter release even while connected to the computer. You have full control over exposure, white balance, ISO, and focus from the “Live View” screen on your computer.

    Download from camera memory card

    You can separately download pictures from your camera memory card (the main screen “download” button), which will give you thumbnail views of what’s on your camera.

    Live View screen in digiCamControl

    The camera LCD doesn’t display its live view, so it doesn’t get hot or consume batteries unnecessarily showing you what’s already on your computer screen. Your live view is just on your computer’s screen, where you want it, after you click the “Lv” button. You get a histogram while in live view, too. Click with the left mouse button on the live view screen where you want the focus point located, and then click the “Autofocus” button to focus there. You should see a green square around the focus point location.

    By default, your pictures will go directly to your computer, such as C:\Users\Ed\Pictures\digiCamControl\Session1. You can assign the session name where you want the photos to go. Since they go to your computer, you’re essentially unlimited for storage. There is pretty significant battery drain while remotely controlling your camera; it’s handy that the battery level is displayed on the main screen. Use a battery grip to make battery drain less of an issue.
    Live View Exposure Controls

    The screen shot above shows how you can set your camera controls from the Live View “Control” dialog.

    Motion Detection and Intervalometer

    Motion trigger and intervalometer

    You can trigger your camera via motion detection from Live View, as well as take a series of timed shots (intervalometer).

    Focus Stacking

    When you want to stack photos to increase depth of field (even in a landscape shot), you may just fall in love with this program feature. digiCamControl gives you great control over how to configure the near limit, far limit, and number of shots in between for stacking photos.

    Session setup

    Start by clicking the “Session” menu option from the main screen, then “Add new session”. Fill in a session name and browse to the folder where you want to save the focus stack photos. A session lets you organize your shots into logical groups.

    Once you click the “Lv” button from the main menu, you enter Live View. You may need to maximize the live-view screen to see it. Click on the live view screen with your mouse where you want the “near focus” focus point to be (a green square). Click on the “Auto focus” button to focus on that spot.

    Advanced Focus Stack dialog

    Click the (screen top) “Preview” button and then use the mouse wheel to zoom in on the focused spot to verify critical “near” focus. Click on the Preview screen “X” to close the preview dialog and return to live view. Refocus if necessary. Use the bottom-center buttons to rough-focus (the “<”, “<<”, “<<<”, “>”, “>>”, “>>>” buttons), where more arrows focus in larger steps. Click on the left-hand “lock” button at the bottom of the screen to prevent the near focus from changing any more. Use the same arrow controls to obtain the desired “far” focus, and then click on the right-hand “lock” button at the bottom of the screen to prevent the far focus from changing.

    Now that the focus range is locked in, expand the “Focus Stacking Advanced” section on the left edge of the Live View screen. Enter the desired number of photos, the focus step size (start with around ‘30’), and the wait time between photos. Click the “Preview” button just below the “Focus Stacking Advanced” controls. This will automatically run through all of the focus steps before actually taking the photos (near-to-far), showing the count as it steps through your requested number of shots. If it looks good, then click on “Start” to let the photo sequence get captured to your computer.

    Although the digiCamControl program includes the “enfuse” plugin and can discover other plugins (my CombineZP, for instance), I got errors when I tried to use the plugins. Personally, I use the free stand-alone CombineZP program directly for my focus stacking.

    The screen shot above shows the Live View screen while setting up “Focus Stacking Advanced”. The screen shot was taken after locking in the far focus position; you can see how the near focus looks very fuzzy. After letting digiCamControl control the camera to take the 9 requested shots and save them to my requested computer folder, I ran the CombineZP program (very similar to the older CombineZM) to stack them into a single shot, as shown below. If you’re interested, I wrote an article on focus stacking here:

    Stacked result from 9 shots: CombineZP “soft stack”

    I might mention that if you haven’t used focus-stacking software before, you need to make your photo framing a little wider than what you want for the finished shot.
    The photo edges will show some unwanted artifacts that are related to the shifting focus in each shot. Note that the bottom of the photo above shows a “mirror image” that should get cropped off in your editing software.

    Conclusion

    There is plenty to explore in this tethering program. Many of its features, such as exposure bracketing, can be done more easily in-camera. Focus stacking, however, is what I consider a real forte of this program. #howto
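    If you’re curious what stacking software is doing under the hood, here is a toy per-pixel “sharpest frame wins” sketch. It is not CombineZP’s actual algorithm, it assumes the frames are already aligned and the same size, and it assumes OpenCV and numpy are installed; "stack/*.jpg" is a placeholder for the shots digiCamControl saved.

        import glob
        import cv2
        import numpy as np

        # Aligned source frames, e.g. the shots saved to a session folder
        frames = [cv2.imread(p) for p in sorted(glob.glob("stack/*.jpg"))]

        def sharpness(img):
            """Local sharpness: absolute Laplacian, smoothed to avoid speckled seams."""
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
            return cv2.GaussianBlur(lap, (31, 31), 0)

        scores = np.stack([sharpness(f) for f in frames])   # (n, h, w)
        best = np.argmax(scores, axis=0)                    # sharpest frame index per pixel

        stack = np.stack(frames)                            # (n, h, w, 3)
        rows, cols = np.indices(best.shape)
        cv2.imwrite("stacked.jpg", stack[best, rows, cols])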

  • How to Measure Lens Vignetting

    How dark are those corners in your photos, by the numbers? How much do you have to stop down a lens to lighten the corners? You can get the answers for yourself using a variety of image editors. The only special equipment you probably need is a grey card, or a subject with neutral tones and even illumination. If you want to explore how vignetting changes from close focus to infinity, you can photograph the clear blue sky as a target (unless your lens is a super-wide). I’ll show you how to measure RGB values in three different image editors.

    Capture NX-D example to get RGB values (resolution chart photo)

    The mouse cursor location is used for RGB value feedback in Capture NX-D; the values are shown on the bottom edge of the window.

    Zoner Photo Studio Pro shows RGB and cursor X,Y

    The Editor in Zoner Photo Studio Pro displays both the cursor location and the RGB values while using the “magnifying glass” cursor, for instance.

    Photoshop example using a grey card target

    Shown above, you can use Photoshop to sample locations of interest from a photo of a grey card. Here, I selected a point near the center and a point in a corner, and used the “color sampler” to get the RGB values. Make sure you have correct white balance, so that the R, G, and B values match (or are at least close) when using your grey card.

    The selected central point in the example has an average RGB of 158, while the corner point has an average of 79. You might assume that values that are half as big mean a one-stop difference between the center and the corner, but life isn’t quite that simple: the RGB values are non-linear in response to brightness.

    The use of a grey card makes viewing the lighting distribution across a photo much simpler. Since the RGB values should be pretty close to the same at any selected location on a neutral grey card (R=G=B in daylight with proper white balance), the overall evaluation of vignetting is just easier. My resolution charts also work well for this purpose. The grey card analysis above was done using the Nikkor 18-140 f/3.5-5.6 wide open at f/5.6 and 140 mm, which is pretty much this lens at its worst; it happens to be the worst lens for vignetting that I own. Stopping down quickly minimizes whatever vignetting there is, by the way.

    So, how do we use these RGB values to get F-stop values? As I mentioned above, the RGB values don’t relate in a very straightforward way to F-stops. One way to solve the problem is to set your camera to manual exposure and take a set of RAW-format pictures of a grey card. Start with a photo that’s about 3 stops over-exposed, and then change your exposure (aperture or shutter) by a third of a stop for each following shot. Keep this up until your last shot is at least 3 stops under-exposed (a total of 19 shots). For Ansel Adams fans, this covers Zone 8 through Zone 2.

    In your photo editor, you can then read the RGB values of each shot to note the progression. If any shot has a “255” RGB reading, then you’ve got a blown-out photo and you won’t be able to use it. Few lenses have more than 3 stops of vignetting. With this exposure shot collection, however, you should be able to use these RGB values for more than just vignette analysis. Your library of 1/3-stop photos and their RGB values will let you later analyze any photo where you want to critically evaluate brightness and contrast ranges in terms of F-stops.
    For my own tests, I changed the shutter by third-stop values all the way from +3 stops through -5 stops (Zone VIII through Zone 0). I used the “ExifTool” program explained here to get the “Light Value” (or “EV”) of each shot, giving me a list of decimal numbers for easy math with the stops. I then used a photo editor to get the RGB for each shot (it varies a little bit in each shot, so I noted a typical value, where R=G=B). Notice that the Zone VIII shot is close to the 255 maximum.

    Stops   Zone   EV     RGB
    +3.0    VIII   2.7    250
    +2.7           3.0    239
    +2.3           3.3    234
    +2.0    VII    3.6    225
    +1.7           4.0    215
    +1.3           4.3    200
    +1.0    VI     4.6    180
    +0.7           5.0    160
    +0.3           5.3    140
     0.0    V      5.7    125
    -0.3           5.9    105
    -0.7           6.3    85
    -1.0    IV     6.6    70
    -1.3           6.9    60
    -1.7           7.3    44
    -2.0    III    7.6    40
    -2.3           7.9    37
    -2.7           8.3    33
    -3.0    II     8.6    28
    -3.3           9.0    24
    -3.7           9.3    22
    -4.0    I      9.6    21
    -4.3           10.0   19
    -4.7           10.3   15
    -5.0    0      10.6   13

    Given the EV-RGB list above, let’s get back to the lens vignetting problem. The lens center measured about 158 (R), and the corner about 79. That corresponds to roughly EV 5.0 and EV 6.4, a difference of about 1.4 stops. I noticed that the DxOMark site rated this lens’s vignetting at “1.2 stops”. Pretty close.

    Exposure Value versus RGB value

    You can see how non-linear the RGB values are compared to the EV. It’s well known that the numeric separation is large in bright areas and small in dark areas.

    Conclusion

    I tried the experiment above on a couple of different computers and in three different image editors; the results were in close agreement each time. You could probably just use the results of my experiment directly for your own lens vignette analysis. #howto
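    If you’d rather let code do the interpolation, here is a small sketch (assuming numpy is installed) that turns the table above into a stop-difference calculator; it reproduces the roughly 1.4-stop result for the 158/79 example.

        import numpy as np

        # (RGB, EV) pairs from the table above, ordered by increasing RGB
        rgb = np.array([13, 15, 19, 21, 22, 24, 28, 33, 37, 40, 44, 60, 70,
                        85, 105, 125, 140, 160, 180, 200, 215, 225, 234, 239, 250])
        ev = np.array([10.6, 10.3, 10.0, 9.6, 9.3, 9.0, 8.6, 8.3, 7.9, 7.6,
                       7.3, 6.9, 6.6, 6.3, 5.9, 5.7, 5.3, 5.0, 4.6, 4.3,
                       4.0, 3.6, 3.3, 3.0, 2.7])

        def ev_of(rgb_value):
            """Interpolate an EV for a measured 8-bit RGB value."""
            return np.interp(rgb_value, rgb, ev)

        center, corner = 158, 79
        print(f"vignetting: {ev_of(corner) - ev_of(center):.1f} stops")   # about 1.4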

  • Keeping Up with MTFMapper: Any MTF You Want

    Since I use it so much, I like to keep readers aware of new features in the MTFMapper program, available here, written by Frans van den Bergh. As of this writing, his latest version is 0.6.18. My own favorite features of this program are focus measurement and 2-D resolution plots. I suspect that many users are big fans of being able to make their own MTF contrast plots, however. MTF contrast plots are by far the most popular way to compare lenses, and they’re still basically the only way to get lens performance data from most manufacturers.

    If you keep up with Roger Cicala at LensRentals.com, you’ll know that he always includes these plots in his lens reviews, and that he has started to include MTF contrast plots at resolutions up to 50 lp/mm. This decision is probably driven by modern camera sensors having so much more resolution than in times past; MTF30 just doesn’t cut it anymore.

    MTF contrast plots provide a quick analysis of the percent lens contrast at a particular resolution; they start at the lens center and extend to the corner of the field of view. They traditionally measure in both the meridional (tangential) and sagittal directions.

    MTFMapper 0.6.18 now lets you produce MTF contrast plots with your choice of resolutions! Instead of its default of 10 lp/mm and 30 lp/mm, you now get to pick which resolutions are plotted. But wait, there’s more: you can add a third plot at yet another selectable resolution.

    Sample MTF contrast plot at 10, 30, and 50 lp/mm

    To produce a plot like the sample shown above, you need a photograph of a resolution chart. MTFMapper is very flexible about chart design; it basically only needs to see black rectangular shapes (or even trapezoids) against a light background. The program locates the straight edges of the rectangles and takes a measurement of every edge. Once you open the desired chart photos and let the program crunch the measurements, select the “lensprofile” output for your photo to see the MTF contrast plots.

    Configure MTFMapper for MTF contrast plots

    Before you can produce the contrast plots, you need to tell the program what you want; the Preferences dialog shown above is where you provide that information. Your answers will be garbage unless you enter the correct “pixel size” in microns for your camera. You can see above that I added the “lp1”, “lp2”, and “lp3” arguments to get all three MTF contrast plots; I wanted the 10, 30, and 50 lp/mm measurements. You’re free to select 1, 2, or 3 plots at resolutions of your choice. The program won’t stop you from selecting something like “--lp3 60” to get a 60 lp/mm plot. Not many of today’s lenses/sensors can perform at this level, but if a manufacturer makes them, MTFMapper can measure them.

    Just for fun: MTF contrast plot at 10, 40, and 60 lp/mm. 105mm f/2.8 Micro-Nikkor

    The 105mm Micro-Nikkor measurement above looks pretty bad on that green plot, until you realize that it’s at 60 lp/mm. This is using the Nikon D610 (5.95-micron pixels).

    I wanted to mention that you can get the MTF contrast information for a particular edge at a particular location in the field of view, if you wish. For this information, you’d select the “annotated” output for your photo. Locate the edge you want to analyze, and then left-click the cyan-colored measurement; a plot will get displayed. You can hold the “shift” button down and get up to 3 plots of 3 edges displayed at once. The dialog is called “SFR/MTF curve”.
    The letters “SFR” stand for “Spatial Frequency Response”, which is a synonym for “Modulation Transfer Function”.

    MTF curve details for a single edge in the resolution chart

    The shot above shows how you can zero in on a single edge for very detailed analysis. In this example, I was interested in the “MTF30” (30% contrast) frequency in the meridional direction, so I dragged the gray bar to where it displayed the contrast “0.299”, which corresponds to a frequency of 0.4 cycles per pixel. Knowing the sensor pixel size (5.95 microns), the 0.4 c/p frequency can be converted into “line pairs per millimeter” resolution as follows:

    lp/mm = (c/p) * V_pixels / V_mm, where the sensor is 4016 X 6068 pixels and 24.0 mm X 35.9 mm

    MTF30 lp/mm = 0.4 * 4016 / 24 = 66.9

    Similarly, the MTF50 lp/mm on this edge (50% contrast) would be 0.287 * 4016 / 24 = 48.

    You can save the plot image (click “Save image”), and you can also save the plot data as comma-separated values for use in Excel.

    Other New MTFMapper Features

    This version of the program, when you use the newest resolution chart design at the recommended shooting distance, lets you get very detailed information about how your camera is aligned to the chart.

    Newest resolution chart with the round “fiducials”

    The example shot above (taken from Frans’ documentation) shows how you can get feedback about how the chart is oriented compared to your camera sensor. The smaller the roll, pitch, and yaw readings, the better-aligned the chart is. As I already mentioned, MTFMapper can use older resolution charts for resolution analysis; if you print the newest chart, however, you get these additional features. In the sample shown, the “Yaw = 2.68” indicates that the chart is rotated about a vertical axis such that the right-hand side is further from the camera than the left-hand side. If you have trouble reliably mounting your chart in front of your camera, this feature could be really useful to you.

    Please don’t forget to read the “help” documentation that Frans includes with his program. It is packed with useful insights into the how and why.

    Conclusion

    If you have the inclination, this newer version of MTFMapper should enable you to compare your own lens copy against other web sites, when they only provide MTF contrast plots. Keep in mind that some web sites (like LensRentals) measure only the lens and don’t include the camera sensor. If that is the case, your own measurements won’t look as good; the camera sensor always drags the measurements down a little bit. Thanks once again, Frans. #review
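    For convenience, the cycles/pixel conversion worked through above reduces to a one-line function. A sketch in plain Python, using the D610 numbers from the article (4016 vertical pixels, 24.0 mm sensor height):

        def cpp_to_lpmm(cycles_per_pixel, v_pixels=4016, v_mm=24.0):
            """Convert an MTFMapper cycles/pixel reading to lp/mm using the
            sensor's vertical pixel count and height."""
            return cycles_per_pixel * v_pixels / v_mm

        print(cpp_to_lpmm(0.400))   # the MTF30 edge above: ~66.9 lp/mm
        print(cpp_to_lpmm(0.287))   # the MTF50 edge above: ~48.0 lp/mm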

  • Portrait Retouching Using Masks

    If you want to make friends, learn how to retouch a portrait. Nobody likes themselves as-is, despite what they may say. It’s often been said that your goal should be to make a person look 10 years younger, but no more. If you go too far with retouching, you’ll make a portrait that looks completely fake, and you won’t get thanked for that. On the other hand, it’s commonly expected that portrait photographers are also dermatologists, plastic surgeons, dentists, and ophthalmologists.

    You will be doing yourself a favor if you shoot your photographs using camera picture controls such as “portrait” or “neutral”. Avoid “vivid” like the plague; my favorite is “neutral”. Use a slightly long lens, like the classic 85 mm, to get a pleasing perspective. Back in the day, the 105 mm was king; it’s still a great choice for portraiture.

    Different parts of the face have completely opposite requirements; some need sharpening and others need softening. Some parts need more saturation, some need less. To meet these contradictory retouching needs, the best tool is the mask. Many image editors support masking. I’m still a die-hard Nikon Capture NX2 fan, so I’m going to concentrate on how that program uses masks.

    Lightroom (à la the ‘Adjustment Brush’) has masks. Lightroom’s “Auto Mask” can select non-circular shapes by looking for similar coloration, but I find its masking a bit too limiting. This is probably my least-favorite program for masks.

    Photoshop, of course, has masks. It has always struck me as being just a bit too complicated and time-consuming for my taste, but that may be because I haven’t invested sufficient time in it. If you’re comfortable with it, then by all means use it.

    Zoner Photo Studio Pro supports masks via the “Selection Brush”, lassos, circles, rectangles, etc., along with “Mask: Do Not Show”, “Mask: Normal/Inverted”, etc. Similar to Capture NX2, you can soften the edges of the selection and erase your selection mistakes. Once you make your mask, you can apply softening, sharpening, or other effects that only affect what’s inside the mask.

    Zoner Photo Studio Pro mask in the “Editor” tab

    Zoner Photo Studio Pro masking example using the Brush Selection

    Portrait Retouching Using Nikon Capture NX2

    I want to show you how I use Capture NX2 to accomplish retouching, but you can pick your own favorite editor to get the same job done. I realize that Capture NX2 is now un-supported by Nikon. If you want to keep using it with your RAW images, check out this article to convert your newer camera files into a RAW format that Capture NX2 can understand. In this program, you perform mask selection/adjustment pairs: after an adjustment, you click on “New Step”, then select another mask and the adjustment associated with that mask.

    Mask Tools

    You need to know how to add a selection mask and, just as important, how to erase one. Some picture details, such as the corner of an eye, would be extremely difficult to accurately select in a single step. The mask selection brush is typically too wide to easily get into little nooks and crannies, and constantly adjusting the brush diameter is horribly inefficient, a losing proposition. It’s much easier to paint outside the lines and then switch to the mask “eraser” to clean up your mask.
Capture NX2 mask controls: Add and Subtract

Don’t be afraid to use a mask that goes beyond the area you want

Switch to the mask “eraser” and clean up the corners

Finished mask after erasing around the nooks and crannies

Retouching Teeth

Few people have really white teeth, and nobody has perfectly white teeth. When retouching, you need to “de-saturate” teeth and also brighten them, but don’t go too far with this. First you reduce the yellowing by lowering the color saturation, which will leave the teeth looking gray. Next, you brighten the teeth (without making them pure white).

You want to select only the teeth to whiten and brighten them

Paint the mask (green) over the teeth in Capture NX2

After you mask an area, you will need to make it invisible while you apply an adjustment. In Capture NX2, you hide the mask by changing the selection from “Show Overlay” to “Hide Selection”, as indicated by the arrow on the right-hand side of the picture above.

Fine-tune the saturation

After you use the “Selection Brush +” to mask only the teeth, select the Saturation/Warmth adjustment. Avoid 100% opacity, or your edits will look a bit “fake”. Also adjust the mask feathering, to avoid hard edges. Change the mask selection from “Show Overlay” to “Hide Selection” to see the progress of the saturation effect. If you make mistakes while painting the (green) selection mask, simply click on the “Selection Brush –” to erase the parts of the mask that you don’t want. Masks don’t have to be continuous, so you can do things like selecting both eyes.

Teeth after de-saturation. Gray is better than yellow, but not by much.

Change the mask selection to “Hide Selection” while adjusting the saturation (or to see any effect you’re working on). The teeth may still look a little disappointing, since they changed from yellow to gray. Not to worry.

Increase brightness, but with a mask selecting only the teeth.

Choose the Brightness adjustment, while using the mask over the teeth. Change the mask to “Hide Selection” again, while increasing the brightness. Avoid the temptation to over-brighten the teeth; real teeth are slightly yellow and slightly gray. When you’re happy with the way the teeth look, click “New Step” to finish (assuming you’re using Capture NX2).

Fixing Eyes

Most eyes need three different adjustments. The iris typically looks better when its color is more saturated. Similar to teeth, the whites of the eyes may need some de-saturation, and they always need brightening. You also want the eyes, brows, and lashes to be very sharp (via the ironically named unsharp mask).

Eyes with typical issues that need improvement

Mask used for brightening and de-saturation of any red color

Eyes and brows need extra sharpening

A portrait just won’t look good if the eyes, lashes, and brows aren’t sharp. Make a mask for them and apply the Unsharp Mask.

Repair Eye Bags

Repairing bags under the eyes is typically a two-step process. They usually need more Gaussian Blur than the rest of the face, and possibly even some Healing Brush. They usually need an extra brightness adjustment, too.

Make a separate mask for enhancing under the eyes

If makeup isn’t used under the eyes, then that area usually needs to be brightened and have a little Healing Brush applied. The brightening needs a mask, but the Healing Brush doesn’t.

Skin

Here’s some advice: don’t go crazy with the “Healing Brush”. You can waste a lot of hours trying to heal every blemish on an entire face. Try this instead: Gaussian Blur.
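To make the mask idea concrete outside of any particular editor, here’s a rough sketch of a feathered mask plus Gaussian Blur in Python with the Pillow library. The file names, radii, and opacity are all hypothetical starting points, not anything prescribed by Capture NX2 or Zoner:

    from PIL import Image, ImageFilter

    # Hypothetical files: a portrait, plus a grayscale mask painted white
    # where the skin should be blurred and black over the eyes, brows,
    # nostrils, and mouth.
    photo = Image.open("portrait.jpg")
    mask = Image.open("skin_mask.png").convert("L")

    # Feather the mask edges so the effect fades in gradually.
    feathered = mask.filter(ImageFilter.GaussianBlur(radius=15))

    # Blur a copy of the whole photo, then scale the mask down to roughly
    # 80% "opacity" so a little original skin texture shows through.
    blurred = photo.filter(ImageFilter.GaussianBlur(radius=6))
    opacity = feathered.point(lambda v: int(v * 0.8))

    # Composite: blurred pixels where the mask is bright, original elsewhere.
    retouched = Image.composite(blurred, photo, opacity)
    retouched.save("portrait_retouched.jpg")

Whichever editor you use, it’s doing some version of this composite under the hood; the mask shape, the feather, and the sub-100% opacity are the three knobs that keep skin from looking like plastic.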
You’ll find that you can usually hide skin blemishes in a single step by simply blurring the skin. Moderation in all things, though: you really, really don’t want “Barbie skin”. When you apply the Gaussian Blur, remember to adjust the opacity away from 100%. Skin shouldn’t look completely blemish-free. And feather the mask edges, too. For males, you’ll generally use much less blur.

Make a face mask that avoids the eyes, brows, nostrils, and mouth

Perhaps the biggest improvement in most portraits is getting the skin blurred. This does not include the eyes and mouth, however.

Gaussian Blur for the skin

The Gaussian Blur can be pure magic. Again, don’t forget to allow a little of the original skin to show through. Keep the opacity around 80 percent, and use a generous “feather” for the face mask. Use a large enough blur radius to hide blemishes, but avoid making the skin look fake.

Conclusion

Portraits typically take more editing work than any other type of picture. Most pictures work just fine with “global” adjustments, without any masking at all, but pictures of people rarely look good with that treatment. A good job of portrait editing leaves the viewer with a sense that something’s different, but they can’t quite put their finger on it. Cosmetics and good lighting can certainly help portraits and reduce the retouching labor, but there’s really no substitute for skilled retouching.

#howto

  • The History of MTF50 Resolution Measurement

    I thought it might be fun to give you a little insight into how some really smart people figured out how to use computers and math to automatically measure lens resolution. Believe it or not, some of the techniques being used date back to the early 1800s!

It all began with a man named Jean-Baptiste Joseph Fourier, who was born in 1768. Fourier started looking at how you could combine different “sine waves” to approximate virtually any curve with a repeating pattern. So, what’s a sine wave?

A sine wave, using “radians”

The picture above shows the simplest sine wave, which is a “trigonometric function”. This is a function that smoothly changes value as you travel around a circle, varying between positive one and negative one. You’d call this a “single cycle”, or a wave (it sort of looks like the cross section of a water wave). If you imagine the hour hand on a clock (with a length of 1 inch) running backwards, think of horizontal as zero height (3 o’clock and 9 o’clock). The tip of the hand is at “+1 inch” at 12 o’clock and at “-1 inch” at 6 o’clock. That height is the basic sine wave function. By the way, there’s a closely related function called “cosine”. The cosine is basically the same wave, except shifted by 90 degrees relative to the sine wave, which is called a “phase shift”. Radians, by the way, are just another way of measuring rotation around a circle. “Pi” radians (3.14159) are the same as 180 degrees. Radians are used more in math and physics, because they’re a more “natural” unit of measure.

A sine wave with twice the “frequency”

Now, I’m showing you a wave with twice as many oscillations as the first one, or twice the frequency. It varies between the same values (plus one to minus one), but twice as often. More waves within the same distance is what’s called “higher frequency”. Taller waves are said to have higher “amplitude”, or higher “intensity”.

Add the two sine waves together

Fourier noticed what a weird result you can get when you add multiple waves together (a series of sine waves). He discovered that he could construct a curve shaped like almost anything he wanted, if he added enough sine waves (each with a different frequency) together. This “Fourier series” he invented (and announced in 1807) has morphed into the “Fourier transform”, and it’s used in many fields that relate, one way or another, to waves and frequency analysis. Fourier discovered that replicating functions that rise and fall more steeply requires higher-frequency sine waves. Think of the Fourier transform as a technique to break down a function into its component frequencies.

There’s a deep connection between the way nature works and Fourier’s multi-frequency wave addition. White light, for instance, is comprised of a continuous spectrum of different frequencies of electromagnetic radiation. Brighter light doesn’t mean higher frequency; it means that the waves have higher amplitude, or intensity. This also means that you don’t have to worry about how bright the light is when you try to measure resolution.

Some smart guys (Cooley and Tukey) figured out how to write algorithms that implement Fourier transforms in a very fast way, so of course they called them “Fast Fourier Transforms”, or FFTs. These FFTs get used today in lens resolution analysis programs (and in many other places, too). It turns out that Carl Friedrich Gauss, back in Fourier’s time, had actually invented the FFT, but it was lost to history and re-discovered in 1965.
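You can see Fourier’s wave-addition trick for yourself in a few lines of Python. This toy sketch (using numpy, and nothing from any real measurement program) adds up the first few odd harmonics of a sine wave and watches a square wave emerge:

    import numpy as np

    # One full cycle, sampled at 1000 points.
    x = np.linspace(0, 2 * np.pi, 1000)

    # Sum the first few terms of the Fourier series for a square wave:
    # only odd harmonics, each scaled down by its frequency.
    approximation = np.zeros_like(x)
    for k in (1, 3, 5, 7, 9):
        approximation += (4 / (np.pi * k)) * np.sin(k * x)

    # The sum already snaps between roughly +1 and -1, like a square wave.
    print(approximation[250], approximation[750])  # near +1, then near -1

Adding more harmonics sharpens the corners, which is exactly the point: steep transitions demand high frequencies.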
Many disciplines in math, science, and even photography discovered how useful Fourier transforms could be. They can transform the “spatial domain” (positional information) into the frequency domain. The next discovery, called the “inverse Fourier transform”, lets you go the other way, from the frequency domain back into the spatial domain.

The old “manual” way to estimate resolution

Take a look at the resolution chart above. This is an example of how resolution was estimated before computers and modern measurement techniques. You would photograph the chart, and then try to figure out where the converging lines turn to mush, and call that your lens resolution. On the plus side, this works as well for film as it does for digital cameras. On the negative side, you have to control how far away you are when you photograph the chart, it’s slow and tedious to use, and you only get an idea of lens performance in a couple of places in the field of view.

One thing that’s made of waves is light. The job of a camera lens is to gather and re-direct light waves onto a camera sensor. A really good lens can efficiently react to variations in light, such as the edge of a black square against a white background. If you had a lens with “perfect” resolution, then a photo of a black square against white wouldn’t show any gray zone between the black edge and the white background. Reality steps in and rears its ugly head, however, and your photo shows a small zone of gray between the white background and the black square.

Plot of light intensity between black square edge and white background

If you were to graph the light intensity as you move from the white background onto a black square, you’d notice that good lenses have a plot that drops quickly (spanning a small number of sensor pixels), whereas with poor lenses the plot drops much more gradually. If you continuously plot moving back and forth over this edge, the plot would look similar to the sine-wave patterns above, but with a steeper rise and fall than those low-frequency waves have. I mention the back-and-forth because, as you’ll recall, the Fourier series only works with repeating patterns (that is, waves).

Combined intensity plots with a flip in between, forming a “repeating pattern”

If you were to perform a Fourier analysis on this repeated rise-and-fall pattern of light cycles, you would discover that it takes higher-frequency sine waves in the series to approximate the original pattern. A good lens requires a higher-frequency set of waves than a poor lens to model its response; we measure the response in “cycles per pixel”. It generally takes several camera pixels to contain an entire dark-to-light transition cycle of a photographed edge, so the number of cycles per pixel is a value that’s less than one. Lo and behold, you now have a way to evaluate resolution in cycles per pixel, thanks to Fourier. The real magic of using these Fourier transforms is that you can perform the analysis given only a single edge.

As a side note, if your lens is out of focus, then the light-to-dark transitions are less steep. This results in a lower resolution measurement. It’s very important to have your lens in sharp focus while testing it, or else you’ll get a wrong resolution measurement. Subject or camera motion can also mess up resolution measurements, though probably more in one direction than the other.
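To give you a feel for how a program can turn an edge photo into cycles per pixel, here’s a toy sketch in Python (numpy again, with a fake, idealized edge profile instead of real sensor data). It follows the classic recipe: take the intensity profile across the edge, differentiate it, then run an FFT to see how much of each frequency is present:

    import numpy as np

    # A fake, idealized edge: intensity rising from black (0) to white (1)
    # over a couple of pixels, roughly how a decent lens renders an edge.
    pixels = np.arange(-16, 16)
    edge_profile = 1 / (1 + np.exp(-pixels / 0.5))

    # Differentiate the edge profile to get the "line spread function".
    lsf = np.diff(edge_profile)

    # The FFT of the line spread function gives the MTF.
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                       # 100% contrast at zero frequency
    freqs = np.fft.rfftfreq(lsf.size)   # frequencies in cycles per pixel

    # Find (roughly) where the contrast falls to 50%: the MTF50.
    mtf50 = freqs[np.argmin(np.abs(mtf - 0.5))]
    print(f"MTF50 is about {mtf50:.2f} cycles per pixel")

A real program like MTF Mapper does far more than this (finding the edges, exploiting the slant, dealing with noise), but the Fourier step at the heart of it really is this simple.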
Once you know the “cycles per pixel” resolution and the dimension specifications of your camera sensor, you can easily convert this number into other measurement units, like “line pairs per picture height”.

Now, imagine you photograph a series of lines, like a picket fence. A good lens/sensor combination would enable you to record a full transition from light to dark (a light “modulation”) on the edges of each picket. If the pickets get too close to each other, however, the light-to-dark transition doesn’t get to finish before the sensor sees the neighboring picket. If the transition only gets halfway to “dark” between closely spaced pickets (50% contrast), we’ll call that the limit of the modulation we’re willing to tolerate. We call this the MTF50, a “modulation transfer function” response of 50 percent. The MTF50 can have units such as “cycles per pixel”, or “line pairs per millimeter” once the size of each pixel is known.

What if you want more accurate resolution measurements? If you photograph a square with perfectly vertical edges against a white background, the best resolution measurement you can get is limited by the size of the pixels on your digital camera’s sensor. How can we measure with better precision than that? Enter the “slanted edge”. It turns out that you can put a slight tilt on those squares and then gather readings from a series of sensor rows that all cross the same edge. If you consider all of those readings in each row, you get a much better idea of the change in brightness across that edge (down to fractions of a pixel). As a matter of fact, the brightness measurement resolution is a function of the sine of the tilt angle. For instance, the sine of 5 degrees is 0.08716, which represents a fraction of about 1/12. If you tilt a square by 5 degrees, you get about 12X finer resolution (roughly 1/12 of a pixel) in the measurement of the light variation across the edge. That pesky sine function just keeps showing up all over the place.

Slanted edges with “cycles per pixel” measurements

The shot above shows part of a resolution test chart with resolution measurements drawn over each (slanted) edge in blue. Those measurements were drawn on the picture by the resolution measurement program I used, called MTF Mapper, which is explained further at this link. The measurements shown are in units of “cycles per pixel”, which relate to how many light-to-dark transitions can be recorded per pixel (always less than 1). More cycles per pixel means higher resolution. Notice that the squares (trapezoids) are oriented such that their edges either point toward the center of the lens (sagittal) or are perpendicular to that direction (meridional, or tangential). Lenses are typically better at resolving in one direction than the other, so it’s a good idea to measure in both directions. A really good lens would measure the same in either direction.

An MTF contrast plot using a D7100 camera with 3.92 micron pixels

Nearly all camera lens manufacturers give you lens “MTF” data separated into meridional (tangential) and sagittal readings. This data is typically presented as “percent contrast” at a couple of different line pitches; these are what are known as “MTF contrast plots”. These plots are a bit different from (and less informative than) the “MTF50 resolution plots” being discussed here. The plots are usually only shown at the lens’s widest aperture; the contrast gets better as a lens is stopped down (until diffraction sets in).
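As an aside, the slanted edge’s “about 1/12” figure from a moment ago is easy to check for yourself, for any tilt angle (plain Python, nothing specific to any measurement program):

    import math

    tilt_degrees = 5
    sub_pixel_step = math.sin(math.radians(tilt_degrees))  # ~0.08716
    print(f"Each sensor row shifts the edge by {sub_pixel_step:.5f} pixels,")
    print(f"or about 1/{1 / sub_pixel_step:.1f} of a pixel of extra detail")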
I have more information on these MTF contrast plots at this link.

An MTF50 resolution plot, in line pairs per millimeter units

Computer programs such as Imatest and MTF Mapper use this “slanted edge” technology. These programs are far more efficient than the old method of photographing closely spaced lines and estimating where the lines per millimeter turn to mush. You are finally able to get comprehensive resolution information covering your entire camera sensor. The MTF Mapper program, by the way, is free.

Conclusion

There’s a lot of technology that goes into modern programs that measure resolution via the “slanted edge” technique. It’s based upon knowledge that has been built up literally over centuries. If you were to manually attempt a slanted-edge lens resolution analysis like the one shown here, it would take you ages (if you could do it at all). Modern computers and algorithms, combined with digital cameras, make it a snap. I think that learning about the innovations of scientists, engineers, and mathematicians over the last few hundred years is a humbling experience.

#howto

  • Fake Focus Peak on Select Nikon Cameras

    There is a great manual-focus aid built into several Nikon models, but Nikon doesn’t seem to be aware of it. This is a tripod-only technique that involves Live View. If your camera has an “Effects” option on the Mode dial, and one of the effects is “Color Sketch”, you’re in luck.

The Effects Mode, D7100

The “Color Sketch” mode, while in Live View, will let you see the subject focus really pop as you manually focus the lens. For non-CPU lenses or manual-focus-only lenses, Live View is the only reliable way to get critical focus. If you have been relying on the little in-viewfinder “green dot” for focus confirmation on a manual-focus lens, you’re at the mercy of your camera’s built-in phase-detect calibration. Focus fine-tune calibration isn’t available for old or “dumb” lenses, and it’s not available at all on the 3000-series and 5000-series cameras. For critical focus with manual lenses (or with any un-calibrated autofocus lenses), you need to be using Live View. This mode uses feedback from the camera sensor itself for focus, so it’s always in calibration. You probably need to zoom in to really nail focus, which also means you need to be using a tripod or other solid support.

A problem I’ve always had using Live View to focus, though, is the lack of strong, obvious feedback when correct focus is achieved. Nikon has virtually ignored the camera-industry standard of “focus peaking”, where the in-focus areas of the picture get highlighted. Focus peaking makes it clear what’s in focus. Here’s where the “Color Sketch” effect comes in: in-focus areas really pop while in this mode. The downside, however, is that you don’t want to take photographs while in this mode, unless you want a cartoon sketch effect. That’s where this technique differs from focus peaking, which doesn’t affect the photograph (or movie).

The whole shooting procedure looks like this:

1. Set your Mode dial to “Effects”
2. Select the “Color Sketch” option
3. Set your aperture
4. Turn on Live View
5. Focus on your desired subject
6. Optionally, zoom in to REALLY nail focus (the magnifying-glass “+” button)
7. Switch back to your normal picture-taking mode (P, S, A, M, U1, or U2)
8. Take the shot

Some of the Nikon camera models that have an Effects mode include the D3300, D3400, D5100, D5300, D5600, D7100, D7200, and D750. I have only tried this on my D7100, and it works great. As I mentioned, you’ll need to have the “Color Sketch” mode available as an effect.

Color Sketch with an out-of-focus subject

The shot above shows what a typical subject looks like when it’s out of focus with the Color Sketch effect. There is basically nothing added to the subject, and you might even be fooled into thinking that you’re not in the Color Sketch mode. The little icon in the top-left confirms the mode is correct.

Color Sketch with an almost-in-focus subject

You can see in the shots above how the subject starts to pop when it gets close to being in focus. Now, lines are being drawn around the parts of the subject that are near the plane of focus.

Color Sketch with a fully in-focus subject

Notice the thin concentric circles near the outer rim of the clock above. They only appeared when the subject was very near perfect focus. You may think that you don’t even need to magnify the screen to get good focus using this technique, but you can see in this example that fine details may require the extra screen magnification. The cross-hatching on the “XI” above is very difficult to see unless the screen is magnified.
Try magnifying Live View, and the Color Sketch effect just gets better and better at discerning fine focus. It’s possible to increase the displayed line width (“Outlines”) and color intensity (“Vividness”) of the Color Sketch effect, too. You might find that this makes the peaking effect even more dramatic. I set my Outlines to the maximum line thickness. On the D7100, here’s how you can customize the Color Sketch effect:

1. Rotate the Mode dial to “Effects”
2. Press the “info” button
3. Rotate the rear “Main” command dial to select the Color Sketch mode
4. Point the camera at something interesting to focus on
5. Press the “Lv” button to enter Live View
6. Press the “Ok” button
7. Press “^” or “v” to select either “Outlines” or “Vividness”
8. Press “<” or “>” to alter the line thickness (Outlines) or color vividness
9. Press the “Ok” button when you’re happy with the effects

You will find that Live View is definitely more sluggish in this Effects mode, and the frame rate drops as well. If you shoot outdoors, you might want to bump up the screen brightness or get yourself a screen shade/magnifier like the Hoodman Loupe. None of these tips relate in any way, shape, or form to action shooting, of course. But if you shoot landscapes, still life, or macro work, you might find this technique valuable. By the way, the other available effects don’t seem useful for aiding focus.

Maybe someday Nikon will add focus peaking to all of their cameras (or provide a firmware update for existing models). And maybe someday I’ll get a shot of Sasquatch, too. I’ll place my bet on Bigfoot before Nikon. #howto
