
Search Results


  • Free 'Dehaze' for Lightroom 6.1 or Newer

    Adobe stopped updating the standalone versions of Lightroom long ago. A feature they added to their Creative Cloud version called “DeHaze” caused extreme feature envy for us cloudless users. Guess what? There’s a way to get this functionality in the standalone version of Lightroom after all. There’s a free Lightroom plug-in you can get from here. It works with Lightroom versions 6.1 through 6.14. The website also provides installation instructions, but I’ll summarize the process here.

Installing the Dehaze Lightroom Plug-in

Begin by un-zipping the downloaded file into the folder of your choice. Then:

  1. Run Lightroom.
  2. File | Plug-in Manager…
  3. Click the “Add…” button in the lower-left of the dialog.
  4. Navigate to the directory with the “LRHazeFilters.lrplugin” folder and select it, such as: C:\DehazeLightroomPlugin\LRHazeFilters_2_2\LRHazeFilters.lrplugin
  5. Click the “Select Folder” button.
  6. The “Plug-in Manager” window should now show “Status: This plug-in is now enabled”.

You use the Dehaze plug-in while in the Develop module. When selected, a window with a dehaze slider opens.

What’s so great about Dehaze?

This plug-in produces an effect akin to tone-mapping, except that it can also work in reverse, by adding extra haze. To get the best effect, you need to stick with RAW format when using this filter, and you need to be in the Lightroom “Develop” module. This plug-in can also produce a nice effect for black and white pictures.

Hazy Shot

Select the Dehaze plug-in

When you’re ready to try Dehaze on your photo, click “File | Plug-in Extras | Dehaze Control”. When the Dehaze dialog opens up, you can drag the control with your mouse to wherever it’s most convenient for you.

Rid the haze by a moderate amount

Drag the Dehaze slider until you see the effect you want. I generally like the effect I get at a setting around +50. The total adjustment range goes from -100 to +100.
Going a bit too far with Dehaze

Since you don’t know what’s going too far until you do it, try going beyond what looks good, and then back off from there. Keep in mind that you can always make a “virtual copy” of a shot, and then Dehaze the copy. You can combine the “Dehazed” copy with the original with Lightroom’s “Photo Merge” feature. This can work well when you like the Dehaze sky effect, for example, but you don’t like what happens to the foreground. As with most editing features, you’re given ample opportunities to abuse the controls. Please, please don’t go overboard. Don’t be guilty of giving Dehaze the same kind of reputation that HDR has gotten.

The final shot

You can see in the “before” and “after” versions that the sky is vastly improved.

As-shot. Too hazy for my taste.

Enhanced with DeHaze

I wouldn’t say that this plug-in is Earth-shattering, but it definitely has earned its place in my Lightroom bag of tricks. It’s simple to invoke and use, and it works. You don’t have to feel left out any more, even though Adobe might have abandoned you. #review

  • DSLR Focus Calibration in Record Time

    Here’s a little trick to get your lens focus-calibrated quickly. This discussion is only relevant for phase-detect focus. It’s well known that “Live View” focus is quite accurate, since the camera sensor itself is used, bypassing any mirrors and separate phase-detect sensors. This article doesn’t, of course, have any relevance to mirrorless cameras. There are some cameras that can focus-calibrate themselves, so this doesn’t apply to those cameras, either. Pay attention to that focus distance! The main requirement for this calibration trick is to have a lens with a focus distance scale on it. You should also use a tripod, if possible, to get reliable results. Pick a focus target that is easy for your camera to use, so that it will focus at the same distance each time. Do yourself a favor, and make sure that you have sufficient illumination so that your camera focus system doesn’t have to struggle and hunt to find focus. Set your camera’s aperture (typically wide-open), and then activate Live View. Now, focus your camera on the target. Note the precise distance on the lens focus scale. Repeat this exercise several times, in case your lens can only focus within a range of distances. Between each focus, manually change the focus distance, to force your camera to re-focus each time. It should only take seconds to find out the focus distance reading to use. Now, you know the “real” focus distance setting that is accurate, since it was done using Live View. Next, turn off Live View, to switch over to phase-detect focus, ideally with single (versus continuous) auto-focus. Auto-focus the camera. Note the reading on the focus scale. If the phase-detect focus distance value matches the Live View focus distance, then you’re done. If there’s a difference in the distance reading, however, then you’ll need to enter a focus calibration value into your camera. 
If the phase-detect reading was at a shorter distance than the Live View focus distance, then you’ll need to calibrate with a “+” setting, to push the focus farther from your camera. If phase-detect focused farther than Live View did, then enter a focus calibration value that’s smaller than what you had previously set (toward “-”). In this fashion, you can iterate on focus calibration settings that will quickly get your lens into perfect calibration. You don’t need to fuss with trying to view photos at high magnification to determine if you got the focus right; all you need to concern yourself with is matching the distance that Live View got. Simple. #howto
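The sign rule above fits in a tiny helper. This is just a sketch of the logic; the distance readings and tolerance are hypothetical values I made up for illustration, and the actual fine-tune step size depends on your camera.

```python
def af_tune_direction(pdaf_distance_m, live_view_distance_m, tolerance_m=0.01):
    """Which way to move the AF fine-tune value, per the rule above.

    pdaf_distance_m: focus-scale reading after phase-detect autofocus.
    live_view_distance_m: the "true" reading obtained via Live View.
    tolerance_m: an assumed margin for reading the focus scale.
    """
    if pdaf_distance_m < live_view_distance_m - tolerance_m:
        return "+"   # front-focusing: push focus farther from the camera
    if pdaf_distance_m > live_view_distance_m + tolerance_m:
        return "-"   # back-focusing: pull focus closer
    return "done"    # readings match: calibrated

print(af_tune_direction(2.8, 3.0))  # front-focus example -> "+"
```

Iterate (adjust the in-camera value, re-focus, re-read the scale) until the helper would say “done”.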

  • Make a Flash Diffuser for Free

    If this idea hasn’t occurred to you, it’s possible to make a rugged and perfectly functional flash diffuser for free. It won’t even take much time. What have you got to lose? Flash diffusers are a really good idea to soften your harsh flash output. Even with a diffuser, it’s a good idea to still tilt the flash head for a bounce flash. You can buy diffusers that are essentially a plastic bottle, but why not just try a plastic bottle instead? I kept my eye out for the right-sized bottle for my (Nikon SB-600) flash, and it wasn’t that hard to find. Your own flash may take a different size, though. It should go without saying that you want a bottle that’s white. I found the perfect diffuser at Costco: a bottle of antacid tablets. It’s pretty thick plastic, so it won’t readily break or even deform. It has a pretty neutral color, and it doesn’t block too much light. You might find some “Tupperware” that you like for the job. Use plastic that isn’t too clear, or it won’t diffuse the light enough. All you’ll need to make this diffuser is a utility knife, a permanent marker, and maybe some gloves. Please don’t blame me if you cut yourself. Use some good common sense about how you hold the bottle while you perform surgery on it. After you’ve found a properly-sized bottle and cleaned it up, use the marker to draw where you need to cut it. If you’re fanatical about this, you might want to measure the flash dimensions to mark more accurate cut lines. You’ve probably heard the motto “measure twice and cut once”… I actually cut my bottle a little small, and then shaved off a few slivers at a time until I got the fit nice and snug. When I shot a gray card, the color balance proved to be a little warm with the diffuser, so I had to use a custom white balance to get totally neutral lighting. I guess you have to pay a teeny price for using this cheap diffuser.

Bottle diffuser slips over the flash head

This is a really simple piece of gear.
You don’t need any straps or glue or anything to keep the bottle snug for a flash like mine, since it has a slight taper along the head. If you have an inconveniently-shaped flash head, you might need some Velcro to keep it snug.

The naked flash

Flash diffuser disguised as an antacid bottle

Bounce flash, no diffuser, pointed at a fairly high ceiling

In the shot above, the ceiling was too high for the bounce flash; it had little effect on the subject. Sometimes you encounter paneled or colored ceilings which can also defeat the use of bounce flash.

Bounce flash with diffuser, pointed at ceiling

The first thing that jumps out at me in the above comparison shots is the bounce flash level of illumination. With no diffuser, and a fairly high ceiling, the flash didn’t have enough effect. If you’re in a fast-paced situation, it’s a pain to have to worry about angling your flash to bounce off of the nearest wall instead of a ceiling. If the ceiling is really high and you aren’t near a wall, then you basically can’t use bounce flash. With the diffuser, enough light gets to the subject even if the ceiling/walls are too far away. Light gets through the sides of the diffuser, so the subject gets both diffuse direct light and bounced light.

Gray card, direct flash, no diffuser, with auto white balance

Gray card, direct flash, with diffuser, custom white balance

There’s a plus and a minus to using this diffuser with direct flash. The lighting is definitely more even with the diffuser. The downside is that the color balance is a little warm, so a custom white balance is required to achieve the same color temperature with the diffuser. As you can see above, the correct custom white balance completely neutralized the warm color. I actually wrote the correct Kelvin temperature (with a permanent marker) onto the diffuser in an inconspicuous place; it makes a good reminder to set the white balance when I use the diffuser.
I used the histogram of the gray card photo to see how the R,G,B peaks aligned. With a gray card, the peaks should completely overlap. I adjusted the white balance Kelvin temperature and re-shot the gray card until I got the R,G,B peaks to perfectly overlap.

Conclusion

Using a diffuser means that you will have fewer things to worry about when using a flash. It softens the light when using direct flash. It gets more illumination onto your subject if you’re using bounce flash and you’re too far from walls or under a too-high ceiling. You don’t have to be as careful about the angle of the bounce flash, either. My cheap diffuser slightly altered the color balance, compared to a bare flash, but it’s easy to correct for this. This kind of diffuser can’t substitute for the quality you get from a large umbrella reflector, but it’s not a fraction as cumbersome to use, either. I’m not trying to endorse being lazy or cutting corners, but there are times when you just can’t transport elaborate lighting gear. I have to admit that this diffuser looks a little homely, but it’s easy enough to get over that. It makes flash photography a little simpler for me than using a bare flash, and as I said before, the price is right. #howto
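The “peaks should overlap” check is easy to express in code. A sketch only, with made-up channel averages and an assumed tolerance; a real raw converter gives you full histograms rather than single means.

```python
def gray_card_neutral(r_mean, g_mean, b_mean, tolerance=2.0):
    """On a neutral gray card, the R, G, B histogram peaks should coincide.

    Inputs are 0-255 channel averages over the gray-card area; the
    tolerance is an assumption about acceptable peak separation.
    """
    spread = max(r_mean, g_mean, b_mean) - min(r_mean, g_mean, b_mean)
    return spread <= tolerance

print(gray_card_neutral(128, 127, 129))  # neutral -> True
print(gray_card_neutral(140, 128, 112))  # warm cast (R high, B low) -> False
```

Adjusting Kelvin temperature and re-shooting until this returns True is exactly the manual loop described above.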

  • Lightroom Radial Filter: The Spotlight

    There’s a Lightroom mask called “Radial Filter” that seems like something of little use. There are times, however, when it is exactly what you need. Have you ever seen something that was essentially in silhouette, and you wished you had a giant spot light to illuminate it? The “Radial Filter” may be for you.

Washington Monument with a giant fake spotlight

The shot above shows my vision of the Washington Monument, although it isn’t at all what I saw while I was there. The lighting looked terrible, with the monument looking more like a black obelisk against a nearly-featureless cloudy sky. I took the shot anyway, envisioning what Lightroom post-processing magic could do with it. This is one of those cases where I desperately needed some breathtakingly huge lighting equipment to flip the lighting ratios between the sky and the monument. Lightroom to the rescue. I don’t use it very often, but I think that the “radial filter” can sometimes be just the ticket. If a shot calls for it, there’s nothing stopping you from using multiple radial filters, either. For this shot, though, I only wanted the Washington Monument lit up.

The starting point. A throw-away shot.

The shot above shows you what I started with. The monument was a featureless dark blob. What I wanted was essentially the opposite, where the monument was lit and the sky was darker and textured. For my desired “spot light” effect, I’d typically adjust the global exposure of the shot at this point, so that the spot light added in later would have the desired brightness. In this case, the rest of the shot already had roughly the (low) exposure I wanted.

Make the shot really small, to make room for a big radial filter

I wanted to illuminate only the Washington Monument. I knew I could do it with the “Radial Filter”, but the size of the filter I wanted meant I needed to first really shrink the picture, using the “Zoom Level” adjustment. After zooming, I selected the Radial Filter mask.
The default settings for this filter are virtually never what is required, however.

Starting point for the Radial Filter Mask

I selected the Radial Filter mask, and made some initial guesses about what I would need. Since I was after a “spot light” effect, I clicked on “Invert Mask”. I increased the feathering, to make its effect a little more subtle. I initially set the exposure slider to +1, which I would then later adjust to taste.

Fit the radial filter to the subject

To better see the feathering effect of the filter, I went to Tools | Adjustment Mask Overlay | Show Overlay. I left the “red” mask color, since it would show up fine in the shot. Now, I adjusted the center, shape, and size of my overlay to be a skinny ellipse with a slight rotation to match my subject. Once the mask was set the way I wanted it, I turned off the red overlay.

Fine-tune the exposure of the subject inside the mask

Next, I scrolled up to the mask “Exposure” slider, and decided I wanted an even brighter subject. It’s easy to overdo the exposure, so be careful.

Adjust the sky: Dehaze Filter

I decided at this point that I wanted to adjust the sky and give it more drama. My two go-to choices are the Nik HDR Efex Pro and the Dehaze filter. I always start by trying the Dehaze filter, since it’s really quick to try it out and later cancel it if desired. I discuss getting this Dehaze filter for older versions of Lightroom in a previous post. In this case, I decided that the Dehaze filter was just what I needed and therefore didn’t resort to using the HDR Efex Pro plug-in.

The finished shot

You can argue all day about the honesty of using fake lighting, but I know what I like. This shot shows what I was after, and I think that the radial filter added just what was missing from the original scene. The radial filter doesn’t just apply to landscapes. You might find that portrait lighting can be vastly improved after the fact with this same technique.
As with everything, please don’t overdo it. #howto

  • How Lens Optical Stabilization Works

    The more you understand about how stuff works, the more amazing it is. I had heard long ago that lenses with optical stabilization use “gyros”, but I never gave it much thought. I had heard of “gyroscopes”, but spinning wheels aren’t what we’re talking about. The name gyro has the Greek root gyros, meaning rotation.

A typical lens optical stabilization unit (a Canon lens)

The picture above shows a stabilization unit, which moves a compensating lens group to keep the image on the sensor (and in the viewfinder) from moving. The question to ask is how the lens knows the photographer is jiggling the lens, and what to do about it. In a DSLR, it can’t use the camera sensor to help figure out image movement, since the shutter is hiding the sensor. Let’s start with something called the Coriolis Effect, named after the French mathematician Gaspard-Gustave de Coriolis. On a big scale, it’s what causes large-scale weather patterns in the northern hemisphere moving north to have an eastward velocity, and the opposite effect in the southern hemisphere. On a smaller scale, it’s the force required to keep walking in one direction as you try to move from the center to the edge of a rotating merry-go-round. When a photographer hand-holds a lens, the lens invariably starts to rotate a bit in various directions. This rotation results in the Coriolis Effect, which can be sensed and then compensated for. A really smart person envisioned a gyro design that could notice rotation by jiggling a weight in one direction and sensing a force that was perpendicular to the direction of that jiggle. As shown above, when the weight was moving up, the force would push the weight to the left. The same weight would get pushed to the right if it was moving down. The forces would all reverse when the rotation switched from clockwise to counter-clockwise. This design is known as a vibratory rate-measuring gyro.
It’s common to jiggle these weights at about 10,000 cycles per second (10 kHz). The “rate-measuring” here refers to the rotation rate, usually expressed in degrees per second. Using MEMS technology (micro-electro-mechanical systems), the miniature weight, the springs, a drive motor, and position sensors can all be built on a microscopic scale. Power requirements scale down with the size of the parts being used, so a camera battery can drive this system easily. Even with low power requirements, cameras will typically turn off the stabilization when you take your finger off of the focus button. This type of gyro design was introduced in 1991. The typical name for these units is “Coriolis vibratory gyroscope”. It finally found its way into lenses in 1995. The gyro concept shown above can only sense rotation along one axis, so two of them would be needed in a lens to handle the yaw (left-right) and pitch (up-down) axes of potential rotation. A photographer typically wouldn’t be rotating the lens along its optical axis (roll axis), so that motion isn’t compensated for. Typical hand-held rotation rates being counteracted are around ½ degree per second to 20 degrees per second. The center of rotation is roughly the rear of the camera (or the photographer’s eye).

The Analog Devices, Inc. description of their gyro design

The picture above shows a little bit more detail. The “Coriolis Sense Fingers” in the drawing are little capacitors that sense the gap distance between their little parallel fingers as the “resonating mass” shifts left or right, according to the direction of rotation of the device. The tiny signal from these capacitors can be converted into a voltage that varies according to the rotation.

What the “sense capacitors” look like in the silicon design

The Sense Capacitors (via a scanning electron microscope)

The whole silicon design of the gyro

Getting into even more detail, the picture above shows “Comb-Drives”.
These little guys are given an alternating positive and negative voltage, to force them to have a net positive or negative charge. The moving “Active Mass” has little fingers that fit in-between these Comb-Drive fingers. The active mass is alternately pushed away from or pulled toward the stationary comb drive fingers, since its electric charge is either attracted to or repelled by the comb drive fingers as their voltage is switched between positive and negative.

Comb-drive actual silicon, via scanning electron microscope

The shot above is a close-up of the little silicon comb-drive fingers. They push and pull the “active mass” to keep it vibrating. The “active mass” is suspended on silicon beam “springs” to greatly increase the magnitude of its motion when it vibrates, which enhances the signals produced by the gyro. The whole gyro needs to keep the “active mass” vibrating back and forth in its “drive direction”. The mass will get a sideways vibration (the sense direction) when the device (lens) is rotated, due to that Coriolis Effect. When a sideways vibration happens, that’s when those “sense capacitors” mentioned earlier produce a signal to indicate the device is rotating, and in which direction it’s rotating. The signal coming from the gyro is typically a low current, which is converted into a digital count that is proportional to the rotation rate. These little gyros are so small that even air becomes a problem. When they’re manufactured, they have to be installed into a package that maintains a vacuum. How small is that gyro, anyway? Small. The lens optical stabilization unit needs a separate little gyro to sense rotation about each axis being controlled (e.g. yaw and pitch). Once the stabilization unit gets the gyro signals indicating that the lens is rotating, its microcomputer commands little actuators to move the compensating lens group in the stabilization unit to counteract that rotation.
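As a rough sanity check on the physics, the Coriolis acceleration felt by the proof mass is a = 2Ωv, where Ω is the rotation rate and v the drive velocity. A sketch only; the 0.1 m/s drive velocity below is a made-up illustrative number, not from any datasheet.

```python
import math

def coriolis_accel(rotation_dps, drive_velocity_mps):
    """Coriolis acceleration a = 2 * Omega * v on a vibrating proof mass."""
    omega = math.radians(rotation_dps)        # rotation rate in rad/s
    return 2 * omega * drive_velocity_mps     # m/s^2, perpendicular to v

# Hand shake of 20 deg/s with an assumed 0.1 m/s peak drive velocity:
a = coriolis_accel(20, 0.1)
print(round(a, 4))  # 0.0698 m/s^2
```

The resulting acceleration is tiny, which is why the sense capacitors and their amplification chain have to be so sensitive.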
There are other kinds of gyro designs that are much more complicated than the “simple” one I have described. In fact, they can get mind-numbingly complex. Better-quality vibratory gyros are actually able to detect rotation rates of less than 10 degrees per hour. If that isn’t enough to bring tears to your eyes, I don’t know what will. You also probably have MEMS gyros in your smartphone. The next time you complain about having to spend extra money for a lens that has optical stabilization, just think about the technology that goes into it. And try to imagine the brilliance of the people that invented it. By the way, in-lens stabilization is generally preferred over in-camera stabilization. With a DSLR, lens-based stabilization lets you see a steadier viewfinder image and it makes it easier for your camera to focus on a non-moving target. One downside, however, is that the moving stabilization optics can make for slightly worse bokeh. A big thanks to Canon, Analog Devices, Inc. et al. for the visuals used in this article. I don’t yet have my own scanning electron microscope. #howto

  • F-stop Fun Facts

    Did you ever wonder how they decided upon camera lens f-stop numbering? Are there any other numbering schemes that could be useful? And did you know that the ‘F’ stands for “focal ratio”, which is “the ratio of the system's focal length to the diameter of the entrance pupil”?

Nikkor Noct f/1.2 (half-stop faster than f/1.4)

F-stops by the numbers

Did you know that the standard F-stop numbering scheme comes from a math sequence? Most people know that full stops are based upon doubling or halving light intensity, but not where the actual numbering scheme comes from. Now you know. If you wanted to calculate “standard” F-stop ranges, here’s what you would do:

F-stop full scale calculation: N = 2^(n × 0.5), for n = 0, 1, 2, 3, …

The progression solving the above sequence is: 1, 1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0 … You can actually go the other direction, too, if you use (-1 x 0.5), (-2 x 0.5), … for the exponent sequence above! This gets you F-stops like 0.5 and 0.707 for those lenses few mortals will ever be able to afford.

F-stop half-scale calculation: N = 2^(n × 0.25)

The progression solving the above sequence is: 1, 1.2, 1.4, 1.7, 2.0, 2.4, 2.8, …

F-stop third-scale calculation: N = 2^(n × 1/6)

The progression solving the above sequence is: 1, 1.12, 1.26, 1.4, 1.6, 1.8, 2.0, 2.2, 2.5, 2.8, …

If they ever made them, you should now be able to see how lens manufacturers could mark lenses in fourth-stop, fifth-stop, sixth-stop etc. scales, by dividing the 0.5 in the exponent by 4, 5, or 6. You’d think that could be useful for something like cinema lenses, where they like finer exposure control, but instead they go one better and offer step-less aperture control. Speaking of cinema lenses, those lenses are marked in “T” stops, where the T is for “transmission”. I think all lenses should be marked this way, because what really counts is how much light gets to your camera sensor.
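All of these progressions come from the same formula, N = (√2)^(n/k) with k = 1, 2, 3 for full, half, and third stops, so they can be generated in a couple of lines. Note that the engraved lens values round further still (1.19 is marked as 1.2, 11.31 as 11):

```python
import math

def f_stop_sequence(steps_per_stop=1, count=9):
    """f-numbers N = sqrt(2) ** (n / steps_per_stop) for n = 0, 1, 2, ..."""
    return [round(math.sqrt(2) ** (n / steps_per_stop), 2) for n in range(count)]

print(f_stop_sequence(1))  # full stops:  [1.0, 1.41, 2.0, 2.83, 4.0, 5.66, 8.0, 11.31, 16.0]
print(f_stop_sequence(3))  # third stops: [1.0, 1.12, 1.26, 1.41, 1.59, 1.78, 2.0, 2.24, 2.52]
```

Passing steps_per_stop=4, 5, or 6 gives you those hypothetical fourth-, fifth-, and sixth-stop scales, and negative n values produce the sub-f/1 numbers.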
The F-stop markings can be off by up to a whole stop’s worth of transmission, depending upon the lens design and how good the lens multi-coating is. Zoom lenses are particularly dishonest about their transmission. It doesn’t make you a better photographer, but it’s fun to know how things got to be the way they are. By the way, did you know that early camera shutter speeds had sequences like 1/400, 1/200, 1/100, 1/50… ? That actually seems more logical to me than what they have today. Also, did you know that speeds like 15 and 30 seconds are actually 16 and 32 seconds, respectively? The camera makers just lie about these values. Time them for yourself to see. #howto

  • Nikon Custom Settings Banks versus Photo Shooting Banks

    Many people have a fundamental confusion about Nikon memory banks on their “pro” model cameras. Nikon keeps the distinction as clear as mud. The “Photo shooting menu bank” is found in the “Photo Shooting Menu” (the little camera icon). You can assign up to four of these (A,B,C,D) to have unique settings in each. You can also name these banks to be something meaningful. I have settings for “Sports”, “Landscape”, “Manual”, and “Live View”, so that’s what I named them. Here’s where you save the unique setups that you configure for things like Auto-ISO, default ISO, Manual shooting mode, picture controls, shutter speed, aperture setting, etc. The “Custom settings bank” is found in the “Custom Setting Menu” (the pencil icon). What you save here are things like custom button assignments. You also get four of these (also named A,B,C,D). This is where the confusion sets in. Fortunately, you can give these banks names, too. If you give these banks different names than the “photo shooting” banks, it will help eliminate the confusion.

Both banks are added into "My Menu" for fast access

Photo shooting bank

Custom settings bank

I have the names “Focus buttons” and “Live View” in my custom settings banks. The “Focus buttons” bank saves the way I have configured my AF-ON, joystick, Fn1, Fn2, and PV buttons. These button assignments prevent focusing in Live View unless I use the touch-screen, which I find enormously irritating. The bank I named “Live View” clears the various “AF-ON plus area mode” features I assign to other buttons. Once these custom button assignments are cleared (by selecting my “Live View” custom settings bank), my “AF-ON” button can once again be used to get Live View to auto-focus. In the “Setup Menu” (the little wrench icon), there’s a “Save/load settings” option to save the shooting configuration (all shooting banks and all custom settings banks). To get at the two memory banks quicker, I assign my “Fn2” button to go to “My Menu”.
Inside “My Menu”, I added the “Photo shooting menu bank” and the “Custom settings bank”. Even if I don't select a different bank here, it's a very fast way to verify how my camera is presently configured. I still wish Nikon would just stick the “U1”,”U2”, etc. dial on all of their cameras, which is infinitely faster than menu-diving. And just try menu diving if you're half blinded by bright sunshine. #howto

  • Nikon D850 Buffer Capacity Reality Testing

    Competitor #1: Sony XQD

I tried to look up the D850 camera shot buffer numbers on the internet. All over the map. Either terrible or stunning, depending on who you ask. Nikon claims it should have a 51-shot buffer with the settings I shoot with. I felt compelled to conduct some testing of my own, using a couple of different memory card types, since I’m a natural-born skeptic. The manufacturers of those memory cards seem to completely fabricate their numbers. Going by “the specs” is a fool’s errand. So here’s my testing scenario. I set my D850 to ISO 64 (least noise and therefore smallest picture memory) with large 14-bit lossless-compressed RAW format. I used continuous-high shooting speed, which the “specifications” rate at 7 frames per second. I only populated the camera with a single memory card at a time. I shot landscape pictures in the sunshine with “typical complexity”. Noisy, complex pictures take up more memory and will therefore decrease the buffer numbers. I use a battery grip, but I just use the standard “EN-EL15a” battery in it, so no 9 frames per second for me. The first card I tested is the “Sony G-series 400MB/s write speed“ 32GB XQD card. I have read that in reality it writes at very roughly “113.84 MB/s”, according to this site, when tested in the Nikon D850 camera. This sounds like a case of “the large Sony print giveth and the small reviewer’s print taketh away”. The second card I tested is the “Lexar Professional 1000X 64GB 150MB/s” card, which the fine print states is capable of writing at 75MB/s.

Competitor #2 fits in the SD card slot.

For both cards, I formatted them just prior to testing, so that storage fragmentation wouldn’t be an issue with the timing.

Results

So here’s what I got. The Sony XQD card managed 37 shots before it hiccupped and slowed down. The Lexar card (Laxar?) got me 24 shots before slowing down. Yikes. Pretty underwhelming.
If I were shooting on an NFL sideline or an Olympic track, this camera setting probably wouldn’t be my first choice. For most other stuff, I probably couldn’t care less. I have to give credit where credit is due, however: I got a little faster than 7 frames per second. Actually, I got 37 complete (XQD) frames in 5.06 seconds, or 7.31 frames per second. I used the soundtrack timing from a video to “visualize” each shutter/mirror slap. Nikon wasn’t lying there; they were actually a bit conservative. Next, I set the D850 to 12-bit lossless compressed raw, and voila, the XQD got 200 shots at full speed! The ‘Laxar’, however, only got me 34 shots with this 12-bit setting. I could change the scene 'complexity' and brightness and get fewer shots; typically about 193 shots in 27.5 seconds (7.02 fps). For a really complex scene, I once got only 101 shots in 13.2 seconds (7.65 fps). Now you can see why people argue about the real buffer size. Addendum 9-21-2019: I got more interested in 'scene complexity' and did a lot more testing. There were times where I got an average of 43 shots in 6.1 seconds (7.05 fps) using lossless compressed 14-bit. Still not Nikon's 51 shots, but maybe there's a super-fast XQD card out there that can squeeze out the extra 8 shots. The more I test, the murkier the results... Since the D850 is all about quality, why on earth would I ever be willing to go down to 12-bit shooting? I read an article here by ‘Verm’ Sherman that changed my mind about 12 bits. He tried and tried to demonstrate how inferior the 12-bit files are, compared to 14-bit, but was unable to do so. The shots just kept looking spectacular and equal in his tests. I did some tests myself, and I have to agree; I can’t tell the difference. But I did notice the difference of about 15MB smaller file sizes, which really adds up over time.
On my own camera, I have a “Sports” photo shooting bank that uses the 12-bit lossless compressed setting, to 'guarantee' that I get the 200-shot buffer. I have a separate “Landscape” photo shooting bank that is set to 14-bit lossless compressed mode, but it’s mostly for “insurance” just in case in the future there may be displays that can possibly show a difference. Spending an extra 15MB per shot does seem like a painful insurance premium, however. I have to admit that I feel a lot less guilty about having my “Sports” mode on 12-bit, though. The quality to my eye is stunning, and there is still a ton of elbow room in the dynamic range. I can’t resist mentioning that my Nikon D500, using the exact same “Sony G-series 400MB/s write speed“ 32GB XQD card, has a 200-shot buffer, and it shoots 10 frames per second to boot using 14-bit lossless compressed (or any other setting except uncompressed 14-bit). Smokin. And verified. And no excuses. There are just so many variables when it comes to shot buffer capacity that I have to recommend that you verify yours before you try shooting that once-in-a-lifetime opportunity. You never know when the Loch Ness monster and Bigfoot might happen to show up in that forest clearing at the same instant, and you’re the only witness. #review
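A simple drain model reproduces numbers in this ballpark: while you hold the shutter down, the buffer fills at the shooting rate minus whatever the card clears. This is only a sketch; the buffer depth, file size, and sustained write speed below are my rough assumptions, not Nikon or card-vendor specs.

```python
def shots_before_stall(buffer_shots, fps, file_mb, write_mbps):
    """Estimate total shots fired before the buffer fills and the camera stalls."""
    drain_fps = write_mbps / file_mb              # shots/s the card clears
    if drain_fps >= fps:
        return float("inf")                       # card keeps up indefinitely
    seconds_to_fill = buffer_shots / (fps - drain_fps)
    return fps * seconds_to_fill                  # shots taken while filling

# Assumed: ~25-slot raw buffer, 7.31 fps, ~50 MB 14-bit files, ~114 MB/s card
print(round(shots_before_stall(25, 7.31, 50, 114)))  # 36 -- near the 37 measured
```

The model also shows why scene complexity matters so much: bigger files shrink the drain rate, which shrinks the shot count, exactly the murkiness observed above.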

  • Nikon D500 Un-cropped versus D850 Cropped Shot Comparison

    There’s been a lot of talk about using crop-sensor cameras for subjects that need that “effective focal length” increase, such as distant birds. For example, a 600mm lens has an effective focal length of (600 x 1.5) or 900mm on a DX body. This may be close to the truth if both the full-frame and the crop sensors have the same overall resolution, but what about an FX camera that has more resolution than the DX camera, like the D850 versus the D500? Does the Nikon D500 beat the D850 if you crop the D850 picture down to the same field of view as the D500?

I decided to find out for myself just how good the D850 sensor is, and see if it can match the D500 even after cropping the D850 shots down to the same view you get with the D500. I know that the D500 can shoot faster and has a bigger frame buffer, but the D850 is no slouch, either. Both cameras are just as sensitive to light and can focus at the same speed, too. I’m not here to talk about the merits of one camera over the other; I’m only interested in knowing if I lose any quality by using the D850 and cropping the shots, compared to un-cropped D500 shots. If you like wide-angle shooting, there is of course no substitute for a full-frame camera. There are multiple “full-frame advantage” topics I could talk about, but I want to focus strictly on cropping here.

To get some answers, I set up a resolution target and then shot it from the exact same position and with the same lens; I just swapped camera bodies. After running the shots through my image analysis software, I took a look at them up close.

The shootout: D850 versus D500

I used my Sigma 70-200mm f/2.8 lens at 70mm and f/2.8 for these tests. I shot in raw format, and I didn’t post-process the shots in any way before analyzing them in my image analysis software. I should mention that I shot the chart with an exposure compensation of +0.7 stops on both cameras.
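The pixel arithmetic behind that question is easy to sketch. Using the published sensor specs for these two bodies (8256 pixels across the D850’s 35.9mm-wide frame, 5568 pixels across the D500’s 23.5mm-wide frame), cropping the D850 to the D500’s field of view leaves nearly the same pixel count on the subject:

```python
# Pixels left on the subject after cropping the D850 to DX framing.
# Sensor widths and pixel counts are published specs, not my measurements.
D500_WIDTH_PX, D500_WIDTH_MM = 5568, 23.5   # 20.9 MP DX
D850_WIDTH_PX, D850_WIDTH_MM = 8256, 35.9   # 45.7 MP FX

# Same lens, same distance: crop the D850 frame to the D500's field of view.
cropped_px = D850_WIDTH_PX * (D500_WIDTH_MM / D850_WIDTH_MM)
print(round(cropped_px))  # ~5404 px versus the D500's 5568 px, about 3% fewer
```

That 3% difference is small enough that the test below comes down to real-world sensor and lens behavior rather than raw pixel counts.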
The D500 meter consistently ended up with slightly darker images than the D850, but the image analysis software measurements are unaffected by that small difference. The shot above is the MTF50 2-dimensional resolution plot from the D500. The measurements are in units of line pairs per millimeter (lp/mm). The 20.9-megapixel D500 sensor has 4.22-micron pixels, compared to the 45.7-megapixel D850, with pixels of 4.35 microns. The resolution chart fills the frame left-to-right, and some of the chart extends outside the frame top-to-bottom.

The shot above is the D850 using the same lens at the same distance. The plot looks a little funny, because the resolution chart no longer fills the frame. The actual resolution measurements of the little trapezoids in the target chart are unaffected by the framing difference, however. There are essentially the same number of sensor pixels on each little target trapezoid for each camera.

The resolution results between the D500 and D850 look quite close. Given the different pixel dimensions between the cameras, the D500 is expected to have slightly higher MTF50 lp/mm numbers than the D850, if the little target trapezoids have the same “cycles per pixel” resolution. The resolution results are remarkably similar here; the D500 numbers are a tiny bit higher, but generally within experimental error.

D500 target center

The D500 edge measurements of the little trapezoids are in the range of 0.27 c/p to 0.32 c/p. For this sensor, a measurement of 0.31 c/p is equivalent to an MTF50 of 73.3 lp/mm. The peak value of 0.32 equates to an MTF50 of 75.7 lp/mm, which matches the maximum value shown in the D500 MTF50 chart above.

D850 target center

The D850 edge measurements of the little target trapezoids are in the range of 0.28 c/p to 0.31 c/p. For this D850 sensor, a measurement of 0.31 c/p is equivalent to an MTF50 of 71.6 lp/mm. At least in the lens center, then, there’s essentially no difference between the cameras.
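The cycles-per-pixel and lp/mm figures above are linked by the pixel pitch: MTF50 in lp/mm is just cycles per pixel divided by the pitch in millimeters. A quick Python sketch of that conversion (the small differences from the numbers quoted above come from rounding the pitch to two decimals):

```python
# Convert a cycles-per-pixel MTF50 reading to line pairs per millimeter.
def cpp_to_lpmm(cpp, pitch_um):
    """lp/mm = (cycles/pixel) / (pixel pitch in mm)."""
    return cpp / (pitch_um / 1000.0)

print(round(cpp_to_lpmm(0.31, 4.22), 1))  # D500, 4.22 micron pitch: ~73.5 lp/mm
print(round(cpp_to_lpmm(0.31, 4.35), 1))  # D850, 4.35 micron pitch: ~71.3 lp/mm
```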
Next, let’s take a look at the sensor right edge.

D500 right edge

D850 right edge

Comparing the cameras on the right edge, the D500 fared a little bit better on most of the target edges, but the measurements aren’t hugely different.

D500 top edge

D850 top edge

The readings between the D500 and D850 on the top of the frame are also comparable, but here I’d give the ‘edge’ to the D850 results. Without the little blue measurement values to guide me, I would have a hard time telling which shot was from which camera.

Conclusion

If I were to take a bunch of shots with both cameras, crop the D850 shots to match the D500, and then hand them to somebody to choose which was which, I’ll bet they couldn’t tell. The bottom line is that cropping the D850 shots gets me the same quality as the D500; there is no DX “effective focal length” advantage to be seen here. I have always been on a big guilt trip when I crop a shot; this is definitely going to make me feel better about myself in that regard. At least with the D850. #review

  • Sigma Focus Algorithms: Speed versus Accuracy

    Sigma lets you program their “global vision” series of lenses with their USB dock. This includes the Sport, Art, and Contemporary lenses. One of the things you can program is which autofocus algorithm to use. You get three algorithms to choose from: “Fast AF Priority”, “Standard AF”, or “Smooth AF Priority”. By assigning a different algorithm to the lens’s custom switches (C1 and C2), you can change your mind on the fly and pick the focus algorithm that fits the shooting conditions.

The “Smooth AF Priority” algorithm is primarily for video use, so I never use it (it’s the slowest focus algorithm). I’m interested in the fastest focus performance I can get, so I want to use “Fast AF Priority” whenever I can. I have already measured the speed of the “Fast AF Priority” algorithm versus the “Standard AF” algorithm, and found that the Fast algorithm is about 20 percent quicker. I used a Nikon D500 and the Sigma 150-600 Contemporary for that speed test.

This time, I thought I’d try to determine just how repeatable the focus algorithms are. If a camera/lens combination is super fast to focus but totally unreliable at getting to the correct target distance, then you haven’t really gained anything. I decided to use my Sigma 70-200 f/2.8 Sport lens for this test. I have programmed the C1 switch with the “Fast AF Priority” algorithm, and the C2 switch with “Standard AF” (“Standard” is also Sigma’s default if you don’t program the lens). I used a Nikon D850 for the tests. All of the test shots were done at 190mm and f/2.8 from a distance of 1.88 meters. This is a fairly close subject distance, but I wanted a test where I could spot even tiny focus errors.

Sigma Custom Switch (C1) settings options

The screen above shows how to access the autofocus speed options, via the “AF Speed Setting” button.
It also shows how my C1 lens switch is currently programmed, with “Fast AF Priority” and “Moderate View Mode” optical stabilization on my 70-200mm lens. All of the same options are available for the C2 lens switch.

Sigma’s available programmable AF Speed algorithms

The picture above shows the three autofocus speed selections that are available when programming a lens with Sigma’s Optimization Pro software and USB dock. You can always change your mind and reprogram the lens later if you’re not happy with a selection. Sigma already upgraded the firmware in their 150-600 lenses once, which vastly improved focus speed; if I hadn’t purchased their USB dock, I couldn’t have taken advantage of that improvement.

Focus Comparison Testing Procedures

To perform the tests, I would start by selecting the desired (already-programmed) custom switch setting. I mounted the camera on a sturdy tripod, because it’s critical to keep the camera at a fixed distance from the target. The camera was set to phase-detect autofocus, with all of the same settings I’d use for regular action photography (where I want fast autofocus). I used unsharpened raw format for the testing, although jpeg can work here if you aren’t concerned with accurate target edge resolution values.

I mounted a focus target that is designed for focus evaluation/calibration using the free MTFMapper software. The target is designed so that the (middle) camera focus sensor sees only a single high-contrast edge, and won’t be confused by neighboring details. The target is mounted at a 45-degree angle relative to the camera sensor, which makes it easy to determine what’s in focus and what isn’t. I focus on the middle of the target, where the big vertical trapezoid edge is located. When the target is rotated about the vertical, the trapezoid shape starts to look like a rectangle.
I focus with the lens wide open, so that there will be no room for doubt about where the plane of best focus ends up. This is, by the way, the same basic setup that I use to focus fine-tune my lenses at close distances. I have bigger targets for focus calibration at longer distances. To spice up the test a little, I shot the photos at a light intensity of EV 7.3, which is typical indoor room lighting, and definitely more of a challenge for a focus system than sunlight.

The Focus Target

The photo above shows what the focus target looks like. The little blue numbers on each little slanted square are resolution measurements for each measured edge. These numbers are placed there by the MTFMapper program when the photo is analyzed. I’m using a small target, with overall dimensions of just 8.5 inches tall by 9.5 inches wide, plus some whitespace around that. I want small squares so that I can discern very small focus errors. The “large” vertical target edge I focus on is just 2 inches tall, and each little square is just a quarter inch (6.4mm) on an edge.

Since the test shots are done with the target rotated by 45 degrees, the little squares in front of and behind the large black target edge go quickly out of focus, and have a correspondingly low measured resolution number. Ideally, the highest resolution measurement will be on the large vertical edge in the middle of the shot, since that’s where the focus sensor I’m using is aimed. The little squares that line up with that large vertical edge should have a similar resolution number (assuming the camera sensor is aligned parallel to the chart).

I start by manually shifting the focus well away from the middle of the target, then press my “AF-ON” button to initiate autofocus. If all goes well, the camera will of course focus perfectly on the large vertical edge in the middle of the field of view. The resolution reading (the little blue number on the edge) should be highest on that same edge.
I repeat this procedure over and over; each time, I de-focus the lens, press the AF-ON button to re-focus on the target edge, and then take the shot. Reality rears its ugly head, however. The resolution measurements will show where the lens actually ended up focusing. If you have quality equipment and have properly calibrated the focus “fine tune”, the best focus should at least be “near” the desired focus distance. The camera’s phase-detect sensors tell the camera when focus is “good enough”, and the camera then tells the lens to stop focusing. If you shoot in really dim lighting, you may experience focus-hunting; use lighting bright enough that your camera doesn’t have to struggle during this test.

This test, then, evaluates the range of distances where focus ended up, first using the “fast” autofocus algorithm (C1 switch), and then using the “standard” autofocus algorithm (C2 switch).

Examining the focus target up close

In the shot above, I had turned the focus target upside-down, so that the right side of the target is rotated away from me. As you can see, the zone of sharp focus is really narrow. In this shot, the focus was perfect, and the little squares aligned above and below the large vertical edge have the highest resolution numbers (0.18 cycles per pixel).

You might notice that your camera will tend to focus too near if you start your focus distance setting in front of the target. As soon as the camera thinks focus is “good enough”, it stops the focus action. If you start from the far side of the target, the focus can tend to be too far (once again, it entered the “good enough” zone and stopped). Keep this in mind when performing focus fine-tune calibration; do one set of shots starting focus nearer than the target and another set starting beyond it, to verify your camera’s focus behavior. My Nikon D850 doesn’t suffer from the stop-focus-too-soon problem, no matter whether I focus near-to-far or far-to-near.
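Once a run of shots is finished, the misses can be summarized with basic statistics. Here’s a minimal Python sketch of that bookkeeping; the error values are hypothetical stand-ins for illustration, not my actual measurements:

```python
# Summarizing signed focus misses (in mm) from a repeatability run.
# The errors_mm values below are made-up examples, not real test data.
import statistics

errors_mm = [1.2, -0.8, 2.5, 0.4, -3.1, 1.9, -1.5, 0.0, 2.2, -2.7]

mean_abs = statistics.mean(abs(e) for e in errors_mm)  # average miss size
worst = max(abs(e) for e in errors_mm)                 # biggest miss
bias = statistics.mean(errors_mm)                      # front/back focus tendency
print(f"mean |error| = {mean_abs:.2f} mm, worst = {worst:.1f} mm, bias = {bias:+.2f} mm")
```

The sign convention (negative for front-focus, say) is up to you; the `bias` figure is what tells you whether a fine-tune adjustment is warranted, while `worst` and `mean_abs` describe repeatability.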
Test Results

I couldn’t detect any difference in the tendency to miss focus between the Standard and Fast autofocus algorithms. I did half of the tests starting focus nearer than the target and half starting beyond it; it didn’t alter the results. I didn’t have a single focus miss of more than 7mm at the 1.88-meter target distance, no matter which focus algorithm was chosen. I shot about 100 tests overall, to best determine “average” focus behavior. Never make a focus determination on the basis of a single shot; this is one of those “statistical” things. With either focus algorithm, the focus was on average within 3mm of perfect.

I had previously run this same testing procedure on my Sigma 150-600 Contemporary lens. I didn’t see any accuracy or repeatability problems from using the Fast algorithm instead of the Standard algorithm on that lens, either. This doesn’t, of course, guarantee that all of Sigma’s lenses behave this well. Always “trust but verify”.

Here, then, is a case where you get it all: speed, repeatability, and accuracy. If there aren’t any focus repeatability differences between the Fast and the Standard algorithms, then why would you choose the slower Standard algorithm? I have kept my C2 switch programmed with the Standard algorithm as a sort of insurance policy, but I haven’t needed it yet for general photography. It may be that in extremely dim lighting the Standard algorithm is more reliable, but I haven’t tested that. I’ll leave that task to the reader, as they say.

I have tried to describe my test procedures in painstaking detail, in case you want to verify your own Sigma lens/camera combination. The autofocus algorithm choices, not to mention all of the other programmable options, are of course unavailable to you if you don’t get the Sigma USB dock. For me, the ability to customize my Sigma lenses using their dock has made all the difference. #review

  • Flashpoint Wave Commander Remote Shutter Intervalometer Review

    If you have to deal with long exposures or image stacking, here’s a gizmo you might be interested in. The Flashpoint Wave Commander can control taking a long series of photographs. You get to specify how long to wait before taking the shots, the shot duration, how many shots to take, and the delay between shots.

Flashpoint Wave Commander

You can see the plug-in cord for the camera. This part is what you can replace to fit other camera models. Use the multi-direction control and its “set” button to program it. The Flashpoint shutter release button is the big round button shown on the left.

Connected to camera’s 10-pin plug

The Flashpoint connects to your camera’s remote control input plug (e.g. the 10-pin plug on my Nikon D500 and D850). It’s modular, so you can buy cheap (about $8) separate plugs to fit many Nikon, Canon, Sony, Samsung, Matsushita, Pentax, Olympus, and Panasonic cameras.

I endured a lot of tedious photography of things like star-scapes and infrared landscapes using my watch to monitor shutter times from around 15 seconds to 4 minutes. I finally got smart and got this unit. This intervalometer lets you specify how long to wait before taking any shots, how long each exposure should be, how many shots to take, and how long to wait between shots. It’s really easy to set up, and it remembers your settings for next time, unless you turn it off. It has a beeper, if you want sound, and also a screen backlight for night photography. A pair of AAA batteries powers the unit. You can set any of the times from 1 second through 100 hours, and you can take from 1 to 399 shots in a sequence. Just press its little start/stop button to start the program running.

To configure your camera for the intervalometer, you need to be in “manual” mode, with the shutter set to “bulb”. Set “single-shot” mode, also. Make sure you’re in “release-priority”, so that the camera won’t freeze if it isn’t in focus.
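Given the four programmable values, the total length of a sequence is easy to predict before you walk away. Here’s a small Python sketch of that arithmetic; it’s my own back-of-the-envelope formula (assuming the gap only occurs between shots, not after the last one), not anything from the Flashpoint manual:

```python
# Estimate total run time of an intervalometer program.
# Assumption: the inter-shot gap occurs between shots only.
def session_seconds(start_delay, exposure, count, gap):
    """Initial delay, then `count` exposures separated by `gap` seconds."""
    return start_delay + count * exposure + (count - 1) * gap

# e.g. 10 s start delay, 30 s exposures, 20 shots, 5 s between shots:
print(session_seconds(10, 30, 20, 5))  # 705 seconds, a bit under 12 minutes
```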
It’s also wise to close the eyepiece shutter (or use your viewfinder blocker) so that light can’t enter the viewfinder during the exposure. Even though my Nikons have intervalometer features built in, I find this device superior. And the price is right. It is “wired” to your camera, but once you program it, you can start it and walk away until the program finishes.

You can also use this remote as a simple wired shutter release, even if its batteries go dead. If all you want is to take a photo without camera shake, just connect the unit (don’t even bother to turn it on) and press the Flashpoint’s shutter release button instead of its start/stop button. Simple.

I don’t get any money from these guys, so I have nothing to gain whether you get one or not. I just wanted you to be aware that this device exists; I really like mine. #review

  • Fix that Lens Infrared Hotspot with Lightroom

    If you have a lens that generates that dreaded hotspot in the middle of your photos when you try infrared photography, you may want to try this trick. Lightroom offers the “radial filter”, which you can use to make that hotspot disappear. Most modern lenses are quite poor at infrared photography, because manufacturers no longer take care to use internal anti-reflection coatings that are effective against infrared light. There are of course limits to how bad your lens can be, but for many lenses, you can use the radial filter to darken that hotspot and save the picture.

The dreaded hotspot in the middle of the shot

The shot above was taken with a Nikkor 18-55mm kit lens (Nikkor 18-55 3.5-5.6 GII DX VR) that most websites report as “good” for infrared photography. I used an 850nm infrared filter and took the shot at f/11. The picture looks ruined to my eye, due to that pesky hotspot. Let’s take a look at what Lightroom can do to rescue the shot.

Configure a radial filter to fix that hotspot

As shown above, select the radial filter and click the middle of the hotspot in the picture. Drag the mouse to get the desired diameter for the filter circle to surround the spot. Make sure to click “Invert Mask” so that the filter affects the interior of the circle. Set the feathering amount, so that the edges of the filter circle blend into the background. You might also want to temporarily set the following:

Tools | Adjustment mask overlay | Show overlay

This command lets you see your mask, and it’s quite helpful while you are adjusting the “Feather” amount. After you’re done, select “Hide overlay”. There’s also a “Show Selected Mask Overlay” checkbox below the image to turn the mask on and off. Lightroom also lets you change the mask color, if you find it too difficult to evaluate the effect using the default red.

Fine-tune the radial filter

Decrease the exposure value until the hotspot is darkened to match its surroundings.
When you’re happy with the mask settings, click “Done”. Go ahead and perform your usual edits after you’re finished with the radial filter. With the infrared filter I used, I usually prefer to convert the shot to black and white.

The hotspot is gone

Finished shot

As you can see above, the hotspot is basically gone. I converted the shot to black and white, which I almost always do with this particular 850nm IR filter. The plug-in Silver Efex Pro 2 can be very helpful for black and white work, by the way.

You have to be careful not to over-expose the shot to the point where the hotspot reaches the “clipping” region in any of the R, G, or B color channels. At that point, you have to admit defeat; the shot’s not recoverable.

I have a few lenses that are so-so for infrared. There’s a mild hotspot in each of them, particularly when I stop the aperture down beyond about f/5.6. This simple trick can save shots that I’d otherwise send to the trashcan. #howto

bottom of page