HDR, which stands for High Dynamic Range, is a dirty word to some photographers. I think they have been overly influenced by bad early uses of the technique. It can be an excellent tool for certain kinds of images.
First, though, what is dynamic range? Dynamic range is a measure of the span between the lowest level signal that can be used and the highest level. In most electronic systems the high end is limited by the point where the signal starts to clip or distort. The low end is limited by the point where an unacceptable amount of noise intrudes. For photography it is that range from the darkest value that is usable to the brightest value that doesn’t clip to pure featureless white.
Modern digital sensors are far better than the ones in early digital cameras. High-end sensors now are rated at between 13 and 15 stops of dynamic range. That is incredible. Early sensors had maybe 5–6 stops.
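Since each stop represents a doubling of light, those stop ratings translate into linear contrast ratios. A minimal sketch of that arithmetic (the function names here are my own, and the numbers are illustrative; real-world sensor ratings depend on the test methodology):

```python
import math

def stops_to_ratio(stops: float) -> float:
    """Convert dynamic range in stops to a linear contrast ratio (each stop doubles the light)."""
    return 2.0 ** stops

def ratio_to_stops(ratio: float) -> float:
    """Convert a linear brightest:darkest contrast ratio back to stops."""
    return math.log2(ratio)

# A 14-stop sensor spans a 16384:1 brightest-to-darkest ratio...
print(stops_to_ratio(14))
# ...while an early 6-stop sensor spans only 64:1.
print(stops_to_ratio(6))
```

That gap, from 64:1 to over 16000:1, is why modern sensors can hold both sky detail and shadow detail in scenes that would have been hopeless in one early-digital exposure.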
But like many things, the numbers are misleading. It is not that the camera makers lie, just that they do not spell out what the numbers really mean. So my sensor may technically have 14 stops of range, but I cannot use all of that without cost.
If you want to jump in to a little more technical depth, check out this article.
There is this problem called noise. It is worse at the dark end of the exposure range. We call what we do "digital photography", but the reality is that a significant portion of it is based on analog signals. The information coming from the sensor is analog, and it has to be amplified and digitized before it becomes digital data. Electronics, even the wonderful systems available now, have a certain level of noise in their analog circuits. It is not a design fault; it is basic physics that cannot be entirely eliminated.
So when we capture an image that has a wide range of brightness values, it needs to be processed a lot in order to make a good print or even a good image for social media. A lot of this processing involves boosting the dark values to a more usable level.
But, the darkest values are close to the noise level of the electronics. So boosting them also boosts the noise. You have seen this when you brighten an image a lot and notice it looks very grainy and even blocky.
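A toy illustration of why that happens (this is not a real sensor model, just a sketch: a dark "signal" sitting close to a fixed noise floor). Multiplying the shadows up brightens the noise exactly as much as the signal, so the signal-to-noise ratio does not improve at all:

```python
import random

random.seed(0)
noise_floor = 2.0       # fixed read-noise amplitude (arbitrary units)
shadow_signal = 5.0     # a dark pixel, barely above the noise floor

# Many noisy readings of the same dark pixel.
samples = [shadow_signal + random.gauss(0, noise_floor) for _ in range(10000)]

def snr(values, true_signal):
    """Signal-to-noise ratio: true signal level over measured noise (std dev)."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return true_signal / var ** 0.5

gain = 8.0              # a 3-stop shadow boost applied in post
boosted = [v * gain for v in samples]

# The boost scales signal and noise together, so SNR is unchanged.
print(round(snr(samples, shadow_signal), 3))
print(round(snr(boosted, shadow_signal * gain), 3))
```

The two printed values are the same: the boosted shadows are brighter, but they are exactly as noisy relative to the signal as before, which is the grainy, blocky look you see.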
Enter HDR as a technique for mitigating the problem. HDR software takes several exposures, usually referred to as an exposure bracket, and combines them into a single image with a compressed dynamic range. Typically 3 exposures are used: one overexposed to make sure shadow data is good, one at the correct nominal exposure, and one underexposed to get all the highlight data.
In combining this data, the software can select the highest-quality exposure value for each pixel. It uses sophisticated algorithms to "compress" the dynamic range. That is, it makes the brightest areas less bright and the darkest areas less dark. I could not explain the exact algorithms used.
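The core idea can be sketched very simply. This is my own toy version of the per-pixel blending step, not any specific product's algorithm: weight each bracketed exposure by how well-exposed it is (close to mid-gray, away from clipping), then blend. Real HDR software also aligns the frames, works in linear light, and tone-maps the final result.

```python
def fuse_pixel(values):
    """Blend the same pixel (0.0-1.0 scale) taken from each bracketed exposure.

    Each exposure is weighted by how close its value is to mid-gray,
    so clipped highlights and crushed shadows get almost no say.
    """
    weights = [max(1e-6, 1.0 - abs(v - 0.5) * 2.0) for v in values]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

# The same scene pixel in a 3-shot bracket: overexposed, nominal, underexposed.
bright_sky = [1.0, 0.97, 0.72]   # near-clipped in the brighter frames; leans on the dark one
deep_shade = [0.41, 0.12, 0.02]  # nearly black in the darker frames; leans on the bright one

print(round(fuse_pixel(bright_sky), 3))
print(round(fuse_pixel(deep_shade), 3))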
This sounds great. What is the problem?
There is actually little problem with HDR as a concept. The problem is that when it first became popular, many practitioners abused it, applying it in a heavy-handed way. Images with the dreaded "HDR look" were obvious and often scorned. The HDR look is an over-compressed image with few real highlights and few real shadows. Everything has a bland sameness to the tonal range.
The look was rightly scorned by "serious" photographers, and it tarnished the technique as a whole. That is unfortunate, because HDR is great for some things.
When to use it
HDR can create images that could not otherwise be made and it doesn’t have to be obvious. If a scene has extremely high contrast then HDR is often the only means to get the results we want.
Way back in the olden days we had to use graduated neutral density filters in front of the lens to darken the brightest areas, usually the sky. This would pull the dynamic range down to a reasonable range to capture in one exposure. It was the “analog” equivalent of HDR. Of course, this involved adjusting the exposure to try to anticipate the final capture range. It was tricky, but it was the only way to do it.
Now with HDR, no one I know uses graduated neutral density filters except the remaining film photographers. Except in one case.
HDR has one Achilles' heel: subject movement. HDR software has a very hard time building a good result from an action scene.
If only some small parts are moving, like grass or leaves shifting with the wind, the HDR software may use "de-ghosting" algorithms to try to work around the movement. If you are trying to photograph a high-contrast action scene, like a car race, good luck. You probably will not be able to apply HDR because there is not enough correlation between the different exposures.
This is an HDR image. Trying to create an image with the direct sun in it and at the same time preserve the deep shadows in the mountains wasn’t going to work in one exposure. The HDR software was able to pull it all together.
I don’t think this looks like the bad old “HDR look”. What do you think?