Dodging and Burning

Classic Rocky Mountain black & white. This image exhibits strong use of traditional dodging and burning.

I have mentioned dodging and burning before, but usually in the context of black & white images. These days the techniques are much more general than that, and every photographer should know them.

History

We usually think of dodging and burning as something associated with black & white photography, because that is where they were invented and first applied. Ansel Adams and the masters before him used dodging and burning extensively to achieve the artistic look they wanted.

The technique has its roots in the chemical darkroom. Photographers discovered that during the sometimes minutes long exposure of a print, they could change the tonal values of the print by withholding or adding light to selected areas.

Remember that these black & white processes were built around negatives. That is, the more light the print material receives, the darker the area is; the less light it receives, the lighter it is. In the limit, no light at all gives the white base of the paper.

Hence the origins of the names. The printer (a person creating a darkroom print) might use a small tool, usually a circular or oval shaped piece of paper on a stick, to shield a region from some of the light. This holding back some light was called dodging. It made the dodged region of the print lighter. The printer could also add light to a region, usually by cutting a small hole in a sheet of paper and using it to shield everything except the hole from the light. This was called burning. It made the region receiving extra light darker.

In today’s digital processing, the terms are archaic. I remember them by thinking that burning sounds like it would make things darker. They might better be called just lightening and darkening. In my Lightroom process, I call these layers just “light” and “dark”.

What are they now?

In the more general sense, dodging and burning are a means of selectively changing the tonal intensities or other properties of regions of an image. We can do this in great detail now and it is not at all limited to black & white images.

I am fairly confident in saying that nearly every print you see from a professional fine art photographer uses dodging and burning. The artist may spend hours tweaking the relationships. It is so easy now, and we have so much control relative to the chemical darkroom days, that it would almost be foolish not to. It would be passing up a great opportunity to enhance the visual experience for your viewers.

Digital post processing

Virtually all software tools that photographers use have the ability to selectively adjust tones in regions. The different tools may use their own names for it, but they all do about the same thing. I will discuss Lightroom Classic and Photoshop since I am familiar with them.

Since we edit on a computer using software tools, we are no longer limited to it being a real-time “performance” in the darkroom. Artists back in the day had to repeat the lengthy dodging and burning process for each print. Now we can do it once to create our “digital negative”. Editing becomes a pleasant creative process we can enjoy in our office with a nice glass of wine and some relaxing music playing.

And because we are no longer limited to black and white and chemical processes, the range of what we can adjust is greatly increased. We use the same techniques to selectively adjust colors and sharpness and contrast. It is even almost trivial to remove distracting elements.

It’s a great time to be a photographer!

Lightroom Classic

Ah, a marketing blunder by Adobe. Renaming “Lightroom” to “Lightroom Classic” was an affront to photographers and a thinly disguised attempt to push most users to the (reduced capability) online version. Thanks. Now that I have that off my chest, let me just say that I will call the product just “Lightroom”. Know that I mean the desktop version where I have all my images stored locally.

That out of the way, Lightroom is a fantastic product that is vitally important to a large percentage of photographers. It is where we store and catalog and search for and edit our image library.

In addition to everything else it does, Lightroom has very capable dodge and burn tools and they are being enhanced all the time. At the time I am writing this, Lightroom version 12 was just released. It adds some significant new features.

Lightroom has several selection tools for dodging and burning and general editing. They are called the linear gradient, the radial gradient, the brush, and the color and luminance ranges. In addition, there are “AI-based” features to aid in selecting the sky, the subject, people, and objects.

The purpose of all these tools is to select a certain region of an image to modify. Once we have a selection there is a range of editing that can be applied, such as exposure, contrast, texture, clarity, dehaze, temperature and tint adjustments, saturation, and sharpness. This gives us a very fine degree of control of the look of our image, down to arbitrarily small regions. And as a wonderful bonus, all adjustments in Lightroom are non-destructive. Everything can be modified or rolled back as much as we want, even all the way to the original image.

Photoshop

Lightroom gets more capable all the time and is used as the exclusive editing tool for many photographers. But Photoshop is the granddaddy, the patriarch. While Lightroom makes it easy to do a lot of things, Photoshop does not restrict us from doing anything. We can mash, bend, distort, replace and modify any of the pixels in an image. You can combine multiple images together. You just have to know how.

Adjustment layers with masks are a primary means of local adjustments. These layers can be used to do traditional dodging and burning adjustments. There are even tools in the Photoshop tool bar that do dodging and burning, but I would not suggest using them, since they directly modify pixels. The ability to use a non-destructive workflow is important in Photoshop. At least, it is important to me. Some people disagree. Do whatever works best for you.

There are probably 2 main ways to do dodging and burning in Photoshop: 2 curve layers or 1 overlay layer. The first uses 2 curves adjustment layers, one set to lighten about a stop and the other set to darken about a stop. Each has a black mask. By painting in white areas in one of the masks we can selectively lighten or darken.

The method I more often use is to create a layer filled with 50% gray and a blend mode of Overlay. Painting lighter than 50% gray on that layer lightens the image; painting darker than 50% gray darkens it. I like this better because it is only one layer and it is more intuitive to me to use white to lighten and black to darken.

Either method is easily alterable and non-destructive. Each can be set up by a simple Photoshop action.
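The 50% gray method works because of how the Overlay blend mode combines layers. Here is a minimal sketch of the standard Overlay formula in Python (the function name is mine; a real editor applies this per channel across the whole pixel array):

```python
def overlay(base, paint):
    """Standard Overlay blend for values normalized to 0.0-1.0.

    Painting exactly 50% gray (0.5) leaves the base pixel unchanged;
    lighter paint lightens it (dodging), darker paint darkens it (burning).
    """
    if base < 0.5:
        return 2 * base * paint
    return 1 - 2 * (1 - base) * (1 - paint)

mid = 0.45                 # a midtone in the base image
print(overlay(mid, 0.5))   # 0.45 -- 50% gray is a no-op
print(overlay(mid, 0.7))   # brighter than the original: dodging
print(overlay(mid, 0.3))   # darker than the original: burning
```

This is why the one-layer method feels natural: the 50% gray fill is invisible until you paint away from it in either direction.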

It has been edited

So in today’s photography world, assume any image you see has been edited – a lot. It is easy. It makes our images better. We are making art, not documentary.

There are photographers who think any modification of an image is wrong. They are, of course, free to feel that and act on their beliefs. I feel sorry for them. They are severely limiting their artistic potential. And they are probably “stretching the truth”. They do some color and contrast correction. Maybe a little dodging and burning and vignetting. Take out an errant twig sticking in from the side. Be skeptical when someone tells you an image has not been modified. What is the limit of “purity” vs. “artifice”? Who sets the rule? Why should there be a limitation?

Dodging and burning and related transforms have been used since the early days of photography. Masters like Ansel Adams would never have become famous without them. That is why it took many hours to print an Ansel Adams print. Most people would say it was worth it.

If you are doing photography today, I believe you need to master dodging and burning and all the related tools we have to work with now. The tools are there for us to use to make our images better. The concepts are timeless, only the technology changes. The editing controls are there because we need to use them to achieve our vision for our images. Not using them is like tying one hand behind your back. Maybe it makes a statement, but it artificially limits you for no good benefit.

How Big Can I Print It?

A VERY low resolution image (3 MPix) that would print surprisingly well

One of the things we have to wrestle with when we want to make a print is how big can I print this image and get good results? And how large should I print it? There is a lot of advice out there. Some of it is good.

Film vs. Digital

Virtually all images have to be scaled up for printing. The print you want to hang on your wall is many times larger than the sensor or piece of film you start from. Hardly any of us are shooting 8×10 negatives these days. Even if we are, we still usually want to make larger prints.

The technology has changed completely from the film days. Enlargement used to be optical. By adjusting the enlarger lens and the distance from the film carrier to the print surface, the image was blown up to the desired size. If the lens is good, it faithfully magnifies everything, including grain and defects. If the lens is cheap, it enlarges and introduces distortion and blurring.

Digital enlarging is a totally different process. A digital image is an array of pixels. My little printer at my studio likes to have 300 pixels/inch for optimum quality. So if I want to make an 8×10 print and I have at least 2400×3000 pixels, it will print at its best quality without changing a thing. Digital enlarging is a matter of changing the number of pixels.

Digital enlarging

But usually I want to print a larger size than the number of pixels I have. Here the digital technology gets interesting. And wonderful. Going back to my example, if I want to make a 16×20 print and maintain best quality, I would have to double the pixels in each dimension. It would have to go to 4800×6000 pixels.
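Since this is just arithmetic, it is easy to script. A minimal sketch in Python (the function name is mine):

```python
def pixels_needed(width_in, height_in, ppi=300):
    """Pixels required to print at the given size and resolution."""
    return (round(width_in * ppi), round(height_in * ppi))

print(pixels_needed(8, 10))    # (2400, 3000)
print(pixels_needed(16, 20))   # (4800, 6000) -- 2x in each dimension
```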

Photoshop has the ability to scale the number of pixels in your image. There are several algorithms, but the default, just called “Automatic”, usually does a great job. Here is the difference from film: software algorithms are used to intelligently “stretch” the pixels, preserving detail as much as possible and keeping smooth transitions looking good. Lightroom Classic has similar scaling for making a print, but it is automatically applied behind the scenes. Smoke and mirrors.

The result is the ability to scale the image larger with good quality.

Print technology

In a recent article I discussed a little of how an inkjet printer makes great looking prints using discrete dots of ink. There are other technologies, such as dye sublimation or laser writing on photosensitive paper, but they are far less used these days.

It should be obvious, but to make a really big print, you need a really big printer, at least in the short dimension of the print. Really big printers are really expensive and tricky to set up and use. That is why most of us send large prints out to a business that does this professionally.

Why do I say the printer has to be big in the short dimension of the print? Past a certain size, most prints are done on roll feed printers. They have a large roll of paper in them. Say you have a printer that prints 44″ wide. The roll of paper is 44 inches wide and many feet long.

We want to take our same 8×10 aspect ratio image and make a 44×55 inch print. If it were film, we would need an enlarger with at least a 44×55 inch bed and a cut sheet of paper that size. But an inkjet printer prints a narrow strip at a time across the paper. The heads move across and print a narrow strip 44 inches long, the printer advances the paper a little, and it prints the next strip, continuing until it has covered the entire 55 inch length. Then the printer automatically cuts off the print.

But if we naively follow the recommendations for optimum quality, we have to scale our poor little 2400×3000 pixel image up to 13200×16500 pixels. Even the best software algorithms may introduce objectionable artifacts at that magnification.

Viewing distance

Maybe we don’t have to blindly scale everything to 300 (or 360) pixels/inch.

A key question is: at what distance will the image be viewed? Years of studies and observation led to the conclusion that people are most comfortable viewing an image at about 1.5 to 2 times the image diagonal length. This lets the natural angle of the human eye take in the whole image easily. For the example we have been using of the very large print, people would naturally choose to view it from about 105 to 140 inches.

Along with the natural viewing distance there is the acuity of the human eye. I won’t get into detail, but the eye can resolve detail at about 1 arc minute of resolution (0.000290888 radians for the nerds). Simply, the further away something is, the less detail we can see.

Going through the calculations, if our audience is viewing the large print from 1.5 times the diagonal, it only has to be printed at 33 ppi! Finer detail than that cannot be seen from that viewing distance.
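The calculation itself is short. Here is a sketch in Python (the function name and the 1.5× default distance factor are mine, taken from the rule of thumb above):

```python
import math

ACUITY_RAD = 0.000290888  # ~1 arc minute of eye resolution

def min_ppi(print_w_in, print_h_in, distance_factor=1.5):
    """Lowest print resolution that still looks sharp from the
    'natural' viewing distance (a multiple of the print diagonal)."""
    diagonal = math.hypot(print_w_in, print_h_in)
    distance = distance_factor * diagonal
    smallest_visible = distance * ACUITY_RAD  # inches per resolvable detail
    return 1 / smallest_visible

print(round(min_ppi(44, 55)))  # ~33 ppi for the big print in the text
```

Note how the required resolution keeps dropping as the viewing distance grows; that is the whole trick behind billboard printing.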

I have heard photographers who have images printed for billboards or the sides of a large building talk about inches/pixel. It would look like Lego blocks up close, but it looks sharp from where the viewer is.

Nature of the image

This is true unless the audience is photographers. They are going to get right up to the print, as close as their nose will allow, to see every blemish and defect. 🙂 But normal humans will view it from a distance.

There are modifications to the pixels vs. viewing distance calculations depending on the nature of the image. If the image contains highly detailed structure, it will encourage viewers to come closer to examine it. If the image is very low contrast, with smooth gradations, it could be printed at an even lower resolution.

Still, printing at the highest resolution your data allows is always a good idea.

Your mileage may vary

How big of a print can you make? It depends – don’t you get tired of hearing that? It is true, though. The real world is messy and simplistic “hacks” often don’t work well. It is better to understand things and know how to make a decision.

When it comes down to it, these are great times for making prints, even large ones. My normal print service lists prints as large as 54×108 inches on their price list. I know even larger ones are possible.

How big should you print? How big can you print?

Conventional wisdom is that scaling the pixels 2x each dimension should usually be safe. My camera’s native size is 8256×5504 pixels. Scaling an image 2x would be 16512×11008 pixels. This would be a “perfect” quality print of 55×36 inches on a Canon printer. I have yet to need to print larger than that.

Given the perceptive effects of visual acuity, I am confident I could create much larger prints. Larger than is even possible by current printers. And they would look good at a reasonable distance.

A key question is who are you printing for? A photographer or engineer will be right up to the print with a magnifying glass looking at each pixel. Most reasonable people will want to stand back at a comfortable distance and appreciate the image as a whole. Who is your audience?

Learn how to scale your image without artifacts and how to use print sharpening to correct for problems. Know the perceptual effects of human visual acuity. This is part of the craftsmanship we have to learn in our trade. Given those tools, the rest is artistic judgment. With today’s equipment and careful technique and craftsmanship we can create wonderful results.

Your mileage may vary.

The image with this article is very small – 3 MPix. I would not have a problem making a 13×19 print of it. I doubt you could see the pixels.

Have you tried to make large prints? How did it go? Let me know!

Pixels, PPI, DPI

Intentional pixelation

Pardon me, but sometimes the Engineer in me has to rant. I see so much confusion about pixels and how to scale them to an output size. A pixel is just an RGB dot. How pixels are presented is up to the output driver. It is complicated at a technical level, but it does not have to be complicated for us poor users. So let’s see what pixels and PPI (pixels per inch) and DPI (dots per inch) really mean.

What are pixels?

A digital file is just a rectangular array of pixels. The term “pixel” is a contraction of “picture element”. It is the smallest dot the sensor can resolve or the smallest point of light a display can produce.

Getting to the array of pixels is complicated, since camera sensors don’t read them directly. See my article “How We Get Color Images”. Regardless of what magic actually happens, by the time you view an image on your computer monitor, it is an array of pixels. Conceptually each pixel is represented as a triplet of (Red value, Green value, Blue value). Each of the values is a number from 8 to 16 bits in size, so each value has a range of 0 to 255 or 0 to 65,535. What sizes you actually get are determined by the capability of your sensor (the dynamic range) and camera and the color space you are using.

Pixels by themselves have no physical meaning. In the main camera I use, the array is 8256×5504 pixels. That is just a count. It has no relation, by itself, to a print size or to the size of the image on screen.

What PPI should I use for display?

This is the thing that annoys me the most. I constantly see museum and gallery directors put out requirements that we have to send in electronic samples sized to 72 PPI. PPI stands for pixels per inch.

Way back in the dim distant past, computers only did text. Then Apple came along and wanted to do graphics. They did research and decided 72 PPI looked good on screen. That set an early standard, but it has not matched real displays for decades. The display on my fairly old iMac is about 218 PPI, physically. Yet the magic 72 PPI number stuck with a lot of people.

The increases in PPI density, bit depth, and speed of monitors are among the great technological advances of computers in recent years. All those pixels on screen make for very sharp and smooth images. We can see so much more.

Worse, many people have been led to believe that the PPI setting stored in an image file means something on screen. It doesn’t, and it never did: for images displayed on screen, the PPI setting has little or no meaning.

Try it

Try an experiment: take an arbitrary, fairly large jpg file of your choosing. Let’s say the filename is MyFile.jpg. Load it into Photoshop and resize, WITHOUT RESCALING, to 72 PPI. Save it as MyFile-72.jpg. Now reload the original MyFile.jpg and set it to 360 PPI, again without rescaling. Save it as MyFile-360.jpg. Rescaling (“Resample” in Photoshop’s Image Size dialog) changes the number of pixels in the file; here we just want to change the PPI setting. These 2 files now have the same number of pixels but different PPI scaling.

Now use whatever image preview application you like and view the 2 sized files. Is one of them 5 times larger than the other? On my system, they are exactly the same size on screen. Even though one is set to be 72 PPI and one is 360 PPI. They are displayed as the same size and the same resolution.

Why is this? Because the file PPI setting means nothing. The display app just looks at the number of pixels and decides how large it is going to make the image. If it is a tiny image, say 300×200 pixels, it will probably display it very small to avoid pixelation artifacts from enlarging it too much. If it is a reasonably sized file, it will just pick a good output size. It makes these choices based on the number of pixels. If the image is in a web page, the web page code determines the size the image will be.
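You can run the same experiment from a script. Here is a sketch using the Pillow library, with a generated gray image standing in for MyFile.jpg. Saving the same pixels with different DPI metadata changes nothing about the image data itself:

```python
from PIL import Image

# Stand-in for a real photo loaded with Image.open("MyFile.jpg").
img = Image.new("RGB", (1200, 800), "gray")

# Same pixels, different PPI metadata.
img.save("MyFile-72.jpg", dpi=(72, 72))
img.save("MyFile-360.jpg", dpi=(360, 360))

a = Image.open("MyFile-72.jpg")
b = Image.open("MyFile-360.jpg")
print(a.size == b.size)                       # True -- identical pixel dimensions
print(a.info.get("dpi"), b.info.get("dpi"))   # only the metadata differs
```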

What PPI should I use for print?

Now we head into an even more confusing area, and here the confusion is somewhat justified. Printing is its own special world. The technology and the perception are very different from displays. Displays emit light; prints reflect light. The effect on the viewer is very different.

Printers don’t have pixels. Instead, we refer to the output scaling as DPI – dots per inch. This recognizes that we are now talking about physical marks on paper (or your substrate of choice).

The printer manufacturers have created tremendous confusion in customers’ minds because they overload terms. The Canon Pro-1000 printer I have at my studio has a specified print resolution of 2400 DPI horizontally and 1200 DPI vertically. Yet the optimum print resolution is 300 DPI. That is, when I am creating a print, I should set the output resolution to 300 DPI. This is bound to confuse most people: why not set the output to 2400 DPI for maximum resolution?

How inkjet printers make nice prints

We come to one of the secrets of printers. We customers want prints with crisp lines and smooth gradations of color and tone. As natural as film used to be, or as smooth as the original artwork we are copying. An inkjet printer sprays dots of ink onto the paper. A dot or no dot. At the level of a single dot, this is not smooth. Inkjet inks do not “mix” to create new colors.

So take my Canon printer as an example. Each “dot” (at 300 per inch) is actually subdivided into an 8×4 grid – 8×300 gives the 2400 dots per inch horizontally, and 4×300 the 1200 vertically. Any position in this 8×4 sub-grid can contain printer dots of any of the 12 colors. The print driver uses a magical algorithm called “error diffusion” to cover the sub-grid with a blend of printer dots of the available colors that simulates the color and intensity of the pixel to be printed there.
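Each manufacturer's error-diffusion algorithm is secret, but the classic published version is Floyd–Steinberg dithering. Here is a toy 1-bit sketch in Python. A real driver juggles 12 inks on a much finer grid, but the core idea is the same: quantize each pixel to an available dot, then push the quantization error onto the unprocessed neighbors so it averages out:

```python
def floyd_steinberg(gray, width, height):
    """Dither a grayscale image (values 0.0-1.0, row-major list)
    to pure black/white dots using Floyd-Steinberg error diffusion."""
    px = list(gray)            # working copy; errors accumulate here
    out = [0.0] * len(px)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            out[i] = 1.0 if px[i] >= 0.5 else 0.0
            err = px[i] - out[i]
            # Spread the error with the classic 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < width:
                px[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    px[i + width - 1] += err * 3 / 16
                px[i + width] += err * 5 / 16
                if x + 1 < width:
                    px[i + width + 1] += err * 1 / 16
    return out

# A flat 50% gray patch dithers to roughly half black, half white dots.
dots = floyd_steinberg([0.5] * 64, 8, 8)
print(sum(dots) / len(dots))
```

Scale this up to a dozen ink colors per sub-grid cell and you can see why the real drivers are closely kept trade secrets.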

It is mind bending in its complexity. One reason the manufacturers don’t talk about it much is that it is proprietary for each manufacturer and printer, a closely kept trade secret. The good news is that they take over this complexity so we don’t have to. And they do a very good job. So I set my image to print at 300 DPI and magic happens.

It usually doesn’t matter

In summary, PPI settings do not matter for images displayed on your monitor or on the web or sent to your social media account. If someone tells you they need their images scaled to 72 PPI, just smile and do it, but secretly know they do not know what they are talking about. Only the total number of pixels affects the size of the image. And without going into mind numbing detail, I hope this takes a little mystery out of the way printing works.

I feel better now. 🙂 How about you? Are you more or less confused?

Photoshopped

Impressionistic photography

It may be said as an insult. It may be used to shame the photographer as “not a purist”. But should it be? What is wrong with an image being Photoshopped?

History

Photography began as a medium of realism. It is said that Impressionist painting (Monet, van Gogh, etc.) was a reaction to the realism of photography. The painters took their art in a direction photography could not challenge – at the time.

Have you ever thought of traditional painting changing its direction because of photography?

The development of Impressionism can be considered partly as a reaction by artists to the challenge presented by photography, which seemed to devalue the artist’s skill in reproducing reality. Both portrait and landscape paintings were deemed somewhat deficient and lacking in truth as photography “produced lifelike images much more efficiently and reliably”.[31]

Because of the history, and the fact that everything the lens sees is recorded in detail, people tend to have an expectation that a photograph is “real”. A picture can’t lie.

Not only is this wrong in so many ways, but it is no longer a realistic expectation of photography.

Common practice

All photographs are altered from what the sensor recorded. Even if you just take that picture you snapped on your phone and post it to social media, it was altered a lot before you ever saw it. All sorts of distortion corrections, color enhancements, gamma correction, and noise reduction were done by the phone. Its algorithms are very good at making the picture look like what you expected to see. It is not the same as what the phone recorded.

All images you see in prints or any media are altered – Photoshopped. Some massively. Some just minor color correction and tone enhancements. I would never insult you by showing you an unprocessed picture. Unless it was to make a point about the kind of processing I do.

Even to do black & white these days requires a lot of image processing.

Did you know that even movies are “Photoshopped”? An obvious example is CGI. That stands for computer generated imagery. It proudly states that a lot of what you are seeing is artificially created. And we love it in big action movies.

Nearly all movies are digitally recorded now. All are processed and retouched frame by frame in addition to CGI enhancements. Even the overall color you see is completely controlled. They call it color grading. The entire look and shading of each scene is digitally processed to set the mood the director wants.

Bad Photoshopping

One thing I will join people in denouncing is bad Photoshopping. Photoshop is a very complex program to master. It can take years – and they are constantly changing it. But even so, we are artists. We have no excuse for not mastering our tools.

Not knowing how to use the tools to accomplish our vision is like a painter not knowing how to use a brush or a metal sculptor not knowing how to weld. Just using some simple sliders to make the color of an image wonky is not much of an artistic statement.

Yet I have heard well-known professionals almost brag about their limited knowledge of Photoshop. But the reality is that they know enough to do what they want. The exception is Jay Maisel. Jay is one of the greats whom I admire. He brags that he does not even have Photoshop on his computer. That is probably true, but he has full-time assistants who do have it and can make a picture look like what he tells them he wants. So, a slight exaggeration for dramatic effect.

For the others, though, who really do not know Photoshop well: spend time learning it. It will reward you by making you more efficient and it will open up new artistic possibilities for you.

Artistic expression

My work is called “fine art”. I don’t like the term, but we are stuck with it. Fine art, among other things, means it is not literal or representational. I feel free to bend and even break pixels to any degree I want to bring you the art I see.

I guarantee that any image of mine you see has been processed in Lightroom Classic and maybe Photoshop. Both great tools are well capable of altering the reality of the original frame. And I do alter them.

It can range from basic color and tone correction to removing distracting elements to compositing several images together to create something new. Anything is fair game. The more adept I get at my tools, the more I am able to use them to expand my vision. It is circular: what we find out we can do helps us to see new things to do.

Accept it

I accepted it a long time ago. My Photoshopping goes back to about version 5 or 6. In the beginning, I was mostly just doing minor corrections on my very realistic landscapes. I have fond memories of the controversy in the camera club I was in at the time when I won best of show with the first digitally manipulated image ever submitted to them.

Since then I keep widening my vision and perspective. Realism was so deeply ingrained in me that I have had to work at giving myself permission to let my imagination go free.

I’m not where I want to be yet, but I take a much more liberal view of what I can do in an image. Still, I am my own limitation.

If you are seeking “truth” in images, be careful. But if it is important to you, do some research to find out whether the image has been manipulated materially. It has been manipulated, but that doesn’t mean content has been added or deleted.

Finding truth is rare in our world. When you look at an image, assume it is art, not truth. At least, that will be true for my work. I may bring truth, but that does not mean it is realism. My images are photoshopped.

The future

In the 19th century, painting was mostly about realism. Then photography came along and took over realism. So painting moved to Impressionism and modernism and abstraction. Now digital art is perfectly capable of creating any abstract or impressionist images we desire. Where will painting go next to separate itself?

Blessing of Technology

A high resolution image with long and difficult exposure

I admit to starting to become a Luddite in some ways. I spent a long career developing and working with advanced technology, but I am starting to object to its misuse, especially by giant corporations and the government who spy and track and infringe our rights. But on the other hand, I occasionally step back and look at where technology has taken the art of photography and have to say “wow”. We live in the best of times for digital imaging. Technology can also be a blessing.

Old books

I think what precipitated this is that I have been going back re-reading my library of photography books. Many of these are by well-known experts of their day. It has been an amazing realization that many of the images in some of them would not be exceptional or even noticed today. And in some, the author’s discussion of the images was mostly about exposure and technical problems. Exposure used to be an overriding concern. We have come a very long way.

In particular, I based this on going back over the following books. This is just a fraction of my library that I have looked at recently.

  • The Fine Print, by Fred Picker, 1975
  • Taking Great Photographs, by John Hedgecoe, 1983
  • The Photograph: Composition and Color Design, by Harald Mante, 2010
  • Learning to See Creatively, by Bryan Peterson, 1988
  • Photography of Natural Things, by Freeman Patterson, 1982
  • The Making of Landscape Photographs, by Charlie Waite, 1992

Film

Most of these books were based on film photography. It amazes me the degree of technical sophistication and planning that was required. For instance, in The Fine Print, most of the discussion about each image was about the film choice, adjusting the camera tilt/shift settings, exposure considerations, development chemistry, and printing tricks.

Do you remember reciprocity failure and how to compensate for exposure degradation on long exposures? Do you know developer chemistries and how to push process a negative to increase contrast? How about dodging and burning during printing? Or making an unsharp mask?

I skipped this whole generation by shooting slide film during those days. This complex process of color or black & white developing and printing was not for me. And I’m an Engineer. I generally like complexity.

I would say that many of the results I notice in these old books are “thoughtful”. They have to be. It was generally a slow process. It could take an hour to set up for a shot and determine the exposure and anticipate the printing that would have to be done.

I am very thankful I was able to skip this. I am able to be much more spontaneous and intuitive in my shooting. My standards have become very different.

Early digital

Did you know Kodak invented digital photography? I bet they wish they hadn’t. It put them out of business.

The first prototype in 1975 was an 8 pound monster the size of a toaster. It took 23 seconds to record a blurry black & white image that had to be read by a separate, larger box.

But unfortunately for them, Kodak suffered the classic problem of large corporations with entrenched technology: they did not aggressively pursue the new technology for fear of cannibalizing their existing products. They could not convince management that they were going to be cannibalized anyway, and that they would be better off doing it themselves. This has put a lot of companies out of business. Who is your buggy whip provider?

Many years of technology improvements and innovation were required before we got an actual commercial digital camera, the Dycam Model 1, in 1990. The first practical digital camera, in my opinion, was the Kodak DCS 200 of 1992, built on a Nikon N8008s body. It had a whopping 1.5 million pixels and could do color!

Collecting pixels is not much benefit unless we can do something with them. Adobe Photoshop 1.0 was released in 1990 on Apple's Macintosh. Hard to believe there was a time before Photoshop. Or Apple 🙂

Engineering improvements

As a note on something I have observed over a long career: don't underestimate the power of engineering. The early digital components were just toys, but they gave a hint of what was possible. Most people dismissed them as impractical, predicting they would never reach parity with film. Now even the most die-hard film enthusiasts would be hard-pressed to make a good argument that film is better.

Engineers and scientists and manufacturers and marketers can do amazing things when there is a market to support them.

An anecdote will illustrate. A friend of mine at HP helped develop its inkjet printer technology. The early version was black & white, crude, and slow. Not long after the first one was made, he told me that someday I would take an 8×10 print out of one of these printers, in full color, and it would look every bit as good as a Kodak print. I politely told him he was crazy. But now, here in my studio, I have a 17″×22″ inkjet printer that makes color and black & white prints far better than commercial prints of a few years ago. Much larger printers exist, too. It stretches belief.

State of the art

Look at where we are now (mid-2022, when this was written). I shoot a 47-megapixel mirrorless camera. The lenses have better optical properties than ever before and support the full resolving power of the camera sensor.

I can shoot great quality at much higher ISO speeds than has ever been possible.

This camera has abandoned the optical viewfinder for a marvelous little video display instead. It shows a wealth of information that photographers in the 1990s and before could never have dreamed of. Or I can choose to see the information on the camera back instead.

Since the camera is mirrorless, the sensor is live all the time, continually measuring exposure across the entire frame. No more 18% gray reflected light meter to interpret. This exposure information is displayed for me in real time as a live histogram, focus tracking, and more: whatever I choose to see.

Exposure is a minor consideration most of the time. I am usually in Aperture Priority mode, and the camera's internal computers do a wonderful job of accurately determining exposure from the data they see across the whole sensor. Plus I have the histogram to check for abnormal conditions. And the sensor has such exceptional dynamic range (the range of light it can capture, from the darkest tones to the brightest) that even if I miss the exposure by a stop or two, I probably have sufficient data to correct it in the computer. Besides, I can immediately review any image to double-check it.
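To make "a stop or two" concrete: a stop is a doubling or halving of the captured light, so recovering an underexposure digitally is, to first order, a linear multiplication. A minimal sketch (the function names here are illustrative, not any camera's or editor's actual API):

```python
def stops_to_ratio(stops: float) -> float:
    """Each stop doubles (or halves) the light: ratio = 2**stops."""
    return 2.0 ** stops

def recover_underexposure(linear_value: float, stops_under: float) -> float:
    """Brighten a linear sensor value to compensate for underexposure.

    Recovering 2 stops means multiplying by 2**2 = 4. This only works
    while the shadows still hold usable data above the noise floor,
    which is exactly what a wide dynamic range buys you.
    """
    return linear_value * stops_to_ratio(stops_under)
```

A two-stop miss therefore costs a 4× brightening, which a sensor with a dozen or more stops of dynamic range can usually absorb without visible damage.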

An embarrassment of riches

I am almost embarrassed to have all this power at hand. Compared to image making of a few years ago, it is like going from Morse code to an iPhone.

I don't worry much about exposure now. I can see what I am about to capture. Even before shooting, I know from the histogram that it will be well exposed. I can immediately review any image to verify it. No doubts. No anxious wait for the developed film to come back to see if I got the shot.

This technology frees me from most of the mundane technical concerns and lets me concentrate on composition and creativity. The resolution and tonal detail in my images are the best in history. The computer processing power and tools are the best in history. Printing and display of images have never been better. The ability to transfer even huge files anywhere in the world in seconds is amazing and unprecedented.

Thank you, technology! It is a golden age of imaging.