My Favorite Lens

In general, we photographers love our equipment, especially our lenses. It is not uncommon to have a favorite one. You can always start a discussion (or fight) among photographers by talking about lenses. I would like to discuss what has become my favorite lens.

Lenses

The lens is a critically important piece of equipment to photographers. Sensors are improving dramatically and lenses have to improve with them to achieve all the sharpness and resolution the sensor can capture.

Modern lenses constantly improve in resolution. Look at DxO tests of today’s best lenses vs. the best from 20 years ago. Our lenses now enable us to capture more information and produce wall-size prints that are extremely sharp.

The lens determines the point of view that is captured in our frame. It establishes the field of view, the width of the scene we are capturing. Some of us naturally have a telephoto view. Others have a wide angle view. This refers to the lens choice we tend to select to frame our subjects. This is just personal preference. The lens is a tool to help express our aesthetic.

Many photographers feel they need a whole bag full of lenses of various focal lengths from extreme wide angle to super telephoto, with macro lenses and tilt/shifts thrown in. Because you never know what you might find. 🙂 Personally I have simplified my life a lot over time. I generally only carry a 24-70mm and a 70-200mm in my kit. But that is just me and where I am right now.

So what we want is a lens or lenses that allow us to capture all the information we want (resolution, sharpness, dynamic range) in the field of view we want. A big ask, but doable.

Digital workflow

Most of us are in the digital world now. The digital workflow is quite different from the analog workflow.

What we call the analog workflow – the film days – involved developers and enlargers and prints and lots of chemicals and time. Personally, these are days I don’t miss. I am a big fan of the power and freedom and flexibility we have now.

There is a corresponding workflow for digital processing, though. It includes loading images on our computer, viewing them, culling or grading them, processing selects with our software of choice, etc. Each of these steps is time consuming. Especially since we tend to shoot so many more frames now that they “don’t cost anything”. And each step requires software and considerable training.

The result, though, is that we spend a lot of time in front of our computer now. We probably spend more time in the digital workflow than we did in the analog workflow.

My favorite lens

What does this have to do with a discussion of my favorite lens? Well, in a sense the “lens” I use the most and that has the most impact on my work is my computer monitor.

This is where I view all my images. Zoomed in to 100% I look at individual pixels. Here is where I crop and color correct and adjust tones and contrast and saturation. This is where I view and edit the image when I convert it to black & white. When I create new images by compositing others together, that is done entirely though the monitor “lens”.

Yes, all of the things I just said are actually done through specialized software. In my case it is primarily Lightroom Classic and Photoshop. But metaphorically and to me, the monitor is the lens into the process.

Nowadays the monitor is where we view everything we do. Regardless of what the original image looked like, what I see in the monitor at the end of the edits is what counts. The result could be a complete re-imagining of the starting image.

The new primary lens

I spend more time in front of my monitor than I do outside shooting. More and more it is coming to dominate my workflow. If I lost or broke a lens, that would be terrible, but I could continue doing my art with other lenses with only minor re-adjustments to my vision. I had this experience recently. I dropped my 24-70 lens and shattered the polarizer filter. I was up in the mountains and did not have a filter wrench with me to get the jammed filter off, so I had to switch to an alternate lens. A little frustrating, but not a big deal.

But if my computer died, although I could continue shooting, I could not view or process a single image until I fixed it. Eventually things would back up to a critical point and I would have to get the computer running again. I also couldn’t select images for galleries or process images for printing. Dead in the water.

So in a sense, the focal point of the digital workflow is the monitor. That is the new lens I use to view and do most of my work. The monitor is the lens for the increasingly important part of the digital workflow.

The future

In the future will this trend increase or will we return to simpler times? What do you think?

My money would be on digital processing continuing to grow. We will trend more toward an attitude that the camera and lenses are used to gather raw material, but pictures are actually made in the computer, looking through the monitor. Increasingly, the final image may look less and less like the original capture. Better processing software opens up new possibilities. And viewers are increasingly willing to accept that photography can create something more than a true representation of reality.

So the next time you are lusting for a wonderful new lens, it might be better to upgrade your monitor instead.

Going too far

Northern Colorado Front Range. Heavily processed from original.

We often hear this as a challenge or criticism: “You’re going too far.” Meaning, back off. But as an artist, I don’t think I go far enough. I need to push myself to always be going too far. That is how we explore the limits.

Too timid

I have written about this before, but it is so important I think it deserves a refresh. In a previous article I encouraged us to go “far enough”. But now I think this is too timid an attitude. We should push “it”, whatever it is, too far.

I know I tend to have too much focus on the actual captured data of the file and what the scene really looked like. Time helps. I tend now to wait to process images until they have aged enough to let me distance myself from the experience of being there.

But still, I tend to hold back and stay too true to the original. I am learning to push beyond to create something else.

As a bonus, this short video by Matt Kloskowski might encourage you to think about editing in new ways. He does not talk much about going too far, but he shows an unconventional approach. The kind of thing I am talking about when I recommend pushing beyond the captured data.

Push it

I know I’ve said it before, but I find truth in something John Paul Caponigro said: “You don’t know you’ve gone far enough until you’ve gone too far.”

This is something I need to take to heart. The engineer in me tends to make the image look like the literal, original scene. That ends up creating record shots. Sometimes all I need is a record shot, but that is rare. I have to push it more to make the image into art. Into something interesting that goes beyond the original.

For example, I live in Colorado. If I shoot a beautiful scene in the mountains, so what? Anyone could have stopped there that day and taken the same picture with their cell phone. What sets mine apart? It often will be something more than just the literal scene. It has to rely on my interpretation of what I saw.

Be decisively indecisive

So when I suggest going too far, I am not speaking about relationships or physical safety, but my interpretation of the image. I am discovering more and more with time that images can take a great deal of manipulation.

A raw file from a good camera contains a tremendous amount of data that can be exploited. Editing in Lightroom is completely non-destructive. We can re-edit at will with absolutely no loss. Likewise, although Photoshop is inherently destructive, there are processing techniques that can be used to manipulate images with no damage and with the ability to re-edit in the future. I strongly advise learning and adopting these techniques.

Yes, I know of good artists who can say they know exactly what they want to do with an image and it is OK to do destructive edits, because they will never change their mind in the future. That is not me. Every time I revisit an image I usually tweak it some. Sometimes a lot.

Does that mean I am indecisive? Perhaps. I wouldn’t argue the point. I look at it as an evolving artistic judgment. What I see and feel in an image can change over time. So I consciously decide to use techniques to give me the maximum flexibility to change my mind later. Decisively indecisive.

Don’t worry about breaking it

Let me use Lightroom (“Classic”, because I consider it the only real Lightroom) as an example. I said that all editing in Lightroom is non-destructive. Do you really understand that?

Lightroom uses a marvelous design that always preserves the original data unchanged and keeps all edits as a separate set of processing instructions. Don’t believe me? Here is a portion of actual data from the XMP sidecar file of an image I edited today:

crs:WhiteBalance="As Shot"
crs:Temperature="5650"
crs:Tint="-14"
crs:Exposure2012="+0.50"
crs:Contrast2012="+6"
crs:Highlights2012="+19"
crs:Shadows2012="0"
crs:Whites2012="0"
crs:Blacks2012="-12"
crs:Texture="0"
crs:Clarity2012="+20"
crs:Dehaze="0"
crs:Vibrance="+5"
crs:Saturation="0"

If you are familiar with Lightroom, you should recognize these adjustments as the contents of the Basic adjustment panel. The “2012” suffix on them refers to the process version – the raw rendering engine Adobe introduced in 2012. Anyway, this is literal data copied from the XMP sidecar, an industry-standard format based on XML. It is just text. If I change a slider, the text value changes. These text values are read and re-applied when I open the file in Lightroom. The original pixel data is never altered. You cannot destroy the image by editing it in Lightroom.
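Because it is just text, you can even pull the settings out yourself. Here is a minimal sketch in Python (a real XMP sidecar is full RDF/XML, so this regular-expression approach is purely for illustration):

```python
import re

# A few lines copied from an XMP sidecar. The crs: prefix is the
# Camera Raw Settings namespace Lightroom writes its edits into.
xmp_fragment = '''
crs:Exposure2012="+0.50"
crs:Contrast2012="+6"
crs:Clarity2012="+20"
'''

# Extract each setting name and its value -- nothing is hidden, it's text
settings = dict(re.findall(r'crs:(\w+)="([^"]+)"', xmp_fragment))
print(settings["Exposure2012"])  # +0.50
```

Change a slider in Lightroom and only one of these text values changes; the raw pixel data is untouched.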

What are the limits?

There are limits, but not absolutes. If we boost the exposure too much, at some point we will introduce an unacceptable amount of noise. If we sharpen too much we will introduce artifacts around edges. We can make such a high contrast image that it cannot reproduce properly on screen or in print. We can increase saturation to the point that it is out of gamut for the screen or print.

Most of these are sort of a judgment call by the artist of what the acceptable limit is for the intended application.

But these are just physical limits of what we can do with the tools. The bigger problem, at least for me, is what am I willing to do?

It’s our mindset we need to break

I am the one who usually limits the extents of the changes I will make. I am still too much of a left-brained engineer who is constrained by my memory of what the scene actually looked like.

One way I can tell this is happening is that it is common for me to push an image further every time I revisit it. Upon seeing it again, I think, “that is nice, but I can go further”. And I do. Sometimes the image turns into something different from what I shot. I love it when that happens.

But it is a constant struggle to give myself permission to do it. I am afraid of going too far.

Knowing how the tools work and how to non-destructively edit, I should feel free to slam adjustments to the limits just to see what happens. Then back off to the “right” value for the image. I find that the “right” value tends to be higher if I have over-corrected than it is if I come up from the original. I think this is what Mr. Caponigro means when he says “You don’t know you’ve gone far enough until you’ve gone too far.”

Give yourself the freedom to go too far, then back off as necessary. I will try to do the same.

Not for everyone

I know this advice is not for everyone. I still see photographers who say they pride themselves on getting the image “right” in camera and doing minimal editing. That’s their style and their values, so good for them. But if “right” means the closest match possible to the real scene, that seems very limiting. I think we have progressed well beyond the stage of assuming that a photograph must be a true representation of reality.

At least, that is my assumption. I operate from the point of view that I am as free to creatively imagine the contents of my frame as a painter is to create on a blank canvas. Even plein air painters take a lot of liberties with what they choose to include or exclude, what colors to use, etc. Some even use the plein air session as a sketch. Later in the studio they refine and complete it according to their interpretation.

That is basically what I do. Some images require more interpretation than others, and my tools allow more freedom for manipulation. One reason I think I could never paint is there is no “undo” with paint. 🙂

Go too far

So I am discovering that what works for me is to consciously push my adjustments beyond what I first think is right. Yes, it may create a bizarre effect and I have to back it off. But I often find that the new setting I back it off to is more extreme than I thought was correct originally. Seeing the extreme helped me understand a new way to view the image.

If you do it right, you can’t damage the image. Give yourself permission to experiment.

Example

The image with this article is an example. These are the mountains and plains about 5 miles from my home. It seems like every time I go back to the image, I need to tweak it a little. And I always push it a little further. I do not back off of what I already did. I think I am nearly to “far enough”.

How Big Can I Print It?

A VERY low resolution image (3 MPix) that would print surprisingly well

One of the things we have to wrestle with when we want to make a print is how big can I print this image and get good results? And how large should I print it? There is a lot of advice out there. Some of it is good.

Film vs. Digital

Virtually all images have to be scaled up for printing. The print you want to hang on your wall is many times larger than the sensor or piece of film you start from. Hardly any of us are shooting 8×10 negatives these days. Even if we are, we still usually want to make larger prints.

The technology has changed completely from the film days. Enlargement used to be optical. By adjusting the enlarger lens and the distance from the film carrier to the print surface, the image was blown up to the desired size. If the lens is good, it faithfully magnifies everything, including grain and defects. If the lens is cheap, it enlarges and introduces distortion and blurring.

Digital enlarging is a totally different process. A digital image is an array of pixels. My little printer at my studio likes to have 300 pixels/inch for optimum quality. So if I want to make an 8×10 print and I have at least 2400×3000 pixels, it will print at its best quality without changing a thing. Digital enlarging is a matter of changing the number of pixels.

Digital enlarging

But usually I want to print a larger size than the number of pixels I have. Here the digital technology gets interesting. And wonderful. Going back to my example, if I want to make a 16×20 print and maintain best quality, I would have to double the pixels in each dimension. It would have to go to 4800×6000 pixels.
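The pixel arithmetic here is simple enough to sketch. This hypothetical helper (my own, not anything from a printer driver) just multiplies print dimensions by the target pixels/inch:

```python
def pixels_needed(width_in, height_in, ppi=300):
    """Pixels required to print at the given size and resolution."""
    return round(width_in * ppi), round(height_in * ppi)

print(pixels_needed(8, 10))   # (2400, 3000)
print(pixels_needed(16, 20))  # (4800, 6000)
```

Doubling the print size doubles the required pixels in each dimension, which is exactly why the 8×10 example above grows from 2400×3000 to 4800×6000.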

Photoshop has the ability to scale the number of pixels in your image. There are several algorithms, but the default, just called “Automatic”, usually does a great job. Here is the difference from film: software algorithms are used to intelligently “stretch” the pixels, preserving detail as much as possible and keeping smooth transitions looking good. Lightroom Classic has similar scaling for making a print, but it is automatically applied behind the scenes. Smoke and mirrors.

The result is the ability to scale the image larger with good quality.

Print technology

In a recent article I discussed a little of how an inkjet printer makes great looking prints using discrete dots of ink. There are other technologies, such as dye sublimation or laser writing on photosensitive paper, but they are far less used these days.

It should be obvious, but to make a really big print, you need a really big printer, at least in the short dimension of the print. Really big printers are really expensive and tricky to set up and use. That is why most of us send large prints out to a business that does this professionally.

Why do I say the printer has to be big in the short dimension of the print? Past a certain size, most prints are done on roll feed printers. They have a large roll of paper in them. Say you have a printer that prints 44″ wide. The roll of paper is 44 inches wide and many feet long.

We want to take our same 8×10 aspect ratio image and make a 44×55 inch print. If it were film, we would need an enlarger with at least a 44×55 inch bed and a cut sheet of paper that is 44×55 inches. But an inkjet printer prints a narrow strip at a time across the paper. The heads move across and print a narrow 44 inch long strip of the image, the printer advances the paper a little, and it prints another narrow strip, continuing until it has printed the entire 55 inch length. Then the printer automatically cuts off the print.

But if we naively follow the recommendations for optimum quality, we have to scale our poor little 2400×3000 pixel image up to 13200×16500 pixels. Even the best software algorithms may introduce objectionable artifacts at that magnification.

Viewing distance

Maybe we don’t have to blindly scale everything to 300 (or 360) pixels/inch.

A key question is: at what distance will the image be viewed? Years of studies and observation led to the conclusion that people are most comfortable viewing an image at about 1.5 to 2 times the image diagonal length. This lets the natural angle of the human eye take in the whole image easily. For the example we have been using of the very large print, people would naturally choose to view it from about 105 to 140 inches.

Along with the natural viewing distance there is the acuity of the human eye. I won’t get into detail, but the eye can resolve detail at about 1 arc minute of resolution (0.000290888 radians for the nerds). Simply, the further away something is, the less detail we can see.

Going through the calculations, if our audience is viewing the large print from 1.5 times the diagonal, it only has to be printed at 33 ppi! Finer detail than that cannot be seen from that viewing distance.
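Putting the viewing-distance rule and the 1 arc minute figure together, a short sketch (helper names are my own) reproduces those numbers for the 44×55 inch print:

```python
import math

ACUITY_RAD = 0.000290888  # ~1 arc minute, in radians

def natural_viewing_distance(width_in, height_in, factor=1.5):
    """Comfortable viewing distance: a multiple of the print diagonal."""
    return factor * math.hypot(width_in, height_in)

def min_ppi(viewing_distance_in):
    """Finest detail the eye can resolve at this distance, as pixels/inch."""
    return 1.0 / (viewing_distance_in * math.tan(ACUITY_RAD))

d = natural_viewing_distance(44, 55)
print(round(d))           # 106 -- about 1.5x the ~70 inch diagonal
print(round(min_ppi(d)))  # 33 -- far below the printer's 300 ppi optimum
```

Anything finer than about 33 ppi simply cannot be resolved from that distance, which is why huge prints tolerate such modest pixel counts.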

I have heard photographers whose images are printed for billboards or the sides of large buildings talk about inches per pixel. It would look like Lego blocks up close, but it looks sharp from where the viewer is.

Nature of the image

This is true unless the audience is photographers. They are going to get right up to the print, as close as their nose will allow, to see every blemish and defect. 🙂 But normal humans will view it from a distance.

There are modifications to the pixels vs. viewing distance calculations depending on the nature of the image. If the image contains highly detailed structure, it will encourage viewers to come closer to examine it. If the image is very low contrast, with smooth gradations, it can tolerate even lower resolution.

Printing at the highest resolution your data allows is always a good idea.

Your mileage may vary

How big of a print can you make? It depends – don’t you get tired of hearing that? It is true, though. The real world is messy and simplistic “hacks” often don’t work well. It is better to understand things and know how to make a decision.

When it comes down to it, these are great times for making prints, even large ones. My normal print service lists prints as large as 54×108 inches on their price list. I know even larger ones are possible.

How big should you print? How big can you print?

Conventional wisdom is that scaling the pixels 2x each dimension should usually be safe. My camera’s native size is 8256×5504 pixels. Scaling an image 2x would be 16512×11008 pixels. This would be a “perfect” quality print of 55×36 inches on a Canon printer. I have yet to need to print larger than that.
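The arithmetic behind that estimate is just pixels, scale factor, and target ppi; a hypothetical helper:

```python
def max_print_size(px_w, px_h, scale=2, ppi=300):
    """Largest 'perfect quality' print size (inches) after a safe 2x upscale."""
    return px_w * scale / ppi, px_h * scale / ppi

# Native 8256x5504 pixel capture, doubled in each dimension, at 300 ppi
w, h = max_print_size(8256, 5504)
print(round(w), round(h))  # 55 37
```

That is the roughly 55×36 inch “perfect” print mentioned above; the fractional inch gets trimmed in practice.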

Given the perceptive effects of visual acuity, I am confident I could create much larger prints. Larger than is even possible by current printers. And they would look good at a reasonable distance.

A key question is who are you printing for? A photographer or engineer will be right up to the print with a magnifying glass looking at each pixel. Most reasonable people will want to stand back at a comfortable distance and appreciate the image as a whole. Who is your audience?

Learn how to scale your image without artifacts and how to use print sharpening to correct for problems. Know the perceptual effects of human visual acuity. This is part of the craftsmanship we have to learn in our trade. Given those tools, the rest is artistic judgment. With today’s equipment and careful technique and craftsmanship we can create wonderful results.

Your mileage may vary.

The image with this article is very small – 3 MPix. I would not have a problem making a 13×19 print of it. I doubt you could see the pixels.

Have you tried to make large prints? How did it go? Let me know!

Out of Gamut

Abstract image with serious gamut problems.

That seems like a strange thing to say. It’s not a phrase you hear in normal conversation. What can it mean? I have written some about how sensors capture color, but I realize I have not mentioned the gnarly problem of color gamut. Unfortunately, I have been bumping into the problem lately, so I had to re-familiarize myself with it. Some of my new work is seriously out of gamut.

What does gamut mean?

Most writers avoid this or give overly simplified descriptions. I’m going to treat you as adults, though. If you really are someone who is completely afraid of technology you might want to skip to the end – or ignore the whole subject.

The concept of gamut is really pretty simple, but you need some specialized knowledge and you have to learn some new things about the world.

I have mentioned the CIE-1931 Chromaticity Diagram before. That sounds scary, but you have probably seen the familiar “horseshoe” diagram of colors. I recommend you watch this video to understand how it was derived and what it means. This is the diagram:

CIE-1931 Chromaticity Diagram

After a lot of research and a lot of measurement, scientists determined that this represents all possible colors a typical human can see. Just the chromaticity – hue and saturation – not the brightness.

Very simply, a gamut is just a representation of what part of this spectrum a particular device can reproduce or capture.

Show me

The next figure shows the horseshoe with some regions overlaid on it.

The sRGB, Adobe RGB, and ProPhoto RGB color spaces overlaid on the CIE-1931 diagram.

There are 3 triangular regions labeled: sRGB, Adobe RGB, and ProPhoto RGB. They are called color spaces. The diagram is indicating all possible colors that each color space can represent. The smallest one, sRGB, is typical of a computer monitor. It is what will be used when you share a jpg image with someone. It is small but “safe”. We lose a lot of possible colors, but everyone sees roughly the same thing on all their monitors.

Let’s jump to ProPhoto RGB. You can see that it covers the largest part of the horseshoe. In other words, ProPhoto RGB has the largest gamut. It is the best we have for representing image color and most professional photographers use this now. Unless they are doing weddings. That is a different world.

They’re not ideal?

Unfortunately, these color spaces are an ideal. The ProPhoto color space is a model for editing images. No actual devices or printers can give us the entire ProPhoto RGB gamut. Not even close. Most can barely do sRGB.

Here is a diagram of the color space a Canon pro printer can do.

The small horseshoe, labeled 4, is the printer gamut. It is larger than sRGB (3) and, overall, a lot like Adobe RGB (2). It is smaller than ProPhoto RGB, which is not shown here.

It looks pretty good, and in general it is. I use one of these printers. But look at what it does not do. Most greens and extremes of cyan and blue and purple and red and orange and yellow cannot be printed. Actually, almost no extremely saturated colors can be printed.

And it is not just printers. Most monitors, even very good ones, are somewhere between sRGB and AdobeRGB spaces. This cannot really be considered a fault of the monitors or printers. The physics and engineering and cost considerations prohibit them from covering the full ideal range.

Any of these colors that I use in an image, that cannot be created by the device I am using, are referred to as “out of gamut” – outside the color space the device can produce. This is what I have been running into lately.

What happens

So what happens when I try to print an image with out of gamut colors? Well, it is not like it blows up or leaves a hole in the page instead of printing anything. Printers and monitors do the best they can. They “remap” the out of gamut colors to the closest they can do. As artists, we have some control over that process, as we will see in the next section.

But the reality is that these out of gamut colors will lose detail, be washed out and without tonal contrast. When we get to looking at the print, we will say “yech, that is terrible”. Then we need to do something about it.

What can we do about it

There are things to do to mitigate the problem. Here is where we need to understand enough about the technology to know what to do.

First, we have tools to help visualize the problem. Both Lightroom Classic and Photoshop have a Soft Proof view. It will simulate the actual output for a particular printer and paper. You can also view gamut clipping for the monitor. Yes, because of gamut problems you may not be seeing the image’s real color information on your monitor.

Both Lightroom and Photoshop have versions of saturation adjustments and hue adjustment. These can help bring the out of control colors back into a printable or viewable range. With practice we can learn to tweak these settings to balance what is possible with what we want to see.

But even if we give up and decide to print images with out of gamut colors, there are options. The print settings have a great feature called “rendering intent”. It is a way to give guidance to the print engine on how we want it to handle these wild colors. Several different rendering intents are available, but the two most commonly used are Relative and Perceptual.

Rendering Intents

I use Perceptual intent most often, at least in situations where there are significant out of gamut colors. Using the Perceptual directive signifies to the print driver that I am willing to give up complete tonal accuracy for a result that “looks right”. The driver is free to “squish” the color and tone range in proportional amounts to scale the whole image into a printable range. I don’t do product photography or portraits, so I am usually not fanatical about absolute accuracy. How drivers work this magic is usually kept as a trade secret. But secret or not, it often does a respectable job of producing a good output.

The other common intent is Relative. This basically prints the data without modification, except that it clips out of gamut colors. That sounds severe, but the reality is that most natural scenes will not have any significant gamut problems, so no clipping will occur.

This is a great intent for most types of scenes, because no tonal compression will take place.
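As a toy illustration of the difference (my own sketch – the real ICC machinery works in profiled color spaces, not on a single channel), consider one channel of values where anything above 1.0 is out of gamut:

```python
def relative_intent(values, lo=0.0, hi=1.0):
    """Relative: leave in-gamut values alone, clip anything outside."""
    return [min(max(v, lo), hi) for v in values]

def perceptual_intent(values, lo=0.0, hi=1.0):
    """Perceptual: squeeze the whole range proportionally so it all fits."""
    vmin, vmax = min(values), max(values)
    if vmin >= lo and vmax <= hi:
        return list(values)  # already in gamut, change nothing
    span = vmax - vmin
    return [lo + (v - vmin) * (hi - lo) / span for v in values]

channel = [0.2, 0.6, 1.3]            # 1.3 is "out of gamut"
print(relative_intent(channel))      # [0.2, 0.6, 1.0] -- top detail is lost
print(perceptual_intent(channel))    # whole range compressed, order preserved
```

Relative flattens everything above the limit into the same value (losing tonal separation there), while Perceptual shifts every value so the relationships survive, at the cost of overall accuracy.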

The answer

The answer is “your mileage may vary”. Most images of landscapes and people will not have serious out of gamut problems. When you do, this information may help you get the results you want. When you have a problem, turn on soft proofing and try the Relative and Perceptual rendering intents. Look at the screen to see if one is acceptable. If not, go back and play with saturation and colors.

Why do I have problems? Well, I’m weird. I have been gravitating to extremely vibrant, highly saturated images. I like the look I am trying to get, but it can be hard to get it onto a print. The image at the top of this article is a slice of an image I am working with now. It is seriously out of gamut. I need to work on it a lot more to be able to print it without loss of color detail. Ah, technical limitations.

Is Scaling Bad?

Heavily sharpened image. Many pixels damaged.

I have written about image sharpness before, but I was challenged by a new viewpoint recently. An author I respect made an assertion that gave me pause. He was describing that when you enlarge film it is an optical scaling but digital enlarging requires modifying the information. Implying that modifying information was bad. So I was wondering, is digital scaling bad?

Edges and detail

Let me get two things out of the way. When we are discussing scaling we only mean upscaling, that is, enlarging an image. Shrinking or reducing an image size is not a problem for either film or digital.

The other thing is that the problems from upscaling mostly occur at edges or in finely detailed areas. An edge is a transition from light to dark or dark to light. The more resolution the medium has to keep the abruptness of the transition, the sharper it looks to us. Areas with gradual tone transitions, like clouds, can be enlarged a lot with little degradation.

Optical scaling

As Mr. Freeman points out, enlarging prints from film relies on optical scaling. An enlarger (a big camera, used backward) projects the negative onto print paper on a platen. Lenses and height extensions are used to enlarge the projected image to the desired size.

This is the classic darkroom process that was used for well over 100 years. It still is used by some. It is well proven.

But is it ideal? The optical zooming process enlarges everything. Edges become stretched and blurred, noise is magnified. It is a near-exact magnified image of the original piece of film. Unless it is a contact print of an 8×10 inch or larger negative, it has lost resolution. Walk up close to it and it looks blurry and grainy.

Digital scaling

Digital scaling is generally a very different process. Scaling of digital images is usually an intelligent process that does not just multiply the size of everything. It is based on algorithms that look at the spatial frequency of the information – the amount of edges and detail – and scales to preserve that detail.

For instance, one of the common tools for enlarging images is Photoshop. The Image Size dialog is where this is done. When Resample is checked, there are 7 choices of scaling algorithms besides the default “Automatic”. I only use Automatic. From what I can figure out, it analyzes the image and decides which of the scaling algorithms is optimal. It works very well.
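To get an intuition for what resampling does, here is a toy one-dimensional sketch (my own, not Adobe’s algorithm) of linear interpolation, the simplest cousin of the bilinear and bicubic methods in the Image Size dialog:

```python
def upscale_linear(row, factor=2):
    """Upscale a row of pixel values by linear interpolation: new pixels
    are blended from their neighbors rather than simply duplicated."""
    out = []
    for i in range(len(row) - 1):
        a, b = row[i], row[i + 1]
        for k in range(factor):
            out.append(a + (b - a) * k / factor)
    out.append(row[-1])  # keep the final original pixel
    return out

print(upscale_linear([0, 10, 20]))  # [0.0, 5.0, 10.0, 15.0, 20]
```

Even this crude version shows the idea: the new in-between pixels follow the tonal gradient instead of producing blocky duplicates, which is why smooth areas survive enlargement so well.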

All of these operations modify the original pixels. That is common when working with digital images and it is desirable. As a matter of fact, it is one of the advantages of digital. A non-destructive workflow should be followed to allow re-editing later.

Scaling is normally done as a last step before printing. The file is customized to the final image size, type of print surface, and printer and paper characteristics. So it is typical to do this on a copy of the edited original. In this way the original file is not modified for a particular print size choice.

Sharpening

In digital imaging, it is hard to talk about scaling without talking about sharpening. They go together. The original digital image you load into Lightroom (or whatever you use) looks pretty dull. All of the captured data is there, but it doesn’t look like what we remembered, or want. It is similar to the need for extensive darkroom work to print black & white negatives.

One of the processes in digital photography in general, and after scaling in particular, is sharpening. There are different kinds and degrees of sharpening and several places in the workflow where it is usually applied. It is too complex a subject to talk about here.

But sharpening deals mainly with the contrast around edges. An edge is a sharp increase in contrast. The algorithms increase the contrast where an edge is detected.

This changes the pixels. It’s not like painting out somebody you don’t want in the frame, but it is a change.

By the way, one of the standard sharpening techniques is called Unsharp Mask. It is mind-bending, because it is a way of sharpening an image by blurring it. Non-intuitive. But the point here is this is digital mimicry of a well known technique used by film printers. So the old film masters used the same type of processing tricks to achieve the results they wanted. They even spotted and retouched their negatives.
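The unsharp mask idea can be sketched in a few lines. This is a hypothetical 1-D version (a box blur standing in for the usual Gaussian blur), using the classic formula of adding back the difference between the image and its blurred copy:

```python
def box_blur(row, radius=1):
    """Simple 1-D box blur: the 'unsharp' copy of the image."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(row, amount=1.0):
    """Sharpen by adding amount * (original - blurred) back to the original."""
    blurred = box_blur(row)
    return [v + amount * (v - b) for v, b in zip(row, blurred)]

edge = [10, 10, 10, 90, 90, 90]  # a soft light-to-dark edge
print(unsharp_mask(edge))
```

Notice how the output overshoots on both sides of the edge (darker just before it, lighter just after): that is the contrast boost that reads as sharpness, and, pushed too far, the halo artifact mentioned earlier.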

Modifying pixels

Let me briefly hit on what I think is the basic stumbling block at the bottom of this. Some people have it in their head that there is something wrong or non-artistic about modifying pixels. That is a straw man. It’s as silly as saying you’re not a good oil painter if you mix your colors, since they are no longer the pure colors that came out of the tubes. I have mentioned before that great prints of film images are often very different from the original frame. Does that make them less than genuine?

Art is about achieving the result you want to present to your viewers. How you get there shouldn’t matter much, and any argument of “purity” is strictly a figment of the objector’s imagination.

One of the great benefits of digital imaging is the incredible malleability of the digital data. It can be processed in ways the film masters could only dream of. We as artists need to use this capability to achieve our vision and bring our creativity to the end product.

I am glad I live in an era of digital imaging. I freely modify pixels in any way that seems appropriate to me.