Limiting File Size

Simple Photoshop example. File size is 22x larger.

In a previous article I talked about the “bloat” that happens when we edit in Photoshop. Is there anything we can do about it? Should we be concerned about limiting file size?

RAW vs TIFF

RAW files are fundamentally different from Photoshop files. A RAW file captures and preserves the data directly from the camera sensor. This data still reflects the Bayer filter design: each pixel holds a single value of red, green, or blue. Data in this form cannot be shown on your computer monitor until it is processed and expanded by a RAW converter like Lightroom Classic.

It is very important to realize that this data is unaltered, no matter what fancy processing you do in your RAW editor. The adjustments you make are kept as a collection of “processing instructions”. These are applied in real time whenever you view your RAW file.

Because of this design, Lightroom can only change the look of pixels. It cannot in any way add, remove, or alter the underlying pixels, no matter what it looks like on screen.

For instance, even if you use the Healing tool to completely remove a person or object from the picture, the original data is always still there. What it saves is instructions telling it what region to select and what region to copy from. This processing is applied, again, each time you view the image in the editor. Actually, it usually just keeps an edited preview of the image to show quickly, but that is getting too deep.
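To make the idea concrete, here is a toy sketch of how a parametric editor works. This is not Lightroom's actual implementation, just an illustration of the principle: the raw data never changes, and edits are a list of instructions replayed at view time.

```python
# Toy sketch of parametric ("non-destructive") editing.
# The raw data never changes; edits are stored as instructions
# and applied only when a preview is rendered.

raw_data = [0.10, 0.25, 0.50, 0.80]   # stand-in for sensor values
instructions = []                      # the "sidecar" edit list

def add_edit(op, **params):
    instructions.append((op, params))  # record the edit, don't touch pixels

def render_preview():
    pixels = list(raw_data)            # start from the untouched raw data
    for op, params in instructions:
        if op == "exposure":           # apply each instruction in order
            gain = 2 ** params["stops"]
            pixels = [min(1.0, v * gain) for v in pixels]
    return pixels

add_edit("exposure", stops=1.0)        # "+1 stop" is just a stored note
preview = render_preview()

print(raw_data)   # original values, still intact
print(preview)    # brightened view, computed on the fly
```

Reset in this model is trivial: throw away the instruction list and the original data is right there, untouched.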

Photoshop manipulates pixels

Photoshop, though, is the heavy duty pixel pusher. It has no moral imperative to prevent you from doing anything to image data. You can freely add or remove or alter or stretch or shrink or copy over anything. Unless you take steps to edit non-destructively (more on that later), you can remove something from the image by simply copying other pixels over the area you want to remove. The original data is permanently gone. Photoshop doesn’t care.

This level of manipulation requires Photoshop to expand the original RAW data into a pixel structure. Each pixel then has 3 values, red, green, and blue, and each value is probably 16 bits if you are editing in one of the “safer” color spaces, which I recommend. This expansion automatically makes Photoshop’s file size at least 3 times larger than the RAW file.

Once the file has been expanded to pixels and edited, there is no going back. It cannot be reprocessed back into a RAW file. You can’t put the genie back into the bottle.

Even RAW files can get big

I have been presenting this as a rather black-and-white (metaphorically) contrast. But RAW editing is no longer immune to files growing quite large. The “culprit” is masks.

It used to be that RAW processing was rather coarse and simple. If I adjusted the exposure of the image it applied to the entire image. And the processing instruction was small and simple. This is the literal data that is saved for that adjustment:


Don’t worry about the exact meaning of all of it; that is for the engineers. The point is that only these literal 24 characters are stored to change the exposure of the entire image.

But then the designers at Adobe and others created very useful and necessary magic. We can mask areas and selectively adjust them! This is an awesome and very welcome change. It pushes back the boundary where we have to go to Photoshop to finish our files. These masks and edits are stored as text with the other processing instructions. As you might guess, it can get large.

After doing a lot of masking and editing I have seen some of these “sidecar” files grow to 10 megabytes or more. So if my original RAW file is 50 MBytes and the editing instructions add another 20 MBytes, that is quite a lot bigger. Still nothing like going to Photoshop, but I needed to point out that RAW processing is not entirely free.

Non-destructive editing

Please give me a moment to plug a non-destructive editing style in Photoshop. Photoshop can do amazing and totally un-undoable things. I know that I often change my mind or have new insights about an image after it ages a while. So weeks or months or more after an initial edit, I may look at an image again and see a different direction to take. If the Photoshop edit has gone down a path of no return, this can be hard.

Sure, I could go all the way back to the original RAW file and start over, but this is usually not what I want to do. I don’t want to repeat the hours of detailed work I already did. Typically there was a branch, a fork in the road while I was editing. I chose one path and later I decide I would like to explore the other one.

With discipline, Photoshop edits can be almost totally non-destructive. This means you can undo any decision later. Or perhaps strengthen or reduce the effects of an edit.

Probably 2 techniques serve for about 80% of the goal of non-destructive editing. The first is to use a new blank layer for pixel-changing edits. So if I want to remove an element from the image, I will typically create a blank layer, then use the stamp or healing tools to overlay changes onto the image. The original information is still there if I later want to expose it or do a better job of removing it.

The second powerful technique is adjustment layers. Use adjustment layers rather than making adjustments directly to the image layers. This allows the adjustments to be changed in the future. It also allows masking to limit the effects to selected areas.

Steps to limit Photoshop file size

It is a tradeoff: do all your processing in Lightroom or go into Photoshop. Adobe and others are constantly pushing out the boundary by giving us more and more power and capability in our RAW editors. This is very welcome.

But there comes a point when we may have to do things Lightroom cannot do. There are things we can do to limit the overall Photoshop growth to the minimum, about 3 times the original RAW size. Basically, these undo the non-destructive editing I recommended before: all of those edit layers can be flattened down before saving the file.

This commits the edits permanently. They can’t be undone in the future, but the file size will be smaller. Rasterizing Smart Objects will also save a lot of space, again at the cost of making the changes permanent.

If it sounds like I am negative on doing this, I am. Once I invest a lot of time editing an image in Photoshop it becomes the “master” image. I usually want to keep the freedom to change my mind.

Why bother?

Maybe it’s the wrong attitude, but I try to act as if the file size does not matter. A large file is just a price to pay for the ability to craft an image I am pleased with. Disks are relatively cheap.

It’s a pain when I outgrow the 4 GByte limit for Tiff files and have to go to a .psb file. Lightroom handles that transition badly from a user-experience standpoint. But I put up with it because I want to hold all that work in an editable state.

So officially my attitude is “why bother?”. Don’t sweat the file size growth. You went to Photoshop for a reason. Use it. Do your work. Files get large; it’s just a cost of doing business.

Today’s image

This is an example of a very simple looking file that grew dramatically. The final Photoshop file is 22 times larger than the edited RAW file! From 61.5 MBytes to 1.34 GBytes. It sure doesn’t look that complex. But the work was necessary and I would still do it the same way again.

So Big!

An image with some minor processing in Photoshop. It is well over 1GByte.

Our modern cameras have lots of pixels. This is a great benefit for us, especially if we want to make large prints. But sometimes the files we are editing can get so big we have trouble dealing with them. Why is that?


I have made the point before that our modern sensors are amazing. The camera I shoot captures 47 MPixels for each shot. That’s 47 million pixels. There are sensors that go up to 150 MPixels in some medium format camera bodies. I haven’t seen the need to move to that yet.

Why do we need so many pixels? Some will state that we don’t. That it is just pixel envy that keeps us seeking more. There is a good argument that about 20 MPixels is enough for the vast majority of applications.

That is for you to decide for your own needs and preferences. I can state that I believe the quality of our images has moved far beyond film days. Digital images produce the sharpest, most detailed, most colorful, most editable results that have ever been possible, except in some very niche applications. There is no going back.

Raw files

Raw files hold the information that comes directly off the camera sensor. There is minimal processing done. I have discussed Bayer filters and how we get color images. The Raw file is not really an image we can look at yet.

But there are some great features of raw files we need to be aware of. First, this is the closest we can get to the exact data that was captured by the sensor. Little processing has been done. All the processing and interpretation of the resulting image is ours. This is one reason, among others, to always shoot raw instead of jpg files.

Second, the nature of the raw file is that it cannot be edited. The original data is always preserved. Yes, of course, I can go into Lightroom Classic (I will always call it just Lightroom from here on) and do amazing things to the image. All of the changes are saved as what are termed “processing instructions”. The original data is never altered. It cannot be altered.

One of the things this means is that years from now when I have new tools or change my mind about how I want the image to look, I can go back and re-edit it. I can even reset to the original captured bits and start over. No data is ever lost. This is a great thing.

And thirdly, the raw file is relatively compact. My camera captures 47 million 14-bit sensor values, each representing either a red, green, or blue measurement. It is not yet “demosaiced” to expand the Bayer sensor data to full color data for each pixel. In addition, certain metadata values are stored in the raw data: things like the camera and lens information, capture time, my copyright information, etc.
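Here is a toy illustration of the Bayer idea (real demosaicing algorithms are far more sophisticated than this). The mosaic stores one value per photosite; “demosaicing” fills in the two missing colors at each pixel, tripling the amount of data:

```python
# Toy Bayer mosaic: each photosite records ONE value, and the
# RGGB filter pattern decides which color that value belongs to.

WIDTH, HEIGHT = 4, 4

def bayer_color(x, y):
    # Standard RGGB tiling: R G / G B, repeated across the sensor
    if y % 2 == 0:
        return "R" if x % 2 == 0 else "G"
    return "G" if x % 2 == 0 else "B"

# One number per photosite -- this is all a raw file stores
mosaic = [[(x + y * WIDTH) / 16 for x in range(WIDTH)] for y in range(HEIGHT)]

def demosaic(mosaic):
    """Crude demosaic: each channel of each pixel is the average of
    that channel's photosites in the surrounding 3x3 neighborhood."""
    out = []
    for y in range(HEIGHT):
        row = []
        for x in range(WIDTH):
            rgb = {}
            for c in "RGB":
                vals = [mosaic[j][i]
                        for j in range(max(0, y - 1), min(HEIGHT, y + 2))
                        for i in range(max(0, x - 1), min(WIDTH, x + 2))
                        if bayer_color(i, j) == c]
                rgb[c] = sum(vals) / len(vals)
            row.append((rgb["R"], rgb["G"], rgb["B"]))
        out.append(row)
    return out

image = demosaic(mosaic)
print(len(mosaic) * len(mosaic[0]))                  # 16 stored sensor values
print(sum(len(px) for row in image for px in row))   # 48 values after expansion
```

The 16-value mosaic becomes 48 values of full-color pixel data, which is exactly the 3x growth discussed below.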

Raw file size

My camera is set to do a lossless compression of the data before saving it. So no data is ever lost in the process. Looking at a randomly selected file I just shot, its file size is 58.08 MBytes on my file system. The size of my raw images varies because of the amount of lossless compression that can be done on each image.

But think about this a minute. I captured 47 million 14-bit values. This should have been 94 MBytes of data, not counting the extra metadata. I am assuming they store the 14 bits in two 8-bit bytes; I don’t know if that is true. This means the saved raw file is even smaller than the data that came off the sensor. As I edit it and add processing instructions, the file gets somewhat larger, but seldom huge.

Photoshop bloat

Now I sent this raw file to Photoshop and immediately saved it. No editing. The file size is 229.16 MBytes! It is about 4 times larger! And I didn’t even do anything to the image! Why is this?

Well, Photoshop edits pixels, each a triple of (red, green, blue) values. Photoshop expands the Bayer data to the flat grid it needs; this is what Photoshop works with and what is saved. That automatically makes the file at least 3 times its original size. The raw file was also compressed, which probably accounts for the rest of the difference.
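The arithmetic is easy to check. Here is a back-of-the-envelope sketch using my camera's pixel count; the measured file sizes differ because of compression and metadata, but the ratio lines up:

```python
# Back-of-the-envelope file sizes for a 47 MPixel camera.

pixels = 47_000_000

# Raw: one 14-bit value per photosite, assumed stored in 2 bytes
raw_uncompressed = pixels * 2             # bytes
print(raw_uncompressed / 1e6)             # ~94 MBytes

# Photoshop: three 16-bit channels (R, G, B) per pixel
expanded = pixels * 3 * 2                 # bytes
print(expanded / 1e6)                     # ~282 MBytes

# The expansion alone is a 3x growth, before any layers are added
print(expanded / raw_uncompressed)        # 3.0
```

Lossless compression then pulls the 94 MBytes down to the ~58 MBytes seen on disk, and the 282 MBytes down to the ~229 MBytes of the saved Photoshop file.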

Now to illustrate more of what Photoshop does, I added a blank layer and used the spot healing brush to correct a couple of blemishes, very little. Saving the file again grows the file size to 548.08 MBytes! It doubled!

To continue the demonstration, I added a curves adjustment layer and saved the file again. Now the size is 632.72 MBytes.

The difference

It is clear that Lightroom and Photoshop show very different behavior when editing images. This is because of their nature and design.

Lightroom is called a parametric editor. It does not modify the image data. Rather, it keeps a list of processing instructions that tell how to change the look of the image when it is viewed.

Photoshop is a pixel editor. It can add/delete/modify pixels at the most detailed level. You have to be careful that you do not lose the original data; Photoshop does not care. It will do any amount of change you request. And it has the power of layers to build up levels of modification. This can lead to huge file sizes.

Did you know that there are maximum file sizes for Photoshop files? Standard Photoshop psd files can only be up to 2 GBytes in size. Tiff files can only be 4 GBytes. I exceed these limits a lot. The only choice then is to switch to Photoshop’s “big” file type, the psb. It can grow much larger. Actually, it can handle files up to about 4.2 billion GBytes. That will work for a while. 🙂 Unfortunately, there is no option to use it automatically.
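Since layers multiply the pixel data, it is easy to estimate when a file will blow past these limits. This is a rough sketch treating every layer as a full uncompressed 16-bit pixel layer; real files compress somewhat, and adjustment layers are far cheaper than pixel layers:

```python
# Rough estimate of when a layered 16-bit file hits format limits.

PSD_LIMIT = 2 * 1024**3      # ~2 GBytes for a standard psd
TIFF_LIMIT = 4 * 1024**3     # ~4 GBytes for a Tiff

def layered_size_bytes(megapixels, pixel_layers, bytes_per_channel=2):
    # Each full pixel layer stores 3 channels (R, G, B) per pixel
    return int(megapixels * 1e6) * 3 * bytes_per_channel * pixel_layers

# A 47 MPixel image with 8 full pixel layers, uncompressed
size = layered_size_bytes(47, pixel_layers=8)
print(size / 1024**3)                  # ~2.1 GBytes
print(size > PSD_LIMIT)                # already past the psd limit
print(size > TIFF_LIMIT)               # not yet past the Tiff limit
```

So on a high-megapixel camera, less than a dozen pixel layers can already force the move out of psd format.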

Any solution?

Well, there is the “if it hurts don’t do that” solution. Stay in Lightroom for most of your image processing. Only go to Photoshop for situations that Lightroom cannot handle. This is a good strategy and I use it.

But if you have to do that detailed pixel grooming and you have to use many layers to process your image to your taste, accept it. The cost is much more powerful computers and larger and faster hard drives. I have both. It is a cost of doing business the way I want.

Editing large files in Photoshop will lead to very large files on your disk. I have a lot of multi GByte files. That is, some of my files have grown to about 100 times the original captured file size! Ouch. I can’t do this routinely. It has to be for special images that are worth the time and file size to do this.

When you have to call out the big power tool, Photoshop can do almost anything. But the cost can be high.

Purity in Photography 2

Pseudo Landscape. Not an actual aerial image. Art, not reality.

Because of its nature of recording the scene in front of the camera, people assume that photography is some kind of “pure” imaging form. That is, that what you see is reality. I take opportunities when I can to dispel this myth. Never assume purity in photography unless it is explicitly presented as such. This is a theme that just won’t go away.


Our excellent digital sensors do a pretty good job of reproducing what the lens images onto their surface. For good and bad. Because of this, some people assume that photographs represent exactly what was captured.

This is just an assumption that in no way restricts me in my art. And it does not restrict anyone else unless they make the explicit determination to not do any manipulation. What the sensor records is often just a starting point in my photographic vision. Not an end point.

It is so easy now to alter images that you should always assume it has been done.


From nearly the beginning of photography, artists have manipulated their images. Black and white film photographers quickly invented ways to alter their pictures. Sometimes these alterations were done to overcome limitations of the technology of the time. Sometimes they were done to correct or improve the images, for instance by “spotting” defects and removing distracting objects. More and more commonly, alterations were made for artistic improvement.

For fun sometime, look up a “straight” print of Ansel Adams’s famous Moonrise, Hernandez, New Mexico compared to one of his later interpretations. The latter is almost unrecognizable next to the original. Does that mean there is something false about the later prints? No, it is considered one of the great examples in the history of photography. The artist chose to alter it heavily to make it appear as he wanted it to look.

It is never safe to assume that a photograph exactly represents reality.

What is truth?

Is a photograph “truth”? Is it some form of purity? Why? What makes you assume it is?

The technology of its capture process leads some people to assume a purity or truth that may lead you astray. Yes, the sensor recorded all the light falling onto its surface, but there is still a long journey from there to a finished image.

Some might say that Photoshop eliminated truth. That is overstated, but not entirely false. The positive statement is that Photoshop enabled greater artistic expression. Photoshop and other image manipulation tools, along with powerful home computers and large disks, opened a new world of creativity to artists.

Now most photographic artists do extensive manipulation of images. Photoshop, Lightroom Classic, Capture One, and other tools open new worlds of creativity to photographers. Photographers have always done this, but the modern tools add new power and possibilities.

But this power is just a modern convenience. It has always been true that images are created in the artist’s imagination. A great example is Albert Bierstadt, a German painter who helped popularize the American west in the 19th Century. His paintings created a lot of interest, but they were often, let’s say, fanciful. For example his work Rocky Mountain Landscape does not depict any real scene I have ever found in the Rocky Mountains where I live.

The artistic view is that an image is the expression of the artist’s vision and feeling for the image. It seems the truth comes from within rather than being a property of what is represented.

What is the intent of an image?

Does this manipulation make an image less “true”? That depends on the intent of the image.

Maybe it seems obvious, but any image presented as truth must be true. If I see a picture in a news article that claims to show a certain event, it better be exactly that. If it is altered to manipulate the scene or misrepresent the event, that is false and the reporter and their organization should be severely censured.

In my opinion no AI generated “news” or images can be presented as truth. They were generated by a machine rather than being a direct capture or observation of an event.

Let’s go a little away from news and talk about a portrait. Must a portrait be a literal, completely truthful depiction of the subject? Well, they never have been. Portraits are always “retouched”, maybe altered extensively to hide blemishes. Perhaps to make the subject look slimmer or taller or a little more handsome. So a portrait should be a recognizable representation of the person, but do not assume it is literally true.

But I live in the world of art. Art is fantasy and imagination and vision and creativity. We should never get confused that art is reality. I am free to do anything within my image that I think expresses my artistic vision. This makes Bierstadt’s Rocky Mountain Landscape acceptable art, even if not reality.

Don’t waste your effort thinking photographs are always reality. Most do not even pretend to be anymore. Photographs are another artistic expression, unless explicitly presented as reality.

Today’s image

A high altitude aerial? Maybe. Maybe not. Since I have been talking about photographic art not being real, it might be best to assume this isn’t exactly what it seems.

I won’t say more about it now. This is part of a series I am working on.


Wide histogram, single capture image

My last article sang the praises of HDR processing. I don’t want to oversell it. Today I will try to balance the picture by showing that we typically do not need HDR.

The good

My previous article attempted to show when and why to use HDR. There is a time and place for it. In general, if a histogram shows more than about 7 stops of needed information then I would consider HDR, if the subject and situation allow it.

The example I used was a scene with the sun visible in the frame but where I also wanted to preserve the deepest shadows. Back in the film days we had to use a split neutral density filter over the lens to try to compress the dynamic range in these situations. Whenever you would have reached for the split ND filter is the time to consider if you can use HDR instead.

The bad

But HDR has some problems and limitations. There is the dreaded “HDR look” that most people want to avoid. In addition, there are problems with subject movement and extra processing steps to do.

When HDR first became available, people tended to go crazy with it. It was almost a symbol of showing off the new technique. The HDR look was overcompressed, with flat tonality and no true whites or blacks. Sure, I could shoot that scene with a 20 stop range and make a print. Too bad it looks weird. It became almost a cliche. Many “serious” photographers shunned it as looking artificial. It got a bad reputation.

But the problem was how people used it, not the technique itself. Almost any technique can be over used to create unappealing images.

There is also the problem I mentioned with subject movement. To create a good HDR image there must be very high correlation between the pixels of each exposure bracket. That is, there can’t be significant movement.

And there is the extra processing. This is not too big of a problem anymore. We can quickly do HDR processing from within Lightroom or Photoshop or your software of choice. It is probably easier now to do it than it was to adjust a split neutral density filter and figure out the exposure.

Why we don’t usually need it

Trust your sensor and the processing software on your computer. Modern high-end camera sensors are amazing. They record the greatest dynamic range of information that has ever been possible in photography. I’m sure it will only get better with new generations of equipment.

My camera records a far greater range of information than it is possible to print. Prints are my gold standard. They are the expected outcome of my work. A surprising fact to many is that, although it is hard to compare because the physics are totally different, the effective dynamic range of print media is around 6 to 8 stops. So making any print has some aspects of dealing with HDR data, since the captured data is probably much greater than the final print.

OK, so I am shooting a high contrast scene. I am careful to allow a little space on each end of the histogram, so say I am dealing with about 12 stops of range. The reality is that, for most needs, this can be used to make a great print.

But that 12 stops of data has darks that are down dangerously close to an unacceptable level of noise. And the brights are dangerously close to clipping. Is that imperfection OK?

How to process extreme ranges

This is not a tutorial on photographic processing. You can find too many of them on the web. I will just give some suggestions. In Lightroom (Classic – the only version I think is worth using), just the 6 controls in the Tone section of the Basic panel can do wonders. And I seldom use Contrast, so there are really only 5 important ones.

Use Whites and Blacks to set the overall white and black points as desired. Then I often use Exposure to balance the overall tonal range. Finally I use Highlights and Shadows to fine tune the tones.

These simple adjustments, along with some tweaks in the Presence section, can do amazing things to “rescue” most images. These are probably an 80% fix for most situations.

Of course, when I select an image to print, I will spend a lot more time working on it. A lot of work will be done with curves and masking and doing fine adjustments. Sometimes I will send it to Photoshop for very detailed tasks that cannot be done in Lightroom. Editing an image can take many hours. Most of us are pretty obsessive about our work.

My point here, though, is that most single captures have enough data to make a great print or other final image. Sometimes we just have to work with it a little.

Maybe you don’t want it

The look of your final image is an artistic decision. It is not dictated by the “reality” of the original scene. You or I as the artist decide the look we want. What we decide is “right”, at least for us.

So I may not want to create a perfectly balanced image that retains all the tones and data of the histogram. I may want to crush the blacks to make a moody, low key image. I may want to over brighten the image to make an ethereal scene. It is not written anywhere that the final print must look exactly and faithfully like the original scene.

This is where artistic intent comes in.

It is not numbers

I want to end with the point that we are creating an image, not manipulating numbers. Well, we are manipulating numbers, but that is not what counts. What counts is the look and expressiveness and quality of the finished product.

Photography is the most technical art, but do not be dictated to by the technology. Do not let someone say you can’t do something because the numbers are wrong. All that counts is the final art you create. Emotional response trumps technical excellence. How does it look to you?


The image today is a full histogram spread. Single capture. I think this kind of thing comes out OK. What do you think?


HDR image. Smoky sunset in the Colorado mountains.

HDR, which stands for High Dynamic Range, is a bad word to some photographers. I think they have been overly influenced by some bad early use of it. It can be an excellent tool for certain kinds of images.

Dynamic range

First, though, what is dynamic range? Dynamic range is a measure of the span between the lowest level signal that can be used and the highest level. In most electronic systems the high end is limited by the point where the signal starts to clip or distort. The low end is limited by the point where an unacceptable amount of noise intrudes. For photography it is that range from the darkest value that is usable to the brightest value that doesn’t clip to pure featureless white.
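Since each stop is a doubling, dynamic range in stops is just a base-2 logarithm of the ratio between the clipping point and the noise floor. A tiny sketch with hypothetical numbers:

```python
import math

def dynamic_range_stops(clip_level, noise_floor):
    # Each stop is a doubling of signal, so count the doublings
    return math.log2(clip_level / noise_floor)

# Hypothetical sensor: clips at 16383 (the 14-bit maximum) with a
# usable floor of 1 count -- the theoretical ~14-stop figure
print(dynamic_range_stops(16383, 1))    # ~14 stops

# With a more realistic usable floor of ~4 counts of noise
print(dynamic_range_stops(16383, 4))    # ~12 stops
```

This is why the quoted spec and the range you can actually use are different numbers: raise the noise floor even slightly and whole stops disappear.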

Modern digital sensors are far better than ones in early digital cameras. High end sensors now are rated at between 13 and 15 stops of dynamic range. That is incredible. Early sensors had maybe 5-6 stops.

But like many things, the numbers are misleading. It is not that the camera makers lie, just that they do not quantify what they really mean. So my sensor may technically have 14 stops of range, but I cannot really use all of that without cost.

If you want to jump in to a little more technical depth, check out this article.


There is this problem called noise. It is worse at the dark range of exposure. We call what we do “digital photography”, but the reality is that a significant portion of it is based on analog signals. The information coming from the sensor is analog and it has to be amplified and digitized before it is actually digital data. Electronics, even the wonderful systems available now, have a certain level of noise in analog circuits. It is not a design fault, it is basic physics that cannot be entirely eliminated.

So when we capture an image that has a wide range of brightness values, it needs to be processed a lot in order to make a good print or even a good image for social media. A lot of this processing involves boosting the dark values to a more usable level.

But, the darkest values are close to the noise level of the electronics. So boosting them also boosts the noise. You have seen this when you brighten an image a lot and notice it looks very grainy and even blocky.
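A tiny simulation shows why. Boosting the shadows multiplies the signal and the noise together, so the signal-to-noise ratio does not improve; the noise just becomes visible. The numbers here are invented for illustration:

```python
# A shadow pixel: a small signal with noise riding on it (made-up values)
signal = 4.0       # counts of real image data
noise = 1.0        # counts of sensor/readout noise

snr_before = signal / noise

# Brighten the shadows by 3 stops (a gain of 2**3 = 8)
gain = 2 ** 3
boosted_signal = signal * gain
boosted_noise = noise * gain

snr_after = boosted_signal / boosted_noise

print(snr_before, snr_after)   # SNR is unchanged by the boost
print(boosted_noise)           # but the noise is now 8x more visible
```

The grain was always in the data; the boost just lifts it up to where you can see it.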


Enter HDR as a technique for mitigating the problem. HDR software takes several exposures, usually referred to as an exposure bracket, and combines them into a single image with a compressed dynamic range. Typically 3 exposures are used: one overexposed to make sure shadow data is good, one at the correct nominal exposure, and one underexposed to get all the highlight data.

In combining this data, the software can select the highest quality exposure value for each pixel. It uses sophisticated algorithms to “compress” the dynamic range. That is, it makes the brightest areas less bright and the darkest areas less dark. I could not explain the exact algorithms used.
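Here is a much-simplified sketch of the merge idea. The weighting function is my own invention for illustration; real HDR tools use far more sophisticated algorithms. Each bracketed exposure is mapped back to scene brightness by undoing its exposure offset, then the exposures are blended with weights that favor well-exposed pixels:

```python
# Toy HDR merge of a 3-shot bracket (-2 / 0 / +2 EV).
# Pixel values are normalized 0..1, where 1.0 means clipped.

def weight(v):
    # Trust mid-tones, distrust near-black and near-clipped pixels
    return max(0.0, 1.0 - abs(v - 0.5) * 2)

def merge_hdr(brackets):
    """brackets: list of (pixel_list, ev_offset) of the same scene."""
    merged = []
    for values in zip(*[pixels for pixels, _ in brackets]):
        num = den = 0.0
        for v, (_, ev) in zip(values, brackets):
            w = weight(v)
            num += w * (v / 2 ** ev)   # undo the exposure difference
            den += w
        merged.append(num / den if den else 0.0)
    return merged

# The same scene shot three times; the brightest pixel clips except at -2 EV
under   = [0.005, 0.05, 0.6]    # -2 EV
nominal = [0.02,  0.2,  1.0]    #  0 EV (last pixel clipped)
over    = [0.08,  0.8,  1.0]    # +2 EV (last pixel clipped)

radiance = merge_hdr([(under, -2), (nominal, 0), (over, 2)])
print(radiance)   # the last value recovers brightness beyond any single shot
```

Notice the clipped pixels get zero weight, so the merged value for the brightest area comes entirely from the underexposed frame, recovering detail no single exposure held.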


This sounds great. What is the problem?

There is actually little problem with HDR as a concept. The problem is that, when it first became popular, it was often abused by practitioners who applied it in a heavy-handed way. Images with the dreaded “HDR look” were obvious and often scorned. The HDR look is an overcompressed image with few real highlights and few real shadows. Everything has a bland sameness across the tonal range.

The look was rightly looked down on by “serious” photographers. It tarnished the technique as a whole. That is unfortunate, because HDR is great for some things.

When to use it

HDR can create images that could not otherwise be made and it doesn’t have to be obvious. If a scene has extremely high contrast then HDR is often the only means to get the results we want.

Way back in the olden days we had to use graduated neutral density filters in front of the lens to darken the brightest areas, usually the sky. This would pull the dynamic range down to a reasonable range to capture in one exposure. It was the “analog” equivalent of HDR. Of course, this involved adjusting the exposure to try to anticipate the final capture range. It was tricky, but it was the only way to do it.

Now, with HDR available, no one I know uses split neutral density filters except the remaining film photographers. But HDR itself fails in one case.


HDR has one Achilles heel: subject movement. An action scene makes it very difficult for the HDR software to build a good result.

If only some small parts are moving, like grass or leaves shifting with the wind, the HDR software may use “de-ghosting” algorithms to try to work around the movement. If you are trying to photograph a high contrast action scene, like a car race, good luck. You probably will not be able to apply HDR because there is not enough correlation between the different exposures.

Today’s image

This is an HDR image. Trying to create an image with the direct sun in it and at the same time preserve the deep shadows in the mountains wasn’t going to work in one exposure. The HDR software was able to pull it all together.

I don’t think this looks like the bad old “HDR look”. What do you think?