An artist's journey

Tag: digital photography

  • Why Do We See 255 Everywhere?

    Why Do We See 255 Everywhere?

    Do you ever wonder about the magic number 255 you see all over Photoshop and even in Lightroom Classic if you look? It seems like 255 pops up everywhere. Why is that? It is a strange number to choose.

    It’s just a number

    First let me say that at this point in time, 255 is just a number without meaning. It is the number chosen to represent the maximum value of a channel or color. Something has to be used to represent the maximum value. Looking back, 100 (as in 100%) would have probably made more sense. But we have 255.

    Think of it like Fahrenheit and Centigrade scales. The boiling point of water is 212 in Fahrenheit and 100 in Centigrade. Either way, it represents the same thing, the boiling point of water. That does not change no matter how the number is represented.

    So when you see 255 just read it as the maximum value of that thing. If that is the level you wish to understand, this would be a good point to stop reading this article. 🙂

    Personally I hope you continue. Understanding some of the history and details of our tools can only help improve our craft.

    Roots in binary

    Before we go deeper I need to justify where the number 255 comes from. It is rooted in binary coding. You are probably familiar with digital notations. We have lived with it for so long it seems to permeate everything around us.

    Please pardon me for going full-on Geek here. I so seldom get to use my training that it is fun. A very, very brief background: when digital computers were being developed, it was found to be simpler and more reliable to create circuits that were either on or off, with no in-between states. This was called a bit: a piece of data that is either off or on, noted as 0 or 1. The advantage of this seemingly silly decision is that bits could be made very small and operated on very fast.

    Dev on market ©Ed Schlotzhauer

    A single bit by itself isn’t very valuable. To represent realistic information and do calculations bits were combined together in larger units. The next widely used unit was 8 bits. This came to be called a “byte”. Eight bits is a byte – Geek humor.

    It turns out that 8 bits is enough to start encoding useful information. For instance, it will hold 1 character. A byte is big enough to code all the upper and lower case letters, punctuation, and some special symbols. At least in English. And we will see that it holds a useful amount of image data.

    Let me give a very simple description of digital value coding using 3 bits:

    Each combination of the 3 on/off values is assigned a value. The encoded values range from 0 to 7.

    Going back to the unit we called a byte, the 8 bits can encode 256 values, 0 to 255. This is the origin of the magic 255.
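    The 3-bit scheme and the byte range can be sketched in a few lines of Python (chosen here just for illustration):

    ```python
    # Each combination of 3 on/off bits encodes one value, 0 through 7.
    for value in range(2 ** 3):
        print(format(value, "03b"), "->", value)

    # A byte (8 bits) encodes 2 ** 8 = 256 values, 0 through 255.
    max_byte = 2 ** 8 - 1
    print("maximum byte value:", max_byte)  # prints 255
    ```

    Adding one bit always doubles the number of values that can be represented, which is why these counts grow so quickly.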

    History of Photoshop

    It is hard to think that there was a “before Photoshop”. Thomas Knoll needed to develop ways of doing analysis on images for his PhD thesis. In those days, nothing was available, so he taught himself programming and developed a library of operations. Here is an interesting interview with Thomas.

    His brother John worked for Industrial Light and Magic. He saw applications for image processing in some things they were doing, so he encouraged Thomas to enhance his library. Eventually they decided to try to make it a product. Adobe was interested. It is amazing how things come to be.

    In the days when the library, later Photoshop, was developed, the state of the art of image representation was to code each pixel as three 8-bit values: one byte each for red, green, and blue. Each color had the value range 0 to 255. This number scheme became baked into Photoshop and a standard metaphor of the user interface.

    Airplane taking off. A short project. ©Ed Schlotzhauer

    Today’s data

    Early digital cameras shot 8 bit images. Today, though, images and Photoshop have grown well beyond that. As an example, my Nikon Z7 II captures 14 bit data. Each red, green, and blue channel is 14 bits. That is 16384 values per channel instead of 256. Some other cameras have even more bit depth.

    Photoshop allows us to select whether we will treat our files as 8 bit, 16 bit, or 32 bit. With all these variables it could impose a huge burden on the user to deal with the actual value range of the data being edited. Some of these numbers get to be staggering (for 32 bit data each channel has 4,294,967,296 values). Adobe chose to keep the maximum number we see at 255. In effect, it became an arbitrary measuring scale we work with across the apps.
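    To see where those staggering numbers come from, each n-bit channel encodes 2 to the power of n values. A quick sketch (Python, purely for illustration):

    ```python
    # Number of values an n-bit channel can encode: 2 ** n.
    for bits in (8, 14, 16, 32):
        values = 2 ** bits
        print(f"{bits}-bit channel: {values:,} values (0 to {values - 1:,})")
    ```

    This reproduces the counts mentioned above: 256 values for 8 bits, 16,384 for 14 bits, and 4,294,967,296 for 32 bits.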

    By the way, Lightroom uses 32 bit data internally. You do not get a choice. But even in Lightroom (Classic at least) the 255 illusion peeks through in one place. Look at the Tone Curve tool. The scale is 0 to 255.

    Still, it’s just a number

    Fahrenheit or Centigrade. It is just an arbitrary number to represent the same thing, the boiling point of water. Adobe has kept that historical number 255 and given it the implied meaning of “maximum”. It no longer has a tie to the actual size of the data you are editing or the maximum value of an 8 bit chunk of data.

    Eerie headstones ©Ed Schlotzhauer

    They have done us a service in this. I would hate to think of the mental complexity I would have to go through if this number changed all over the place to match the actual values I am working with. But a simplification comes with some challenges. People tend to forget why the simplification was made, or even that one was made at all.

    When you are using the curves tool and other things, freely accept 255 as meaning “maximum”. Do not forget and think that your data only goes to 255, or that it has somehow discarded all those other wonderful bits our modern cameras give us. When someone tells you that white is 255/255/255 and seems to think that is the actual value of their data, remember that is just a number on a scale. Smile to yourself knowing you probably understand it at a deeper level than they do.

    I don’t have many images in my catalog that are actually 8 bit data. I am very glad the technology has moved on in wonderful ways. And I am grateful for the simplified scale that normalizes what I see when I am working with all this data. Thank you Adobe. This is something you did right. It doesn’t matter what the number is, something had to be defined as a convenient value for “maximum”.

    Today’s image

    The image at the head of this is actually 8 bit. An 8 bit jpg file. All the data is actually 0 to 255. Back in 2006 that was about the best I could do with the camera I had. It’s not terrible. I like the image, but I wish I could shoot it again with a modern camera.

    As a matter of fact, all the images in this article are 8 bit. I wanted to emphasize that it was a very workable system.

    Side note

    In today’s digital systems we seldom worry much about a few bytes. Every time I press the shutter on my camera it writes about 50 million bytes to my memory card.

    I mentioned that digital bits could be made very small. As an example, Apple’s M4 processor, which is their main CPU as of this writing, has 28 billion transistors. On one chip. That is hard to comprehend. It certainly wasn’t anticipated when Thomas Knoll developed Photoshop.

  • Out of Focus

    Out of Focus

    A few months ago I wrote about being in focus, both technically and mentally. I want to go a little deeper into how technical focus happens in modern cameras and an experience I had recently where what I did was out of focus.

    What is focus

    Technically, focus is simple: the lens is adjusted so that the part of the subject you are most interested in is sharply defined. Your lens has a focus ring you can use to focus manually. Most of us probably use the camera’s built-in auto focus capability. It is much more precise than my old eyes, and a lot faster than most of us can focus manually.

    Focusing physically moves one or more of the lens elements inside the lens barrel. This is required to adjust the focus point.

    I will let you argue whether focus is an absolute, precise point or just an acceptable range. I will just say that I am swinging away from being adamant about absolute technical perfection and leaning more toward artistic judgement and intent. Set your own values you will live by.

    Whether we manually focus or use auto focus, we observe in the viewfinder the image moving from a fuzzy blob to a crisp, detailed representation of the scene before us. Unless we have a very old piece of technology in our camera with something called a split image viewfinder. I had this in my first SLR. It was magic and awesome for most of the subjects I shot.

    The split image viewfinder showed the image sharp regardless of focus. The image was divided into 2 pieces in the central circle. The pieces were offset from each other when out of focus. Use the focus ring to bring the 2 halves into alignment and the image was sharply focused. Magic. Enough trivia, though.

    Little did I know this was a type of, and precursor to, what we now call phase detection auto focus. Let’s get a little deeper into the technology.

    How does it work?

    Auto focus in a DSLR or mirrorless camera is complex and requires many precise components. But it works so well now that we tend to take it for granted.

    There are 2 basic technologies in modern cameras. The older one is called contrast detection and the newer and better one is called phase detection.

    I have written on histograms, a subject I consider vitally important to photography. Histograms and their interpretation are the basis of contrast detection auto focus. It is brilliantly simple in concept, and the process is similar to what we do when we are manually focusing.

    If an image in the viewfinder is out of focus, the pixels are blurred together. Kind of like looking through a fog. A result is that in the histogram, the values are clustered in the center. This is an indication of low contrast. But when an image is sharp, there is a wider range of brighter and darker pixels. This illustrates it:

    From https://digital-photography.com/camera/autofocus-how-it-works.php

    Focus process

    So conceptually, the system moves the focus a little and measures again to see if the histogram got narrower (more out of focus) or wider (sharper). If it got more in focus, it continues moving in that direction and measuring until the peak contrast is found. But if it got more out of focus, it moves the focus the other direction and continues the process. It is a hunting process to find the optimum focus point, just like we do when focusing manually.
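    The hunting process is essentially a hill climb. Here is a toy sketch in Python, assuming a hypothetical contrast_at() measurement that peaks at the true focus point; a real camera would measure contrast from the sensor instead:

    ```python
    def contrast_at(position, best=0.62):
        """Hypothetical contrast measurement: highest at the true focus point."""
        return 1.0 - abs(position - best)

    def hunt_focus(start=0.0, step=0.05):
        """Step the focus, keep going while contrast improves, reverse once it drops."""
        pos = start
        current = contrast_at(pos)
        direction = 1
        while True:
            trial = pos + direction * step
            measured = contrast_at(trial)
            if measured > current:      # sharper: keep moving this way
                pos, current = trial, measured
            elif direction == 1:        # blurrier: try the other direction
                direction = -1
            else:                       # worse in both directions: peak found
                return pos

    print(hunt_focus())  # lands within one step of the 0.62 peak
    ```

    The many measure-and-move iterations are exactly why contrast detection is slow compared to phase detection, which computes the distance and direction to focus in a single measurement.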

    Unfortunately, this process is slow. It can take seconds to arrive at the focus. This is why phase detection auto focus came to prominence.

    In phase detection auto focus, some of the light coming through the lens is split off to a separate sensor. Like the split image viewfinder I mentioned above, it is further split into two paths. Through some brilliant engineering, they can determine in one measurement how far off focus is and in what direction. The focus moves there quickly. Note that in mirrorless cameras all the light goes directly to the sensor, so these auto focus sensors are built directly into the sensor.

    I said that phase detection is “better” than contrast detection. That is true as far as being very fast. Actually, contrast detection can achieve more precise focus. There is a kind of system called hybrid auto focus that combines the strengths of both. I will not discuss that, or go into the bewildering variety of focus areas and focus modes.

    Out of focus

    This is all great as far as technology goes. It works quite well in the cases it is designed for. We are lucky to have it.

    But all of these systems rely on the sensor having enough light to see some contrast. It doesn’t work in the dark. Yes, there is another variation on auto focus that is called active auto focus. It shoots a red beam from the camera to illuminate the focus area. This has a very short range and does not help the scenario I’m about to describe.

    Recently I was in Rocky Mountain National Park, over on the west slope where there is little light. It was full dark on a moonless night. The mountains all around provided lovely silhouettes. The stars were astonishing. Beautiful. I had to stop and get some star images.

    A trailhead parking lot provided a great and convenient place to set up – wondering if those occasional sounds I heard in the dark were bears. I guess not. It was perfect. Except. There was not enough contrast to focus, even at 6400 ISO. And the viewfinder image was too noisy to be useful for manual focus. I did not have a powerful enough flashlight to cast enough light on the nearest object, over 100 yards away, to allow the focus system to work.

    Adding to the problem, the lens I brought on this outing did not have a focus scale (a curse of modern zoom lens design). Normally, in low light, I switch to manual focus and set the lens to infinity for a scene like this. I guessed, but missed badly for a big section of the images. They were uselessly out of focus. I am ashamed to show an example, but here is one:

    A blurry night shot ©Ed Schlotzhauer

    Experience is a great teacher

    I write frequently advocating that we study our technology to become expert with it. And to practice, practice, practice to know how to use our gear, even in the dark. I failed. I encountered too much dark and a lens I had never tried to use in low light. The combination tripped me up. I am ashamed to admit I did not follow my own advice well enough.

    But every failure is a learning opportunity, right? It can be a great motivator and reinforcer. I did some research and discovered a “hidden feature” I never knew my camera had. It should save me the next time I do this.

    My Nikon camera has a setting I had never paid any attention to called “Save focus position”. When On (the default) it remembers the focus position of the current lens when the camera is turned off and restores it on wake up. But when Off – this is the brilliant part – it sets the lens to infinity on wake up. Now I will have a known infinity focus setting, even in total darkness! This setting is now in my menu shortcuts so I can access it quickly.

    I would never have learned about this feature if I had not failed so spectacularly. Experience really is a great teacher.

    So dig into those obscure settings you never bother with. There sometimes is gold there.

    Keep learning and failing!

    The featured image

    That night’s shooting was not all bad. I nailed the focus on this star shot. It was purely of the stars and had no foreground. This foreground has been substituted from another blurry image that night (actually, redrawn by hand).

    This is artistic expression rather than literal reality. I do that a lot. As photography progresses and matures, I believe that is more and more the norm.

  • They Told You Wrong About ISO

    They Told You Wrong About ISO

    Many of us have a wrong idea about ISO settings. I will just say they told you wrong about ISO. It was a misunderstanding. Whoever “they” are.

    Statement of faith

    It is stated as a “strong suggestion”, especially when we are learning landscape or portrait work: never shoot with ISO over 100. Maybe it is stated as only shoot at the native ISO setting for your camera. Either way, these are given as rules.

    I hate rules, especially for my art. Rule of thirds. Rules of composition. Never put the subject in the center. Never shoot at midday. Always use a tripod. The list goes on.

    Like with religion, most of the so-called rules are based on good ideas, but over time they are repeated as commands and the underlying reasons are lost. Just do it. (I don’t think that is what Nike meant.) The rules become a statement of blind faith that cannot be challenged.

    What is noise?

    All digital cameras have noise. Noise is randomly generated in the sensor and in the electronics of the signal path until the pixels have been digitized by the analog to digital converter (ADC). The noise is a fundamental property of physics.

    The question is how much noise is there relative to the desired data. This is called signal to noise ratio in engineering. When we amplify a signal by increasing the ISO setting, all the signal including the noise is increased. This is why images shot at high ISO settings tend to look noisy. The image is usually not less sharp, but there is more noise obscuring things.

    It is true for a low cost point and shoot camera or a high end medium format camera. What changes are the relative amounts of noise and the limits the image can be pushed to.

    What is ISO?

    You’re familiar with the exposure triad: the combination of aperture, shutter speed, and ISO that determines exposure. That’s it. Many other things affect the composition and quality of an image, but only those 3 control the exposure.

    Aperture is the size of the diaphragm opening in the lens. It controls, among other things, the amount of light coming in. Shutter speed is the length of time the shutter is open to let light come in. And the ISO setting is kind of like a volume control. It sets the gain or amount of amplification of the sensor data.

    Going way back to early film days, there was no single agreed standard for measuring how sensitive film was; the two most common scales were ASA and DIN. Eventually the film speed standards were unified under the International Organization for Standardization, and the organization’s short name, ISO, became the name of the scale. By the way, officially “ISO” is not an acronym, it is a word, pronounced eye-so.

    Long way around, but now there are defined standards for exposure. For a given combination of aperture and shutter speed, the ISO settings on all cameras give the same exposure.

    Why use higher ISO settings

    OK then, in concept, the ISO setting is a volume control for exposure. Turning it up (increasing the ISO value) amplifies the exposure data. But as I mentioned, it is not free. Amplifying the exposure also amplifies the noise in it.

    It is true that low ISO settings produce less noise in the captured image. Modern sensors are much better than early ones. This is one of the wonders of engineering improvements that happen as a technology matures.

    Then, we should not use high ISO settings, right? Well, everything is a tradeoff. We need to use a minimum shutter speed to avoid camera shake when hand holding or to stop subject movement. We need to use a certain aperture to give the depth of field we want. These decisions must be balanced in the exposure triad, often by increasing the ISO.

    Can’t I just underexpose?

    If you accept that we must use the lowest ISO setting, the logical conclusion is that you could massively underexpose the image and “correct” it in post processing. Unfortunately this doesn’t work well. You are still boosting the noise unacceptably.

    The camera manufacturer knows more about its sensors than your image processing software does. The camera’s built-in ISO amplification can take their characteristics into account and do a better job. And modern sensors and electronics do a very good job.

    Are you wrong about ISO?

    If you are following a rule dictating you must or can’t do something, yes you are wrong. There are no rules in art. No ISO-like standards body specifies what your image must look like. There are always groups wanting to do this (are you listening camera clubs?), but they have no authority.

    If you are hand holding a shot, it is better to boost the ISO and use a faster shutter speed than to follow a rule about using low ISO. The noise will be secondary to the reduced shake. Or I sometimes use the lowest ISO setting in my camera to create blur: I enjoy intentional camera movement (ICM) shots and will occasionally force an artificially slow shutter speed.

    If it is night and you want to shoot stars or street scenes, are you not going to do it because you would have to violate a rule by the ISO police?

    Use the ISO setting that lets you express what you want to do. It is your art. There are no rules. Besides, luminance noise looks like film grain. It can be an interesting artistic technique in itself. Do what feels right to you.

    Apology

    I used fairly strong language about this. The reality is that most photography writers have softened their recommendations on ISO. Most of them freely recommend using high ISO. This is healthy.

    But I know many of us were “imprinted” by early mentors who left us feeling there was something dirty about going above 100 ISO. I want to free you if you still have those self-imposed limits. Using even a very high ISO and getting the shot is always better than missing it because you wouldn’t want to chance increased noise.

    Today’s image

    Since I’m advocating it, here is an extreme case that I’m happy with. This was shot hand held with an old Nikon D5500 camera – at ISO 22800. I have corrected out some of the luminance and chrominance noise and I am perfectly OK with what remains. Getting the shot made me happy, even if the noise is high.

  • The Histogram is Just Data

    The Histogram is Just Data

    I don’t mean to be insulting, but I cannot understand why people look at histograms as some magical, mysterious, and intimidating technical artifact. It is not. It is just data about what our sensor is seeing. The histogram is just data, and it is useful. Use it. Do not be afraid of it.

    Trigger

    A newsletter I received today triggered this semi-rant. But looking back, I see it has been over 3 years since I wrote about histograms, so it is probably time to revisit the subject. This actually is a subject I feel some passion for and believe it needs to be better understood by photographers.

    The newsletter author declared that our histograms lie. I realize that click-bait is commonly used to try to get people to read articles, but I still feel it is being somewhat underhanded. Now, in fairness, the newsletter author made some valid points – except for the part about histograms lying.

    What is a histogram?

    We see this graph of some data and maybe it does look complex and mysterious if you are not used to working with data and don’t know where the data comes from. Let’s get over that by understanding how simple but effective it is.

    By convention we pretend our cameras measure light in a range of 0 to 255. There are no units: 0 represents black and 255 is pure white. The convention came from the history of early digital cameras. It is obsolete today, but still used. That is a topic for another day.

    So there are 256 possible values of brightness (0-255). If we go through and count the number of pixels of each value – the number of pixels in the image that have value 0, the number of pixels in the image that have value 1, etc. – and put them on a graph, we have a histogram.
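    Counting the pixels really is all there is to it. A minimal sketch in Python (the “image” here is just a made-up list of brightness values, purely for illustration):

    ```python
    def histogram(pixels, levels=256):
        """Count how many pixels have each brightness value, 0 to levels - 1."""
        counts = [0] * levels
        for p in pixels:
            counts[p] += 1
        return counts

    # A tiny made-up "image": a couple of dark pixels, one midtone, a bright cluster.
    image = [5, 5, 250, 250, 250, 128]
    counts = histogram(image)
    print(counts[5], counts[128], counts[250])  # prints: 2 1 3
    ```

    Plot those 256 counts left to right, dark to bright, and you have exactly the graph the camera shows you.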

    Here is a simple example:

    Again, black is on the left going to white on the right. Even without me showing the actual image, we can see that there is a “bump” of dark values on the left and a larger hill of bright values on the right. In between is a relatively low and even count.

    What can we learn from this? It is a black & white image, because there is no separate data for red, green and blue. There is high contrast because of the hills at the dark and bright ends. It is bright but not overexposed. There are deep blacks, but not enough to have lost important information. So, even without seeing the image, we can tell a lot about it. Is the image exposed “correctly”? Ah, that is the question my rant is based on.

    This is why histograms are useful. They are useful data about our image, giving simple information to help us understand our exposure better.

    Benefit

    Today’s mirrorless cameras bring us the amazing benefit of real-time histograms. We can select to see the histogram live in our viewfinder or on the display on the back.

    What is the benefit? We see an immediate graphical view of the exposure the camera is determining. In the example above, we can see that the light tones are very bright, but not overexposed.

    I routinely use it to watch for “clipping” of brights or darks. If there is a large hump of data jammed up against the left or right edge, that is probably a problem. I will often choose to override the camera’s exposure determination to avoid these peaks.

    Again using the example above and knowing that my camera was in aperture priority mode, we see that it chose 1/750 second as the shutter speed. That works OK in this case, but if I did not agree, I would have easily used the exposure compensation dial to adjust the exposure. I do this a lot.

    So the histogram is a quick and easy way to get a feeling for the “shape” of the exposure.

    They don’t lie

    Now coming to the basis of my rant: histograms do not lie (actually, they do; I will say how later and why it doesn’t matter).

    The newsletter author gave the example of a picture of some fruit on a dark table with a black background. She said the histogram lied because the camera did not give the exposure she wanted. It tried to make the whole image evenly exposed.

    No, the histogram is just a straightforward measurement of the data. If you take your temperature but don’t like the reading you get, it is silly to say the thermometer lied.

    What the author was describing was that she wanted to expose to have the same look as the scene she saw. This was a case of disagreeing with the camera’s matrix metering calculation. It was doing its job of trying to capture all the data that was there and prevent blown out blacks. But she decided to use exposure compensation to force the camera to expose the scene the way she wanted.

    The histogram did not lie. As a matter of fact, she relied on it to set her exposure compensation values. She used the histogram to determine how to override the camera exposure calculation.

    Actually, I would have used the camera’s original exposure determination. I like to have all the data available to work with. This is called exposing to the right. Bringing the brightness down in post processing to the level she wanted is simple, non-destructive, and does not add noise. Capturing the compensated image the way she wanted irreversibly crunches the blacks.

    They lie

    I said they don’t lie, but they do a little. For speed and efficiency the histogram is derived from the jpg preview of the image. Same as the preview shown in the viewfinder or camera back. If you study jpg processing you will see that it alters and discards a lot of information to give a good perceptual result.

    So the histogram is not actually looking directly at the literal RAW data from the sensor. But there is little observable discrepancy. On my camera, I find that it exaggerates the highlight values very slightly. Still, I typically back the exposure off to avoid highlight clipping, so it adds a little conservatism into the process.

    Trust the data you see. It is good enough.

    They’re not the photographer

    The histogram gives you data. It does not determine exposure. People talk about “good” or “bad” histograms. This is a misunderstanding. There are no absolute good or bad ones. What counts is whether you got the exposure you wanted.

    There are valid artistic reasons for shooting what some people would consider bad histograms. If it is what the artist wants, it is correct.

    Histograms give us a reading of the exposure. They do not determine what is right. They give some insight into what the automatic exposure calculation in the camera is trying to do.

    Use it

    The histogram is a brilliantly simple and wonderfully useful tool. We are lucky to have real-time histograms available to us now. It is a game changer. But it is just data. Do not be afraid of it.

    The histogram does not lie. But it does not automatically ensure that the exposure is exactly what you want. You sometimes have to take charge and override the camera settings. When you do, the histogram is there showing you the result of your decisions.

    It is not magical or mysterious. It is a great tool. Use it. A craftsman knows how to use his tools.

  • Stability

    Stability

    Even the most adventurous of us need a certain amount of stability. But I’m not talking about financial or mental or interpersonal stability today. I’m referring to the never ending debate about tripods vs. monopods.

    Why do we need them?

    Many people today value crisp, tack sharp images. To achieve this requires good cameras, good lenses, and good technique. One primary factor of the technique is minimizing camera shake.

    We tend to talk about camera shake as a binary thing: yes/no, on/off, shake/no shake. The reality is that it is a range. It is kind of like focus. If something is considered in focus, we really mean it has an acceptable level of focus. Good enough for our purpose, not an absolute.

    Likewise, for camera shake, we must take the point of view that it must be minimized enough for our need.

    Most people tend to hand hold their cameras. I know I do when I can. It is much faster and easier. Achieving sharp images hand held requires special techniques that we will discuss later.

    But when we know we need maximum sharpness, the standard response is to pull out the tripod, or monopod. It’s a debate.

    Tripods

    Tripods are the three-legged things we are all familiar with. They seem to have been around forever and they tend to be pretty large. The three equally spaced legs provide an optimally stable platform in all directions.

    Tripods used to be made of wood. Classical and lovely to look at, but heavy. Then they moved to generally being made of aluminum or alloys. These were lighter and durable. A problem they had, though, was vibration, or poor damping as it is called when we’re talking about stability. The metal was kind of springy. It would vibrate when perturbed by a force, like bumping it or the mirror of your big DSLR “slapping” up to take a picture. The metal legs would vibrate for many milliseconds before stopping. This caused distortion while the shutter was open, which was probably for those same milliseconds.

    Later, most high end tripods moved to carbon fiber construction. This material has many advantages, but, of course, it is more expensive. The carbon fiber is strong and relatively light. It has much better damping than metal, so vibrations are smaller. And if your hand has ever frozen to a metal tripod in the winter, well, carbon fiber doesn’t do that nearly as badly. For me, that by itself is a reason to switch to it.

    I have an excellent carbon fiber tripod with a great ball head. I use it for some critical images or long exposures.

    Tripod disadvantages

    Good tripods, used correctly, provide excellent stability. But this means you have to have it, there, when you need it. Perhaps this means carrying it miles over rugged trails for that one shot.

    Personally, I don’t do that much anymore. For me, a photo expedition should be a joy. I’m too old to enjoy carrying a heavy tripod a long ways. Sure, there are small versions that strap conveniently to your camera bag, but they have their problems too. Usually the small ones are short and I have to squat down uncomfortably to use them. Or if they are tall enough, they are not stable enough.

    I have a small one that fits nicely in a checked bag for air travel. I often take it. And just as often do not use it.

    And using a tripod slows you down a lot. Deciding where to set it, setting it up, mounting the camera, adjusting and leveling it all take quite a bit of time and effort. Some say this is an advantage, because it makes you spend more time considering what you are shooting and the composition of the shot. I partially agree and have experienced this. But often I am in a flow and shooting instinctively. The tripod absolutely gets in the way of this.

    Monopods

    A monopod is just one leg of a tripod, right? To some people that makes it 1/3 or less as useful as a tripod. Others (including me) would say it can be as useful or more so.

    If you are not familiar with them, try this experiment. Take a broom and hold it upside down with the handle down on the floor. How does it move?

    You will see that it does not move up or down along the vertical axis. It does move fairly freely in a circular arc: left and right, back and forth. Is this minimal amount of stabilization worth it?

    To me it often is. The vertical axis is one of the most vibration prone for me. And being constrained to the monopod seems to add mass or resistance in the other axes. Whether that is real or psychological, it makes my images more stable.

    Monopod advantages

    Yes, it is unfair to only talk about tripod disadvantages and monopod advantages, but I want to make a point. There are often ways to overcome the disadvantages of a monopod.

    I have a great monopod. It extends to about 7 ft. and has a small but very nice ball head with a quick release on it. It is my walking stick. It is light enough, yet strong enough, to serve as a useful and comfortable walking stick. I am far more likely to take it on a hike than a tripod.

    When I want to take an image, it sets up quickly – basically just attaching my camera to the quick release. It provides a decent degree of stability, and there are techniques I can use to improve it if necessary.

    For instance, if a railing is handy, I can brace the foot of the monopod against my shoe and force the leg tight against the railing. I have taken very sharp 30 second exposures this way. Trees work fairly well, too. Lean the mounted camera against a tree for added stability. It can be a great tool.

    Which to choose?

    I probably seem biased, but it is really a pragmatic choice. Tripods are great for stability and are usually the best choice for long exposures.

    Monopods are great when you are out on the trail or otherwise away from your car. If you practice and learn to work with a monopod instead of against it, it can bring many situations up to an acceptable level.

    I own a really good tripod and a really good monopod. The monopod gets used about 10 times more than the tripod. But I hand hold about 10 times more than either of them.

    Parameters of sharpness

    To put things in perspective, today’s high pixel count sensors force us to use very good technique to get the results we want. If the image moves as much as 1 pixel during an exposure, we can often detect a blur.

    Pixel pitches are measured in microns today. A micron is 1 millionth of a meter. The length of a bacterium is about 1-10 microns. A strand of spider silk is 3-8 microns in width.

    How is it ever possible to take a sharp image?
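    To make the scale concrete, here is a small sketch of how tiny a camera rotation is enough to shift the image by one pixel. The pixel pitch (4 microns) and the 100 mm lens are my own assumed numbers, not from the article:

    ```python
    import math

    # Assumed numbers, not from the article: a ~4 micron pixel pitch
    # (typical of modern high-resolution full-frame sensors) and a 100 mm lens.
    pixel_pitch_um = 4.0
    focal_length_mm = 100.0

    # For small angles, rotating the camera by theta radians shifts the image
    # on the sensor by roughly focal_length * theta. Solve for the rotation
    # that moves the image exactly one pixel.
    theta_rad = (pixel_pitch_um / 1000.0) / focal_length_mm  # microns -> mm
    theta_deg = math.degrees(theta_rad)

    print(f"One-pixel shift at {focal_length_mm:.0f} mm: {theta_deg:.4f} degrees")
    # about 0.0023 degrees, or roughly 8 arc seconds
    ```

    At that scale, even a tiny fraction of a degree of wobble during the exposure is already several pixels of blur, which is why bracing and stabilization matter so much.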

    Alternatives

    Several things can work for us. The amazing technology in our cameras provides things like in-body stabilization to minimize the effect of camera shake. It usually improves things a lot.

    And the electronics are improving to allow higher and higher ISO settings to be used. I consider ISO 400 to be my normal setting rather than the camera’s native ISO 64. That automatically gives me roughly a 2 2/3 stop speed advantage.

    The old “rule” when I was shooting film was that, if you were good, you could hand hold at a shutter speed of 1 over twice the focal length. That doesn’t work anymore with our high density sensors. Now people say the shutter speed should be 1 over 3-4 times the focal length. But with better electronics allowing higher ISO with good results, that is often achievable.
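    The two bits of arithmetic above can be sketched in a few lines. The ISO values (64 and 400) and the 3-4x focal-length factor come from the text; the 100 mm example lens is my own assumption:

    ```python
    import math

    # ISO 64 -> ISO 400: how many stops of extra speed does that buy?
    native_iso, working_iso = 64, 400
    stop_gain = math.log2(working_iso / native_iso)
    print(f"ISO {native_iso} -> ISO {working_iso}: {stop_gain:.2f} stops faster")  # ~2.64 stops

    def min_handheld_shutter_denom(focal_length_mm, factor=4):
        """Rule-of-thumb minimum handheld shutter speed, returned as the
        denominator x of 1/x seconds: x = factor * focal length."""
        return factor * focal_length_mm

    # A hypothetical 100 mm lens on a dense sensor, using the stricter 4x factor:
    print(f"Shoot at 1/{min_handheld_shutter_denom(100)} s or faster")  # 1/400 s
    ```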

    Don’t overlook simple but obvious tricks. There may be something to brace the camera on or against. I often lean against a tree or a pole or a wall for added stability. Or I have put the camera on a rock or a bench and used a self timer to trigger the shot. Simple things like these can let you take amazing images in unlikely places.

    And there are simple techniques you can adopt to increase the stability of hand held shots: bracing the camera against your forehead, holding your elbows tight against your body, and exhaling slowly as you gently press the shutter. A video on rifle shooting technique could be helpful. Shooters have studied this problem for a long time.

    So tripod or monopod? I lean toward the monopod. But it is not necessarily an either/or choice. There are many creative ways to stabilize your camera. Unless you’re out in the desert with nothing around, there is usually something that can be used.

    Today’s image

    This was shot in an airport (obviously) with no tripod or monopod. I couldn’t set the camera on the table in the restaurant where we were eating because there was a joint in the glass that was in the way. I put my camera on my camera bag on the floor. This is a composite of several 4 second shots.

    I used a 2 second timer to allow me to get my hand away and not shake the fragile setup. I could have used the camera app on my phone to trigger it, but that is always so tricky and slow to set up that I seldom do it.