Gamma and Tone Mapping
To understand the key difference between shooting film and shooting digital, you need to get your head around the concept of gamma encoding. As we explained in Chapter 1, digital cameras respond to photons quite differently from either film or our eyes. The sensors in digital cameras simply count photons and assign a tonal value in direct proportion to the number of photons detected—they respond linearly to incoming light.
Human eyes, however, do not respond linearly to light. Our eyes are much more sensitive to small differences in brightness at low levels than at high ones. Film has traditionally been designed to respond to light approximately the way our eyes do, but digital sensors simply don’t work that way.
Gamma encoding is a method of relating the numbers in a digital raw image to the perceived brightness they represent. A camera sensor's response is described by a gamma of 1.0: it responds linearly to the incoming photons. But this means that the captured values don't correspond to the way humans see light. The relationship between the number of photons that hit our retinas and the perception of light we experience in response is approximated by a gamma of somewhere between 2.0 and 3.0, depending on viewing conditions. Figure 2-3 shows the approximate difference between what the camera sees and what we see; Figure 2-4 is a real-world image showing a linear capture and the curve that must be applied to make it appear "normal."
Figure 2-3 Digital capture and human response.
Figure 2-4 The image on the left was processed in Camera Raw at linear settings, with Brightness & Contrast set to zero and the Point Curve option set to Linear. In the image on the right, we remapped the linear image's tones by adding a Levels adjustment in Photoshop; simulating the results of gamma encoding required a steep curve.
We promised that we’d keep this chapter equation-free—if you want more information about the equations that define gamma encoding, a Google search on “gamma encoding” will likely turn up more than you ever wanted to know—so we’ll simply cut to the chase and point out the practical implications of the linear nature of digital capture.
Digital captures devote a large number of bits to describing differences in highlight intensity to which our eyes are relatively insensitive, and a relatively small number of bits to describing differences in shadow intensity to which our eyes are very sensitive. As you’re about to learn, all our image-editing operations have the unfortunate side effect of reducing the number of bits in the image. This is true for all digital images—whether scanned from film, rendered synthetically, or captured with a digital camera—but it has specific implications for digital capture.
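The lopsided allocation of bits is easy to see with a little arithmetic. Each f-stop halves the amount of light, so in a hypothetical 12-bit linear capture (4,096 levels) the brightest stop alone occupies half of all the levels, and each successively darker stop gets half of what remains:

```python
# Illustrative only: a 12-bit linear capture is assumed; real cameras
# vary in bit depth, but the halving pattern is the same at any depth.
BITS = 12
levels = 2 ** BITS  # 4,096 levels in total

top = levels
for stop in range(1, 7):
    bottom = top // 2
    # Each stop down spans half the levels of the stop above it.
    print(f"Stop {stop}: levels {bottom}..{top - 1} ({top - bottom} levels)")
    top = bottom
```

The brightest stop gets 2,048 levels, while by the sixth stop down, the entire stop's worth of tonal information must squeeze into 64 levels, exactly where our eyes are most discriminating.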
With digital captures, darkening is a much safer operation than lightening, since darkening forces more bits into the shadows, where our eyes are sensitive, while lightening takes the relatively small number of captured bits that describe the shadow information and spreads them across a wider tonal range, exaggerating noise and increasing the likelihood of posterization. With digital, you need to turn the old rule upside down: expose for the highlights and develop for the shadows!
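The danger of lightening can be sketched numerically. In this hypothetical example (an 8-bit image, a simple multiply in place of a real editing tool), a block of shadow tones holds only 16 distinct levels; stretching them across a wider tonal range produces no new levels, only gaps between the existing ones, which is what we perceive as posterization:

```python
def lighten(levels, factor):
    """Multiply each level by `factor`, clipping to the 8-bit range.

    A stand-in for a lightening edit: it widens the tonal range the
    values span without creating any new intermediate values.
    """
    return sorted({min(255, round(v * factor)) for v in levels})

shadows = list(range(16))           # 16 distinct shadow levels, 0..15
stretched = lighten(shadows, 8)     # spread across 0..120

# Still only 16 distinct levels, now separated by gaps of 8.
print(len(stretched), "levels spanning values 0 to", stretched[-1])
```

Darkening runs the same arithmetic in the opposite direction: many highlight levels are squeezed into fewer slots, and since the highlights were oversupplied with levels to begin with, the loss is far less visible.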