One final topic is key to understanding digital capture in general, not just digital raw. Digital sensors, whether CCD or CMOS, respond to light quite differently from either the human eye or film. Most human perception, including vision, is nonlinear.
If we place a golf ball in the palm of our hand, then add another one, it doesn’t feel twice as heavy. If we put two spoonfuls of sugar in our coffee instead of one, it doesn’t taste twice as sweet. If we double the acoustic power going to our stereo speakers, the resulting sound isn’t twice as loud. And if we double the number of photons reaching our eyes, we don’t see the scene as twice as bright—brighter, yes, but not twice as bright.
This built-in compression lets us function in a wide range of situations without driving our sensory mechanisms into overload—we can go from subdued room lighting to full daylight without our eyeballs catching fire! But the sensors in digital cameras lack the compressive nonlinearity typical of human perception. They simply count photons in a linear fashion. If a camera uses 12 bits to encode the capture, producing 4,096 levels, then level 2,048 represents half the number of photons recorded at level 4,096. This is the meaning of linear capture: the levels correspond exactly to the number of photons captured. So if it takes 4,096 photons to make the camera record level 4,096, it takes 3,248 photons to make the same camera record level 3,248 and 10 photons to make it register level 10.
Linear capture has important implications for exposure. When a camera captures six stops of dynamic range (which is fairly typical of today’s digital SLRs), half of the 4,096 levels are devoted to the brightest stop, half of the remainder (1,024 levels) are devoted to the next stop, half of the remainder (512 levels) are devoted to the next stop, and so on. The darkest stop, the extreme shadows, is represented by only 64 levels. See Figure 1-3.
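The halving described above is easy to sketch in Python (assuming, as the text does, a 12-bit capture spanning six stops):

```python
# Assumed: a 12-bit capture (4,096 levels) spanning six stops.
# Under linear encoding, each stop down gets half the levels of the one above.
TOTAL_LEVELS = 4096
STOPS = 6

levels_per_stop = []
remaining = TOTAL_LEVELS
for _ in range(STOPS):
    remaining //= 2
    levels_per_stop.append(remaining)

print(levels_per_stop)  # [2048, 1024, 512, 256, 128, 64]
```

The brightest stop claims 2,048 of the 4,096 levels, and the darkest of the six gets just 64, which is the whole problem linear capture poses for shadow detail.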
Figure 1-3 Linear capture.
We see light very differently. Human vision can’t be modeled accurately using a gamma curve, but gamma curves are so easy to implement, and come sufficiently close, that the working spaces we use to edit images almost invariably use a gamma encoding of somewhere between 1.8 and 2.2. Figure 1-4 shows approximately how we see the same six stops running from black to white.
Figure 1-4 Gamma-encoded gradient.
One of the major tasks raw converters perform is converting the linear capture to a gamma-encoded space so that the captured levels more closely match the way human eyes see them. In practice, though, the tone mapping involves considerably more than applying a gamma correction: when we edit raw images, we typically move the endpoints, adjust the midtone, and tweak the contrast, so the resulting curve is far more complex than a simple gamma formula can represent. If we want our images to survive this tone mapping without falling apart, good exposure is critical.
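As a rough sketch only, assuming a plain power-function gamma of 2.2 (which, as noted, is simpler than the tone mapping any real converter applies), the basic conversion looks like this:

```python
# Rough sketch: a plain power-function gamma encoding, not the more
# complex tone mapping a real raw converter actually applies.
GAMMA = 2.2
MAX_LEVEL = 4095  # 12-bit data: levels 0-4095

def linear_to_gamma(level, gamma=GAMMA, max_level=MAX_LEVEL):
    """Map a linear capture level to a gamma-encoded level."""
    return round((level / max_level) ** (1.0 / gamma) * max_level)

# A linear midtone at 50% of the range lands near 73% after encoding,
# which is how gamma encoding opens up the midtones and shadows.
print(linear_to_gamma(2048))  # 2989
```

Note how the encoding pushes dark linear values up toward the middle of the range, redistributing levels closer to the way we actually perceive the six stops.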
Correct exposure is at least as important with digital capture as it is with film, but in the digital realm it means keeping the highlights as close as possible to blowing out without actually doing so. If you fall prey to the temptation to underexpose images to protect the highlights, you'll waste many of the bits the camera can capture, and you'll run a significant risk of introducing noise into the midtones and shadows. If you overexpose, you may blow out the highlights, but one of the great things about the Camera Raw plug-in is its ability to recover highlight detail (see the sidebar, "How Much Highlight Detail Can I Recover?" in Chapter 2, How Camera Raw Works), so if you're going to err on one side or the other, it's better to err on the side of slight overexposure.
Figure 1-5 shows what happens to the levels in the simple process of conversion from a linear capture to a gamma-corrected space. These illustrations use 8 bits per channel to make the difference very obvious, so the story they tell is somewhat worse than the actual behavior of a 10-bit, 12-bit, or 14-bit per channel capture, but the principle remains the same.
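The 8-bit case can be sketched numerically. Assuming 8-bit linear data, a six-stop range, and the same plain gamma-2.2 encoding as a stand-in for real tone mapping, the darkest stop's few levels get stretched far apart:

```python
# Assumed for illustration: 8-bit linear data (levels 0-255), a six-stop
# range, and a plain gamma-2.2 encoding (real converters do more).
GAMMA = 2.2
MAX_LEVEL = 255

def encode(level):
    """Plain gamma encoding of one 8-bit linear level."""
    return round((level / MAX_LEVEL) ** (1.0 / GAMMA) * MAX_LEVEL)

# In 8-bit linear, the darkest of the six stops gets only 4 levels (4-7).
# After encoding, those 4 values are stretched across a dozen output
# levels, leaving gaps that can show up as shadow posterization.
darkest = sorted({encode(v) for v in range(4, 8)})
print(darkest)  # [39, 43, 46, 50]
```

Four linear input levels still yield only four distinct tones after encoding, but they now span a much wider output range, which is why stretched shadow data falls apart so readily at low bit depths.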
Figure 1-5 Exposure and tone mapping.
Note that the on-camera histogram shows the histogram of the conversion to JPEG: a raw histogram would be a strange-looking beast, with all the data clumped at the shadow end, so cameras show the histogram of the image after processing using the camera’s default settings. Most cameras apply an S-curve to the raw data to give the JPEGs a more film-like response, so the on-camera histogram often tells you that your highlights are blown when in fact they aren’t. Also, the response of a camera set to ISO 100 may be more like ISO 125 or ISO 150 (or, for that matter, ISO 75). It’s worth spending some time determining your camera’s real sensitivity at different speeds, then dialing in an appropriate exposure compensation to ensure that you’re making the best use of the available bits.
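That last point can be quantified: the exposure compensation to dial in is the base-2 logarithm of the ratio between the camera's measured sensitivity and its marked ISO. A minimal sketch (the function name is illustrative; the ISO ratios are the ones mentioned above):

```python
import math

# Sketch: if testing shows your camera's "ISO 100" behaves like some other
# sensitivity, the compensation in stops is log2(actual / nominal).
# Positive means the camera is more sensitive than marked, so reduce exposure.
def iso_compensation_stops(actual_iso, nominal_iso):
    """Exposure-compensation offset, in stops, from measured sensitivity."""
    return math.log2(actual_iso / nominal_iso)

print(round(iso_compensation_stops(125, 100), 2))  # 0.32
print(round(iso_compensation_stops(75, 100), 2))   # -0.42
```

So a camera that really behaves like ISO 125 when set to ISO 100 is about a third of a stop more sensitive than marked, a difference well worth compensating for if you're exposing right up against the highlights.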