Noise, Dynamic Range and Bit Depth in Digital SLRs


by Emil Martinec © 2008

last update: May 22, 2008

Digital imaging records visual information via a sensor placed at the focal plane of a camera's optics to measure the light gathered during an exposure. The sensor is constructed as an array of pixels, each of which is tasked to gather the light arriving within a small patch of sensor area. The efficiency with which the sensor and its pixels gather light, and the accuracy to which it determines the amount gathered by each pixel, are crucial for the quality of the recorded image. The incoming light is the signal the photographer wishes the camera to transcribe faithfully; inaccuracies in the recording process constitute noise, and distort the scene being photographed. In order to extract the best performance from digital imaging, it is helpful to have an understanding of the various contributions to image noise, how various design choices in digital cameras affect this noise, how choices in photographic exposure can help mitigate noise, and how to ameliorate the visual effect of noise post-capture.

This article addresses the following topics:

  1. Sources of noise in digital imaging (this page)
    1. Photon shot noise
    2. Read noise
    3. Pattern noise
    4. Thermal noise
    5. Pixel response non-uniformity
    6. Quantization error
  2. Measuring noise
    1. Read noise and shot noise
    2. S/N ratio vs. exposure, and Dynamic Range
    3. Read noise vs. ISO
  3. Some consequences of noise
    1. Noise, Dynamic Range, and Bit Depth
    2. S/N and exposure decisions
    3. BIG PIXELS vs. small pixels
    4. Supplement: Sensor DR vs. Camera DR
  4. Aspects of noise reduction on raw data

When data on noise is presented in this article, it will be based on analysis of raw data. It is important to measure the raw data directly, without converting it to an output image; the raw conversion process not only makes non-linear transformations of the raw values, it mixes up the data from nearby pixels and makes it hard to know how much of the result is due to the original raw image data, versus how much is attributable to the processing of the raw converter. Software tools used in the analysis here are dcraw, a freeware program allowing raw data to be read and output to a data file without further processing; IRIS, a freeware program with many analysis tools used in astrophotography; ImageJ, another freeware image analysis program maintained by the US National Institutes of Health; and Mathematica, a general purpose mathematical analysis program.


For the technically oriented, mathematical derivations of quantitative results will be presented; for those who wish to skip these technical interludes, each such discussion ends with a Bottom line summarizing the qualitative implications of the result.


Sources of Noise

There are several characteristic sources of digital image noise:

  1. Photon shot noise: Light is made up of discrete bundles of energy called photons -- the more intense the light, the greater the number of photons per second arriving at the sensor. The stream of photons arriving at a given area of the sensor has an average flux (number per second), but there are also fluctuations around that average. The statistical laws which govern these fluctuations are called Poisson statistics and are rather universal, encountered in diverse circumstances. For details and a variety of examples, see this Wikipedia article. The fluctuations in photon counts are visible in images as noise -- Poisson noise, also called photon shot noise; an example is shown in Fig. 1. The term "shot noise" arises from an analogy between the discrete photons that make up a stream of light and the tiny pellets that compose the stream of buckshot fired from a shotgun (in particular, "shot" does not mean "photographic image").


    Fig.1 - Photon shot noise in an image of the sky from a Canon 1D3 (in the green channel). In the histogram at right, the horizontal coordinate is the raw level (raw units are sometimes called analog-to-digital units ADU, or data numbers DN), and the vertical axis plots the number of pixels in the sample having that raw level. The photon noise was isolated by taking the difference of two successive images; the raw values for any one pixel then differ only by the fluctuations in the photon count due to Poisson statistics (apart from a much smaller contribution from read noise).

    The noise histogram is essentially a Gaussian distribution -- a standard "bell curve". The standard deviation of this distribution measures the width of the bell. Fluctuations in photon counts from pixel to pixel are completely uncorrelated; in terms of its spatial variation, photon shot noise is white noise (in other words, it has equal strength at all spatial frequencies); this means that it has a uniform salt-and-pepper appearance with no structure or pattern.

    An important characteristic of fluctuations obeying Poisson statistics is that their standard deviation -- the typical fluctuation of the count away from its average -- is equal to the square root of the average count itself. That is, if 10000 photons are collected on average, the typical fluctuation away from this average will be about 100 -- the counts will typically range from about 9900 to 10100. If instead on average 100 photons are collected, the variation from count to count will be +/- 10. Thus, as the signal grows, the photon shot noise also grows, but more slowly; the signal-to-noise ratio increases as the square root of the number of photons collected. The higher the illumination, the less apparent the shot noise; the lower the illumination, the more apparent it is. (A short numerical sketch of this square-root behavior is given at the end of this list.)

  2. Sensor read noise: Photons collected by the sensels (the photosensitive part of a pixel) stimulate the emission of electrons, one for each captured photon. After the exposure, the accumulated photo-electrons are converted to a voltage in proportion to their number; this voltage is then amplified by an amount proportional to the ISO gain set in the camera, and digitized in an analog-to-digital converter (ADC). The digital numbers representing the photon counts for all the pixels constitute the RAW data for the image (raw units are sometimes called analog-to-digital units ADU, or data numbers DN). We'll call this digital number the raw value of the pixel.

    In an ideal world, the digital number recorded in the raw data would be directly proportional to the photon count. The constant of proportionality between the number of photons and the corresponding raw value is conventionally and somewhat inaptly called the gain of the sensor (it is not really a gain; rather, it is a conversion factor between photo-electron counts and raw values). Since each doubling of ISO doubles the raw value for a given exposure, the gain is inversely proportional to ISO -- doubling the ISO cuts the gain in half, because the same number of captured photons is converted to a raw value twice as big when the ISO is doubled. (A small worked example of this relation between gain and ISO is given at the end of this list.)

    In the real world, the raw level does not precisely reflect the photon count. Each electronic circuit component in the signal processing chain -- from sensel readout, to ISO gain, to digitization -- suffers voltage fluctuations that contribute to a deviation of the raw value from the ideal value proportional to the photon count. The fluctuations in the raw value due to the signal processing electronics constitute the read noise of the sensor.


    Fig.2 - Read noise of a 1D3 at ISO 800. The histogram of the noise is approximately Gaussian.
    The average value of 1024 is due to an offset Canon applies to raw data (see page 2).

    The read noise can be isolated by taking a "black frame" image, an exposure with the lens cap on and the highest available shutter speed; there are thus no photons captured, and only the electronic noise from reading the sensor remains.

  3. Pattern noise: In terms of its spatial variation, read noise is not quite white. Upon closer inspection, there are one-dimensional patterns in the fluctuations in Figure 2. Because the human eye is adapted to perceive patterns, this pattern or banding noise can be visually more apparent than white noise, even if it comprises a smaller contribution to the overall noise. Pattern noise is more problematic in older models such as the Canon 20D; see Figure 3.

    Fig.3 - Pattern noise in a 20D at ISO 800 (1000x800 pixel crop). Fixed pattern noise can be removed: by making a template from the average of 16 identical blackframes and subtracting it from the image, most of the fixed pattern noise is removed. The result may be seen by mousing over the image. The residual variable component of pattern noise consists in this example largely of horizontal banding noise.

    Pattern noise can have both a fixed component that does not vary from image to image, and a variable component that, while not random from pixel to pixel, is not the same from image to image. Figure 3 shows the read noise of a 20D at ISO 800 (a 1/1000sec exposure with the lens cap on, with a levels adjustment made for visualization purposes). There is both horizontal and vertical banding present. A template for fixed pattern noise was created by averaging sixteen identical blackframes; this has the effect of averaging out any variable, fluctuating component of read noise, leaving mostly the fixed component. The effect of subtracting the fixed pattern noise template from the read noise may be seen by mousing over the image in Figure 3. Vertical banding largely disappears in this example, leaving some variable banding noise in the horizontal direction, as well as a homogeneous (white) component of read noise; the width of the read noise histogram decreases by about 20%. This gives an indication of how visually disruptive pattern noise can be -- even though the fixed pattern noise is only about 20% of the overall noise, it is quite apparent because our perception is adapted to picking out patterns, finding edges, etc. (A sketch of this template-averaging and subtraction procedure is given at the end of this list.)

  4. Thermal noise: Thermal agitation within a sensel can liberate a few electrons; these thermal electrons are indistinguishable from the electrons freed by photon (light) absorption, and thus distort the photon count represented by the raw data. Thermal electrons are freed at a relatively constant rate per unit time, so thermal noise increases with exposure time. Figure 4 plots the raw value standard deviation of a 20D blackframe (in a 1000x1000 crop from the lower left corner) as a function of exposure time.


    Fig.4 - Thermal noise in 20D blackframes at ISO 400. The knee in the data at exposure time 15sec is due to the max pixel raw level reaching 4095 (the maximum possible value on this camera), indicating that the rise in standard deviation is largely due to a few outliers in the distribution.

    The noise level rises as a function of exposure time, with the slope decreasing above an exposure time of about 15sec in this particular sample. The cause of this change in behavior is that the rise in the standard deviation is largely due to a few "hot" pixels out on the tail of the distribution; when the raw values of these pixels reach 4095 (the maximum possible value for this camera), their contribution to the standard deviation saturates, and only the other pixels contribute to a (slower) rise in the standard deviation. In crops from the same image used to generate the above plot, but chosen to avoid such "hot" pixels, the standard deviation rises much more slowly.

    To show the growth in the number of these outlying "hot" pixels, Figure 5 plots in white the pixels whose values are four or more standard deviations above the mean (in a quadrant of the crop used to generate Figure 4), for two different exposure times. (A sketch of this outlier-flagging criterion is given at the end of this list.)


    Fig.5 - Hot pixels are the main contributors to thermal noise. Shown in white are the pixels whose values are four or more standard deviations above the mean. On the left, exposure time is 15sec; on the right, 30sec.

    Another thermal contribution to image degradation is amplifier glow, which is caused by infrared radiation (heat) emitted by the readout amplifier; in long exposures it shows up as a localized brightening near the amplifier, visible in the lower right of Figure 6.


    Fig.6 - Amplifier glow (lower right) in a 612 sec exposure of a Canon 20D.

    For exposures of less than a second or so, read noise is relatively constant and thermal noise constitutes a negligible contribution to overall image noise. Thermal noise will not be considered further in our discussion.

  5. Pixel response non-uniformity (PRNU): Not all pixels in a sensor have exactly the same efficiency in capturing and counting photons; even if there were no read noise, photon noise, etc., there would still be variation in the raw counts from this non-uniformity in pixel response, or PRNU. Figure 7 plots PRNU "noise" as a function of exposure. The graph was obtained by measuring separately the combined photon shot noise and read noise, and appropriately subtracting these from the total noise in a series of exposures; this residual noise is what is plotted in Figure 7. A more complete explanation of the measurement of PRNU may be found on page 2.

    Fig.7 - Noise due to pixel response non-uniformity of a Canon 20D at ISO 100, as a function of raw value. Mouse over to compare to the magnitude of combined read noise and photon shot noise. Fluctuations in the response from pixel to pixel are about 0.6%.

    PRNU "noise" grows in proportion to the exposure level -- different pixels record differing percentages of the photons incident upon them, and so the contribution to the standard deviation of raw values from PRNU rises in direct proportion to the exposure level. On the other hand, photon shot noise grows as the square root of exposure, and read noise is independent of exposure level. Thus PRNU is most important at the highest exposure levels, as one sees from the mouseover in Figure 7. For instance, PRNU equals the combined shot and read noise at a raw value of about 2500 on my 20D (roughly the top 2/3 EV of highlights) at ISO 100. At lower exposure levels, photon noise is the dominant contribution until one gets into deep shadows, where read noise becomes important. (A sketch of this noise model, and of how PRNU is isolated by subtracting the other noise sources in quadrature, is given at the end of this list.)

  6. Quantization error: When the analog voltage signal from the sensor is digitized into a raw value, it is rounded to a nearby integer value. Due to this rounding off, the raw value mis-states the actual signal by a slight amount; the error introduced by the digitization is called quantization error, and is sometimes referred to as quantization noise. In practice, this is a rather minor contribution to the noise. Figure 8 shows the result of quantization on the noise histogram.


    Fig. 8 - The error introduced by quantization of a noisy signal is rather small. On the left, noise of width eight levels; on the right, the quantization step is increased to eight levels, but the width of the histogram increases by less than 10%.

    Noise of width 8 levels on the 0-255 scale was added to a uniform gray background; then the bit depth was truncated by squeezing the levels by a factor of eight and re-expanding by a factor of eight. The left side of Figure 8 shows the image and its histogram before increasing the quantization step; on the right, the quantization step is made equal to the original width of the noise profile, and the standard deviation of the histogram rises by less than ten percent. More generally, in a typical image the roundoff error varies from pixel to pixel, taking all values between plus and minus half the quantization step. Averaged over a uniformly distributed set of input values, the root-mean-square quantization error is about 0.29 of the quantization step (1/sqrt(12) of the step). Thus, quantization error is negligible in digital imaging provided the noise exceeds the quantization step. (A sketch reproducing this quantization experiment is given below.)
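
The short numerical sketches that follow illustrate, with synthetic data, several of the points made in the list above. They are written in Python using the numpy library; every specific number in them (gains, noise levels, image sizes) is an invented, illustrative value, not a measurement from any camera. First, the square-root behavior of photon shot noise from item 1: simulated Poisson counts have a standard deviation equal to the square root of their mean, so the signal-to-noise ratio grows as the square root of the count.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated photon counts at two illumination levels; the mean counts
    # (10000 and 100) are the illustrative numbers used in the text.
    for mean_photons in (10000, 100):
        counts = rng.poisson(mean_photons, size=1_000_000)
        noise = counts.std()
        print(f"mean {mean_photons:6d}: measured noise {noise:7.1f}, "
              f"sqrt(mean) {mean_photons**0.5:7.1f}, S/N {mean_photons/noise:7.1f}")
    # The measured noise matches sqrt(mean), so the signal-to-noise ratio
    # grows as the square root of the photon count.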
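
Next, the relation between gain and ISO described in item 2. The base gain of 4 electrons per raw unit at ISO 100 is a hypothetical number chosen only for illustration; the point is that halving the gain at each doubling of ISO doubles the raw value produced by a fixed number of photo-electrons.

    # Hypothetical base gain at ISO 100, in electrons per raw unit (ADU).
    BASE_GAIN_E_PER_ADU = 4.0

    def gain(iso):
        """Gain in electrons/ADU; it halves each time the ISO doubles."""
        return BASE_GAIN_E_PER_ADU * 100.0 / iso

    def raw_value(n_electrons, iso):
        """Ideal raw value for a photo-electron count (no read noise, no clipping)."""
        return n_electrons / gain(iso)

    print(raw_value(8000, 100))   # 2000.0
    print(raw_value(8000, 200))   # 4000.0 -- same exposure, double the ISO, double the raw value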
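
Item 3 described removing fixed pattern noise by averaging sixteen blackframes into a template and subtracting it. The sketch below fabricates synthetic blackframes (a fixed vertical banding pattern plus random read noise, with made-up amplitudes) so that it runs on its own; with real data the frames would instead be loaded from raw files decoded by a tool such as dcraw. The noise width drops by an amount comparable to the roughly 20% quoted above.

    import numpy as np

    rng = np.random.default_rng(1)
    height, width = 400, 600

    # Synthetic ingredients (amplitudes are invented): a fixed vertical banding
    # pattern, a raw-level offset of 1024, and random read noise of width 8.
    fixed_pattern = rng.normal(0.0, 6.0, size=width)   # one value per column

    def synthetic_blackframe():
        return 1024.0 + fixed_pattern + rng.normal(0.0, 8.0, size=(height, width))

    # Average 16 blackframes into a template; the random component averages away,
    # leaving mostly the fixed pattern (plus the offset).
    template = np.mean([synthetic_blackframe() for _ in range(16)], axis=0)

    # Subtract the pattern (keeping the mean level) from a fresh blackframe.
    frame = synthetic_blackframe()
    corrected = frame - (template - template.mean())

    print("noise before pattern subtraction:", frame.std())      # ~ sqrt(6^2 + 8^2) = 10
    print("noise after  pattern subtraction:", corrected.std())  # ~ sqrt(8^2 + 2^2) = 8.2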
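
Item 4 flagged "hot" pixels as those lying four or more standard deviations above the mean of a blackframe. The sketch below applies that criterion to a synthetic blackframe seeded with a handful of artificial hot pixels, and shows how strongly a few outliers can inflate the standard deviation.

    import numpy as np

    def hot_pixel_mask(frame, n_sigma=4.0):
        """Pixels lying n_sigma or more standard deviations above the frame mean."""
        return frame >= frame.mean() + n_sigma * frame.std()

    # Synthetic blackframe: a Gaussian noise floor plus 50 artificial 'hot' pixels.
    rng = np.random.default_rng(2)
    frame = rng.normal(1024.0, 8.0, size=(500, 500))
    rows = rng.integers(0, 500, size=50)
    cols = rng.integers(0, 500, size=50)
    frame[rows, cols] += rng.uniform(100.0, 3000.0, size=50)

    mask = hot_pixel_mask(frame)
    print("hot pixels flagged:", int(mask.sum()))
    print("standard deviation with hot pixels   :", frame.std())
    print("standard deviation without hot pixels:", frame[~mask].std())
    # A handful of outliers dominates the standard deviation, as in Figure 4.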
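
Item 5 isolated PRNU by subtracting the combined read and shot noise, in quadrature, from the total measured noise. The sketch below encodes that noise model in raw units; the gain, read noise and PRNU fraction are hypothetical values, chosen only so that PRNU and the combined read plus shot noise cross over near raw value 2500, roughly as in the 20D example quoted above.

    import numpy as np

    # Hypothetical parameters, in raw units (ADU), chosen for illustration only.
    GAIN_E_PER_ADU = 12.0    # electrons per raw unit at ISO 100
    READ_NOISE_ADU = 2.5     # read noise
    PRNU_FRACTION  = 0.006   # ~0.6% pixel-to-pixel response variation, as quoted above

    def noise_components(signal_adu):
        shot = np.sqrt(signal_adu / GAIN_E_PER_ADU)   # photon shot noise: grows as sqrt(signal)
        prnu = PRNU_FRACTION * signal_adu             # PRNU: grows in proportion to signal
        read = READ_NOISE_ADU                         # independent of signal
        total = np.sqrt(read**2 + shot**2 + prnu**2)  # independent sources add in quadrature
        return read, shot, prnu, total

    for s in (50, 500, 2500, 4000):
        read, shot, prnu, total = noise_components(s)
        print(f"signal {s:4d} ADU:  read {read:4.1f}  shot {shot:5.1f}  PRNU {prnu:5.1f}  total {total:5.1f}")

    # Isolating PRNU as described above: subtract the combined read and shot noise
    # from the measured total noise, in quadrature.
    s = 2500
    _, shot, prnu_true, total = noise_components(s)
    prnu_recovered = np.sqrt(total**2 - READ_NOISE_ADU**2 - shot**2)
    print("recovered PRNU noise:", prnu_recovered, " true value:", prnu_true)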
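
Finally, the quantization experiment of Figure 8: Gaussian noise of standard deviation 8 levels is added to a uniform gray, the quantization step is then coarsened to 8 levels, and the width of the histogram grows by only a few percent, consistent with adding a quantization error of rms step/sqrt(12) in quadrature.

    import numpy as np

    rng = np.random.default_rng(3)

    # Uniform gray at level 128 with Gaussian noise of standard deviation 8 levels.
    image = 128.0 + rng.normal(0.0, 8.0, size=1_000_000)

    step = 8.0
    quantized = np.round(image / step) * step   # coarsen the quantization step to 8 levels

    print("noise before coarse quantization:", image.std())
    print("noise after  coarse quantization:", quantized.std())
    print("quadrature estimate sqrt(8**2 + 8**2/12):", np.sqrt(8.0**2 + step**2 / 12.0))
    # The histogram width grows by only a few percent: quantization error is
    # negligible as long as the noise exceeds the quantization step.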
