Using 8-bit Versus 10-bit Video
While both 8-bit and 10-bit uncompressed video can deliver excellent broadcast quality, 10-bit offers significantly higher precision and is preferable in many situations.
Because 10-bit video has four times as many quantization levels as 8-bit (1024 versus 256), its signal-to-noise ratio is about 12 dB higher than that of 8-bit video.
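As a rough check of that figure, here is a minimal sketch assuming only the common ~6.02 dB-per-bit rule of thumb for quantization signal-to-noise ratio:

# A minimal sketch, assuming the ~6.02 dB-per-bit rule of thumb
# for quantization signal-to-noise ratio.
def snr_gain_db(bits_from: int, bits_to: int) -> float:
    """Approximate SNR improvement when moving between bit depths."""
    return 6.02 * (bits_to - bits_from)

print(2 ** 8)                      # 256 levels in 8-bit
print(2 ** 10)                     # 1024 levels in 10-bit
print(snr_gain_db(8, 10))          # ~12.04 dB gain from 8-bit to 10-bit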
Visually, the difference between 8-bit and 10-bit video can be substantial. In 8-bit video, “contour lines” or “striations” may be visible, particularly in scenes with soft gradients such as a ramp or a sunset. For example, if a sky region is mostly one color but varies by only a few code values from one side of the picture to the other, you may see contour lines where the signal steps from one digital value to the next. Because each quantization step in a 10-bit system is only one quarter the size of an 8-bit step, these contours become invisible and the sky varies smoothly.
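To illustrate why the contours appear, the sketch below quantizes the same soft gradient to 8 and 10 bits and counts how many distinct steps span it; the specific gradient range and frame width are illustrative assumptions, not measurements.

import numpy as np

# Hypothetical sky gradient spanning only 5% of full luminance range
# across a 1920-pixel-wide frame (an illustrative assumption).
gradient = np.linspace(0.40, 0.45, 1920)

codes_8 = np.round(gradient * 255)     # quantize to 8-bit code values
codes_10 = np.round(gradient * 1023)   # quantize to 10-bit code values

print(len(np.unique(codes_8)))    # roughly 13 distinct steps -> visible bands
print(len(np.unique(codes_10)))   # roughly 52 distinct steps -> smooth ramp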
10-bit video is often used when the source and output video (or “master”) are also 10-bit. Even if the input and/or output video is 8-bit, a 10-bit “project” will still maintain higher quality when a significant amount of effects rendering is involved.
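The benefit comes from the extra precision retained between render passes. The sketch below is a simplified model, not how any particular application renders: it applies a hypothetical stack of gain effects while re-quantizing to 8-bit or 10-bit codes after each pass and compares the accumulated rounding error.

def render_chain(value, bits, gains):
    """Apply successive gain effects, re-quantizing to the working
    bit depth after each pass (a simplified model of rendering)."""
    scale = 2 ** bits - 1
    code = round(value * scale)
    for g in gains:
        code = round(code * g)
    return code / scale

gains = [0.7, 1.3, 0.9, 1.2]          # hypothetical stack of effects
exact = 0.5
for g in gains:
    exact *= g                        # result with unlimited precision

for bits in (8, 10):
    result = render_chain(0.5, bits, gains)
    print(bits, abs(result - exact))  # 10-bit accumulates less rounding error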
Industry standard professional mastering formats—Sony Digital Betacam for Standard
Definition and Panasonic D5 for High Definition—are both true 10-bit formats.