The following is Part 1 in a series of posts regarding a confusing problem I encountered in Sony Vegas after rendering a video project. My footage looked fine in the Sony Vegas video preview window but after the rendering process it appeared to have more contrast, extensive information loss in the lowlights, a slightly grainier appearance and a different colour scheme. See the difference for yourself below:
If you compare the dark-coloured guitars just to the right of the actor in the two images, you can see how they've been partly blacked out in the rendered video (bottom image). The aforementioned colour change is also quite conspicuous. For example, check out the guitar at the top right of the frame; it's gone from blue to green! Some videographers might actually prefer the visual style of the rendered product over the original look, but that's not what I was aiming for in this music video...plus, I wanted to know what was behind the change in appearance. The problem persisted despite my experimenting with different cameras, rendering the footage to several different file types (including .avi and .mp4), and playing each of those files in various media players, including VLC and QuickTime. I eventually found the answer lying in a dark, dank corner of the Internet and decided to give it the exposure it deserves. Unlike the very haphazardly articulated solution I chanced upon, however, mine will hopefully include a more thorough explanation of the issue so that you, the reader, know more about the 'why' as well as the 'how' of the issue. So let's begin...
PART 1: Bits, pixels and the RGB colour model
Try to make out a single pixel or pel (short for 'picture element') on your computer screen. Which one to pick, huh? Many monitors (laptop screens in particular) are 1366 pixels wide by 768 pixels tall, a resolution commonly marketed as HD (High Definition). That's a lot of pixels to choose from...1,049,088 to be exact!
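If you'd like to check that pixel count for yourself, here's a one-liner's worth of Python (the variable names are just mine, not anything standard):

```python
# Total pixels in a 1366 x 768 display
width, height = 1366, 768
total_pixels = width * height
print(total_pixels)  # 1049088
```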
The number of distinct colours each pixel on your screen is capable of displaying at one time is determined by how many bits, or pieces of binary information, encode the pixel's output (in combination with the display modes supported by your graphics adapter). This is called a pixel's bit depth or colour depth. A 1-bit pixel can display 2^1 = 2 colours (i.e. a monochrome display), a 2-bit pixel can display 2^2 = 4 colours, a 3-bit pixel can display 2^3 = 8 colours, and so on. It's a fairly simple relationship; the more bits per pixel, the more distinct colours it can display...up to a certain point.

You're probably viewing this webpage on a monitor composed of 24- or 32-bit pixels (Windows users can check this under the Display tab in the System Information tool), meaning that your monitor can display up to 2^24 (i.e. 16,777,216) different colours! Since the human eye can only discern around 10 million different colours, all that extra capacity might seem wasted, but try calculating 2^23 and see what you get...only 8,388,608. It takes a full 24 bits per pixel to encode enough colours to pass your eye's 10 million-colour threshold (this is why a 24-bit display is also called a True Colour display on Windows operating systems; Apple calls it 'millions of colours').

As an aside, you might be wondering why companies bother manufacturing a 32-bit display system when the 24-bit system already encodes every colour we can perceive. It's because those 8 extra bits aren't used to encode colour. Instead, they form what's known as an alpha channel, describing information related to opacity. This means a 32-bit pixel can display no more distinct colours than a 24-bit pixel (i.e. almost 17 million). So that's all well and good, but how exactly do the bits of information describe pixel colour?
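The bits-to-colours relationship above is easy to play with in code. Here's a small Python sketch (the function name `distinct_colours` is just my own label for the 2^n rule):

```python
def distinct_colours(bit_depth):
    """Number of distinct colours a pixel of the given bit depth can encode."""
    return 2 ** bit_depth

for bits in (1, 2, 3, 23, 24):
    print(bits, distinct_colours(bits))

# 23 bits fall just short of the eye's ~10 million-colour threshold;
# 24 bits comfortably clear it.
assert distinct_colours(23) < 10_000_000 < distinct_colours(24)
```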
Assuming we have a system where 24 bits encode each pixel's output, every one of those 16.7 million colours can be reproduced by combining three colours known as the primary colours: red, green and blue. This method of colour creation is known as additive colour mixing.
Additive colour mixing is the foundation of the RGB colour model, designed primarily for use in electronic systems such as computer monitors and television screens. Every pixel on your own monitor is using additive colour mixing right now to recreate the above image of an insect. Assuming you have a 24- or 32-bit display system (which you probably do), 8 bits (i.e. 1 byte) per pixel are dedicated to describing the intensity of the colour red (i.e. the red component), another 8 bits encode the green component, and the final 8 bits encode the blue component. In graphics software, each of these component values is typically represented numerically on a scale from 0 to 255 (the full range of values a single byte can hold).
A component value of 0 means that primary is completely off, whilst a value of 255 means it's at full intensity; a pixel whose three components are all 0 appears black, and one whose components are all 255 appears white. As you can imagine, a value of 36 for the red component encodes a very dark red, whilst a red value of 234 describes a much brighter, more saturated red (see below). Applying this concept on a grander scale, you can think of an LCD display as a grid-like arrangement of hundreds of thousands of little red, green and blue lamps, each with its own dimmer switch.
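To make the byte-per-component idea concrete, here's a minimal Python sketch that packs three 0-255 components into a single 24-bit number, the same layout behind the familiar #rrggbb hex notation (the helper name `pack_rgb` is my own invention, not a library function):

```python
def pack_rgb(r, g, b):
    """Pack three 0-255 components into one 24-bit value: red in the
    highest byte, green in the middle, blue in the lowest."""
    for c in (r, g, b):
        assert 0 <= c <= 255, "each component must fit in one byte"
    return (r << 16) | (g << 8) | b

dark_red = pack_rgb(36, 0, 0)
bright_red = pack_rgb(234, 0, 0)
print(f"#{dark_red:06x}")    # #240000
print(f"#{bright_red:06x}")  # #ea0000
```

Note that 36 in decimal is 24 in hexadecimal and 234 is ea, which is exactly what shows up in the hex codes.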
Now, knowing that each of the millions of pixels in a 24-bit display is capable of outputting 256 different intensities of red, green and blue and combining them through additive colour mixing, the fact that your computer monitor can display 16.7 million (i.e. 256 x 256 x 256) distinct colours may be a bit easier to fathom!
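Additive mixing itself can be sketched in a few lines of Python too. The `mix` helper below is purely illustrative (it simply sums the channels of two lights, clamping at 255), but it shows why full red plus full green looks yellow, and why all three primaries together give white:

```python
def mix(a, b):
    """Combine two lights additively, clamping each channel at 255."""
    return tuple(min(x + y, 255) for x, y in zip(a, b))

red   = (255, 0, 0)
green = (0, 255, 0)
blue  = (0, 0, 255)

print(mix(red, green))             # (255, 255, 0)   -> yellow
print(mix(red, blue))              # (255, 0, 255)   -> magenta
print(mix(mix(red, green), blue))  # (255, 255, 255) -> white

# And the total palette: 256 choices per channel, three channels
assert 256 * 256 * 256 == 2 ** 24 == 16_777_216
```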
In Part 2 of this blog series, I will delve deeper into the RGB colour model and discuss why different electronic devices do not necessarily interpret or display particular RGB values in the same way.
This post, like all others I author, is a work in progress; updated as needed. Was it informative? Easy to understand? Feel free to leave any comments or questions below!
Tim Szewczyk -