The light-catching buckets on a sensor are not really pixels in the traditional sense, even though we all refer to them as pixels. They are more accurately called ‘sensels’, or sensing elements, which capture light.
Most cameras today use something called a Bayer filter system. A complete Bayer set is made up of four color-filtered sensels: two greens staggered with one red and one blue. Think of a filter as a very small piece of colored film covering the light sensel. Its job is to prevent certain types of light from entering. So, let me ask you this: what do you think happens when different colored light hits one of these color-filtered sensels?
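The staggered layout described above can be sketched as a small grid. This is a minimal illustration using a hypothetical 4x4 sensor; real sensors repeat the same pattern across millions of sensels.

```python
# A sketch of a Bayer filter mosaic on a hypothetical 4x4 sensor.
# Each sensel records only the light that passes its colored filter.
# Every 2x2 block holds the complete Bayer set: two greens, one red,
# one blue.
bayer_pattern = [
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
]

for row in bayer_pattern:
    print(" ".join(row))
```

Notice that half of all sensels are green: that matches how the pattern doubles up on green, the color our eyes are most sensitive to.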
If a red-filtered sensel is exposed to a bright blue or green light, the sensel cannot see it. However, red light can penetrate a red filter, and the brightness of that light is recorded by the sensel. The camera’s processor knows that if any light penetrated the red-filtered sensel, it can only be red. Therefore, it calculates that pixel as red light.
So the color calculation happens in the processor, not in the sensor itself. Blue-filtered sensels will only capture blue light, and green-filtered sensels will only capture green. For the rest of this lesson, we will use colored water to show you what the processor sees. Just remember, it starts out as an intensity of light, not a color.
The red, green, and blue light information that makes up an image is also referred to as channel information, or channels. Now, I imagine you have several questions. You know that when you magnify an image, you see many colors, not just red, green, or blue; you also see yellow and orange and aqua and purple and pink. So you are probably wondering what in the world is going on.
For every pixel we see in a digital image, nine sensels are involved in the calculation: one central sensel, which marks the position of the target pixel, as well as the surrounding eight sensels, which contribute their information to the calculation. Now, just for fun, say “central sensel” nine times as fast as you can. That will keep you busy for a little while.
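The nine-sensel idea can be sketched in a few lines of code. This is a deliberately simplified version of what the processor does (real demosaicing algorithms weight the neighbors more cleverly); the grid values here are hypothetical.

```python
# Simplified sketch: estimating one full-color pixel from a 3x3 sensel
# neighborhood. Each sensel holds only one intensity, plus the color of
# the filter covering it; the processor averages the samples per channel.
def demosaic_pixel(neighborhood):
    """neighborhood: 3x3 grid of (filter_color, intensity) tuples."""
    samples = {"R": [], "G": [], "B": []}
    for row in neighborhood:
        for color, intensity in row:
            samples[color].append(intensity)
    # Any 3x3 window on a Bayer mosaic contains all three filter colors,
    # so each channel has at least one sample to average.
    return tuple(
        round(sum(s) / len(s))
        for s in (samples["R"], samples["G"], samples["B"])
    )

# Pure red light hitting the sensor: only red-filtered sensels record
# anything, so the target pixel comes out red.
red_scene = [
    [("R", 255), ("G", 0), ("R", 255)],
    [("G", 0),   ("B", 0), ("G", 0)],
    [("R", 255), ("G", 0), ("R", 255)],
]
print(demosaic_pixel(red_scene))  # -> (255, 0, 0): the pixel is red
```

The layout of `red_scene` matches a real Bayer neighborhood centered on a blue sensel: red on the corners, green on the edges, blue in the middle.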
Now, let us simplify this a little and look at just three sensels: a red, a green, and a blue, with the middle sensel acting as the target pixel area. We already know that if the sensels are exposed to red light, the target pixel will be red. If the sensels are exposed to blue light, the target pixel will be blue. And you have probably already guessed that if the sensels are exposed to green light, the target pixel will be green. Okay, those were pretty easy.
What about yellow light? Well, yellow light is made up of both red and green light. The red and green sensels both capture light, and the processor knows that when this happens, the target pixel is yellow. Aqua light is made up of both green and blue light. Purple light, or magenta, is made up of both red and blue light.
If the sensels do not catch any light, the pixel will be black. If the sensels are completely full, the pixel will be white. If the sensels have an equal amount anywhere in between those two, the pixel will be a shade of gray. In fact, different amounts of red, green, and blue information can make just about any color of light.
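The color recipes described above can be written out as channel values. This is a small lookup sketch using 0-255 intensities, as in an 8-bit image; the middle-gray value of 128 is just one example of "an equal amount in between".

```python
# How channel values mix into the colors named in the lesson.
# Each key is an (R, G, B) triple of 8-bit intensities.
mixes = {
    (255, 0, 0):     "red",
    (0, 255, 0):     "green",
    (0, 0, 255):     "blue",
    (255, 255, 0):   "yellow",   # red + green
    (0, 255, 255):   "aqua",     # green + blue
    (255, 0, 255):   "magenta",  # red + blue
    (0, 0, 0):       "black",    # no light captured
    (255, 255, 255): "white",    # all sensels completely full
    (128, 128, 128): "gray",     # equal amounts in between
}

for (r, g, b), name in mixes.items():
    print(f"R={r:3d} G={g:3d} B={b:3d} -> {name}")
```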
We know that in JPEG images, which use 8-bit depth, there are a total of 256 shades, or intensities, of light per channel. If we calculate the total number of possible colors, it is 256 red intensities x 256 green intensities x 256 blue intensities, which equals 16,777,216, or about 17 million possible colors. That is amazing.
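The arithmetic above is worth checking by hand:

```python
# 8-bit depth means 2^8 = 256 intensities per channel, and three
# independent channels multiply together.
shades_per_channel = 2 ** 8              # 256
total_colors = shades_per_channel ** 3   # 256 x 256 x 256
print(f"{total_colors:,}")               # 16,777,216 -- about 17 million
```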
Now, you can also see this in action using the color picker in Photoshop. The numeric values you see next to R, G, and B stand for the amounts of red, green, and blue information used in calculating that specific color. Do you notice how these values stay between 0 and 255? Every time we pick a color with this tool, we can see how much of each channel is used in the calculation. Black has all zeros, white has all 255s, and grays have all equal values.
If we look at any other color, we can see how much each channel contributes to that particular color. We do not think about it much when we are working in Photoshop, but when we use its tools to make adjustments, what we are really doing is changing the pixel’s appearance by reassigning the input information. We are changing these numbers. That is what Photoshop does when we make an adjustment, and I think that is a pretty cool concept to know.
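"We are changing these numbers" can be made concrete with a tiny sketch. This is not how Photoshop is implemented; it is a toy example of the idea, with a hypothetical +40 brightness boost clamped to the 8-bit range.

```python
# A toy "adjustment": rewrite a pixel's channel numbers.
# Brightening adds to each channel, clamped so values never exceed 255.
def brighten(pixel, amount=40):
    return tuple(min(255, channel + amount) for channel in pixel)

print(brighten((100, 150, 200)))  # -> (140, 190, 240)
print(brighten((250, 0, 0)))      # -> (255, 40, 40), red channel clamped
```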
I would probably say 90% to 95% of all photographers and Photoshop users really do not understand that this is happening in Photoshop. So hey, that is a good little conversation starter: “Do you know how a pixel gets its color?” You can talk about that at your next photo meeting, or whatever you guys do for that.
The take-home message for this lesson is that every pixel gets its color by gathering red, green, and blue information from several sensels, and it is the processor that pieces this information together and deciphers the proper color for the pixel. It is pretty much amazing, if you ask me.
If you found this video helpful, you may be interested in my new DVD, Photoshop Crash Course. I will not only teach you the most important tools in Photoshop; I will teach you how to think in Photoshop. It can be ordered from the following link: www.michealthementor.com/store