Color Systems — Part 1

What is color? How do you define it? How do you describe it? How can we ensure that when you tell me to make something red, I make it the exact same red you intended? For centuries people have been developing systems to describe color to answer these questions.

A few weeks ago Marco asked a question about additive and subtractive color systems on my post about the fundamentals of color. I did my best to answer, but his question inspired me to explore color systems a little more.

cie-srgb-comparison.jpg
The sRGB color gamut shown over the color space for human vision

A couple of notes before we begin. My exploration turned into a long post so I’ve split it in two: the one you’re reading now and a continuation next Monday.

The science of color systems is a technical subject and I’ve tried to tone down the technical detail. I apologize in advance if the science is too little, too much, or if I’ve gotten any of it wrong. I’ll provide plenty of resources so you can dig beyond these two posts.

My goal with both is to give us all a bit more understanding of what’s going on when we choose a color in Photoshop or use a hexadecimal value in our code and ultimately to help us develop better eyes for color.

Terminology

First let’s try to get some definitions out of the way. I say try because the more I looked into this, the more different terminology I discovered that seemed to be saying the same thing. The 3 terms you’ll come across most often are color models, color systems, and color spaces.

  • Color model — a geometric or mathematical framework to describe color
  • Color system — as far as I can tell, the same thing as a color model, though it seems to be used in a more abstract way
  • Color space — the practical application of a color model. A color space specifies the gamut of colors that can be produced using a specific model

To be honest I’ve seen all 3 terms above used to describe the same thing, and it’s been more difficult than expected to find good definitions. I found definitions that agree with the ones above and I found definitions that would suggest the above are incorrect in some way. Mostly I’ve found all three terms used interchangeably, and I’ll end up doing the same here.

If anyone can offer good definitions and point me to a source, I’d greatly appreciate it.

What is Color?

Let’s get one more thing out of the way. What we see as colors are different wavelengths of light over a limited range of the entire electromagnetic spectrum. That limited range is the visible spectrum running from about 390–700nm.

Isaac Newton was the first to understand the spectrum of light. He refracted white light through a prism in order to see its component colors. Our understanding of color today runs through Newton’s work with light.

Cones are the part of the human eye that detects color. Rods are sensitive only to light and dark, seeing in black, white, and gray. We have 3 types of cones, each having a peak sensitivity over a different range of wavelengths. Below are the cone types followed by the range of wavelength and peak for each.

  • short — 400–500nm — peak 420–440nm
  • medium — 450–630nm — peak 530–540nm
  • long — 500–700nm — peak 560–580nm
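To make those ranges concrete, here’s a quick Python sketch using the approximate numbers from the list above. These are rough figures, not real sensitivity curves, but they show how a single wavelength can stimulate more than one cone type:

```python
# Approximate sensitivity ranges and peaks (in nm) from the list above.
CONES = {
    "short":  {"range": (400, 500), "peak": (420, 440)},
    "medium": {"range": (450, 630), "peak": (530, 540)},
    "long":   {"range": (500, 700), "peak": (560, 580)},
}

def responding_cones(wavelength_nm):
    """Return the cone types whose sensitivity range covers a wavelength."""
    return [name for name, cone in CONES.items()
            if cone["range"][0] <= wavelength_nm <= cone["range"][1]]

# Around 480nm both the short and medium cones respond. Overlapping
# responses like this are how 3 receptor types produce the perception
# of a continuous range of hues.
print(responding_cones(480))  # ['short', 'medium']
```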

There are two theories about how we see color using our 3 cones. Both theories are seen as valid and describe different stages in visual physiology.

  • Trichromatic theory — suggests the 3 cones of the retina are sensitive to red, green, and blue.
  • Opponent process theory — suggests we interpret color in a more antagonistic way; red vs. green, blue vs. yellow, black vs. white.

3 cone cells. 3 ranges of wavelength. 3 primary colors. Lots of things about color seem to come in 3s.

rgb-cmyk.png

Additive and Subtractive Color

When we see color we sometimes see it directly from a light source and other times we see it indirectly. In the latter case the light source strikes an object and what we see is the light that’s reflected from the object.

This is the basis of additive and subtractive color. When we see color produced directly from a light source, such as in a computer monitor or television, we’re dealing with additive color. Additive colors are produced by mixing different wavelengths of light in varying combinations.

When we see the color of physical objects (or printed colors) such as a table or wall or a page in a magazine we’re seeing subtractive colors. The object absorbs some wavelengths of light and it reflects others. The color we see comes from the wavelengths that are reflected.

RGB is an additive system, which is why we use it for digital color. Screens produce their own light source. RGB relates closely to how we actually perceive color, though it doesn’t represent the full gamut of human vision.

As you no doubt know and can tell from the initials, the primary colors of the RGB color model are red, green, and blue. To get any other color inside the model we mix varying amounts of red, green, and blue.

  • red — rgb(255,0,0) — #ff0000
  • green — rgb(0,255,0) — #00ff00
  • blue — rgb(0,0,255) — #0000ff
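The rgb() and hexadecimal notations above describe the same values in different bases; each pair of hex digits is one channel from 0–255. A quick Python sketch of the conversion:

```python
def rgb_to_hex(r, g, b):
    """Convert an rgb() triple (0-255 per channel) to a hex color string."""
    for channel in (r, g, b):
        if not 0 <= channel <= 255:
            raise ValueError("each channel must be in the range 0-255")
    # :02x formats each channel as two lowercase hex digits.
    return "#{:02x}{:02x}{:02x}".format(r, g, b)

print(rgb_to_hex(255, 0, 0))  # #ff0000
print(rgb_to_hex(0, 255, 0))  # #00ff00
```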

CMY(K) is a subtractive system and it’s used in print. Its primary colors are cyan, magenta, and yellow, which are close to the primary blue, red, and yellow we learned as kids. In theory, mixing all three should produce black, but due to the reality of inks it doesn’t, so a true black ink (the K) is added.
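The simplest textbook conversion between the two models treats CMY as the inverse of RGB and pulls the shared component out as K. Real print workflows use ICC color profiles rather than this formula, but as a sketch of the subtractive relationship:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK conversion (channels 0-255 in, 0-1 out).

    This is the textbook inversion only; it ignores how real inks
    behave and is not color-managed.
    """
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)          # black ink replaces the common component
    if k == 1:                    # pure black: avoid dividing by zero
        return (0.0, 0.0, 0.0, 1.0)
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return (c, m, y, k)

# Red in RGB is magenta + yellow in CMY:
print(rgb_to_cmyk(255, 0, 0))  # (0.0, 1.0, 1.0, 0.0)
```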

The CMY(K) gamut is smaller than the RGB gamut and can’t represent the brightness of RGB colors.

In some respects CMY(K) printing can be seen as additive. Print dots of cyan, magenta, yellow, and black small enough and the human eye can’t distinguish them at normal viewing distance; colors are created by varying how many dots of each ink appear in a given part of an image. In a sense what we see as color is the addition of smaller dots of color.
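Here’s a toy model of that effect, assuming the eye simply averages the dot colors in proportion to how much area each covers. Real halftone prediction is considerably more involved, but this illustrates the averaging idea:

```python
def perceived_color(dots):
    """Very rough model of halftone mixing: the eye averages small dots.

    dots is a list of (coverage, (r, g, b)) pairs whose coverages sum
    to 1, with bare paper counted as white. Purely illustrative.
    """
    return tuple(
        round(sum(coverage * channel[i] for coverage, channel in dots))
        for i in range(3)
    )

# 60% cyan dot coverage on white paper reads as a light cyan:
print(perceived_color([(0.6, (0, 255, 255)), (0.4, (255, 255, 255))]))
# (102, 255, 255)
```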

Notice that the secondary colors of each of the RGB and CMY color models are the other model’s primary colors, and again remember that the gamut of colors produced by either is less than the gamut of human vision. Every device also has its own unique gamut based on the actual colors it can reproduce, which again is less than all the colors we can perceive.

cie-1931-chromaticity.jpg

The CIE Color Model

In 1931 the International Commission on Illumination developed a mathematical color space, which appropriately became known as the 1931 CIE Color Space. It’s been revised over the years, but the idea is that it maps all the different colors that an average person can perceive.

CIE was developed to be independent of any device or means of producing color and is based as closely as possible on how human beings perceive color.

A true mapping of color would be 3-dimensional since we have 3 cones. CIE divides color into 2 parts, brightness and chromaticity. Think only of white, black, and gray for a moment. All have different values of brightness, but their chromaticity is the same.

Chromaticity is the quality of a color independent of its brightness. CIE diagrams show the range of chromaticity visible to an average person and different diagrams can be produced for different values of brightness. The region of color in the diagram is the gamut of human vision.
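Getting from the 3-dimensional CIE values to the 2-dimensional chromaticity diagram is a simple projection: brightness stays in one component (Y), while x and y describe the chromaticity. A small sketch, using made-up tristimulus values:

```python
def xyz_to_xy(X, Y, Z):
    """Project CIE XYZ tristimulus values to xy chromaticity coordinates.

    Brightness information lives in Y; x and y describe the color's
    chromaticity independent of how bright it is.
    """
    total = X + Y + Z
    return (X / total, Y / total)

# Scaling a color's brightness leaves its chromaticity unchanged:
print(xyz_to_xy(0.2, 0.3, 0.5))
print(xyz_to_xy(0.4, 0.6, 1.0))  # same point, twice as bright
```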

The curved edge of the gamut corresponds to monochromatic light, with each point representing a pure hue of a single wavelength. The straight edge is called the line of purples and has no counterpart in monochromatic light. Less saturated colors appear toward the center of the diagram.

Take any two points on the diagram and connect them with a straight line. All the colors on that line can be created by mixing the colors at the two end points in varying amounts. Take 3 points and all the colors enclosed by the triangle formed by those points can be created by mixing the 3 corner points in varying amounts.
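That line can be sketched as simple interpolation between two chromaticity points. Here I use the actual sRGB red and green primary chromaticities as the endpoints; where along the line a real mixture lands depends on the luminance of each light, but the mixed color always sits somewhere on it:

```python
def mix_chromaticities(p1, p2, t):
    """Point on the straight line between two chromaticity coordinates.

    t = 0 gives p1, t = 1 gives p2. The weighting for a real light
    mixture depends on the luminances of the two sources.
    """
    x1, y1 = p1
    x2, y2 = p2
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

RED = (0.64, 0.33)    # sRGB red primary chromaticity
GREEN = (0.30, 0.60)  # sRGB green primary chromaticity

# Halfway between the two primaries lands in the yellow region:
print(mix_chromaticities(RED, GREEN, 0.5))
```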

Look at the CIE diagram and you can see it’s not a triangle. Any 3 points we try to use as primaries for a color model can’t reproduce every color in the gamut of human vision. Some color will always lie outside the model we create.
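This is also why checking whether a color is inside a device’s gamut amounts to a point-in-triangle test on its chromaticity. A sketch using the real sRGB primary chromaticities (the triangle-test code itself is just standard geometry, not anything CIE-specific):

```python
def in_gamut(p, primaries):
    """True if chromaticity point p lies inside the triangle formed by
    three primaries, i.e. whether it can be mixed from them.

    Uses the sign of the cross product along each triangle edge.
    """
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    r, g, b = primaries
    d1, d2, d3 = cross(r, g, p), cross(g, b, p), cross(b, r, p)
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)

# The sRGB primaries (real chromaticity values):
SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]

print(in_gamut((0.3127, 0.3290), SRGB))  # D65 white point: True
print(in_gamut((0.1, 0.8), SRGB))        # a visible green outside sRGB: False
```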

Note: When looking at these diagrams on a monitor or printed page a color space other than CIE is being used and so colors outside that space’s gamut aren’t being displayed properly.

In 1960 attempts were made to correct some deficiencies in the original CIE model, known as CIEXYZ. This led to a second version of the model, CIEUV. A third version, CIELAB, was created in 1976 to further address deficiencies in the original.

CIELAB (also written Lab or L*a*b*) remaps the visible colors so they extend equally along two axes, forming a square. The system is device independent and is a useful color space for editing digital images. It’s the color model in Adobe PostScript, and you use it all the time in whatever image editor you have.
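The conversion from XYZ to Lab is a published CIE formula, so it can be shown directly. This sketch uses the D65 reference white; L* is lightness (0–100), while a* and b* are the two opponent axes (green–red and blue–yellow), which echoes the opponent process theory mentioned earlier:

```python
def xyz_to_lab(X, Y, Z, white=(0.95047, 1.0, 1.08883)):
    """Convert CIE XYZ to CIELAB using the standard CIE formula.

    The default white point is D65, the reference white used by sRGB.
    """
    def f(t):
        # Cube root above a small threshold, linear segment below it.
        delta = 6 / 29
        return t ** (1 / 3) if t > delta ** 3 else t / (3 * delta ** 2) + 4 / 29

    fx, fy, fz = (f(c / w) for c, w in zip((X, Y, Z), white))
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b = 200 * (fy - fz)
    return (L, a, b)

# The reference white itself comes out as L* = 100, a* = b* = 0:
print(xyz_to_lab(0.95047, 1.0, 1.08883))  # (100.0, 0.0, 0.0)
```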

Closing Thoughts

I’ll leave things there today. Hopefully this post didn’t get too technical and for those of you who enjoy the technical, hopefully I didn’t make too much of a mess of the science. If I did, please let me know so I can make corrections.

The key points to remember are:

  • What we see as color is how our eyes interpret different wavelengths of light
  • The human eye has 3 color receptors each working over and peaking at different ranges of wavelengths
  • Color produced by a light source is different than color reflected off an object
  • Color systems have been developed to mathematically describe color, and one of those systems, CIE, attempts to map the full gamut of human vision

Next week I’ll pick up with the Munsell Color System, which leads to describing colors in terms of hue, saturation, and lightness or brightness. I’ll look at some other systems and try to explain the difference between HSL and HSB.

Download a free sample from my book, Design Fundamentals.
