Colour spaces: the easy solution
What is colour? How do we see? How do we digitally represent the real world of colours? All these questions will be answered shortly...
CIE 1931 characterisation of human cone cells
Modified under the CC-BY-SA licence, sourced from Vanessaezekowitz
Back in the 1930s, the Commission Internationale de l'Éclairage (International Commission on Illumination) ran experiments to characterise human perception of colour. Our eyes have four types of light-detecting cell: one that simply detects light of any wavelength (used in dark conditions), one that detects short-wavelength light, one that detects medium-wavelength light, and one that detects long-wavelength light.
We'll focus on the colour-sensing cells, as they characterise our perception of colour. We can measure precisely which wavelengths of light each of them reacts to, and by how much. This is displayed in the graph above, with the wavelength of light (in nanometres) along the horizontal axis and the strength of the response along the vertical axis.
Please note that although each of the short-, medium-, and long-wavelength responding cells is colourised, this is not the colour each is responsible for; the colouration is effectively arbitrary at this point, and is purely for clarity and convenience. We will define colour perception from these responses in the next diagram.
CIE 1931 colour matching
Modified under public domain licencing, sourced from Marco Polo
Next, the CIE gathered several individuals to conduct an experiment. They were shown an illuminated colour on one side (the target), and asked to mix three other coloured lights to reproduce it. This resulted in the plot above, where each measured wavelength of light requires a particular proportional combination of three other wavelengths of light.
Hang on a minute, why does one of the traces go negative? You can't have negative light! Ah, but you can have a negative colour response to that particular wavelength of light. To match coloured light around 520 nm wavelength, no combination of the primaries worked. So the experimenters added some longer-wavelength light to the target side instead. To keep the equation balanced, producing the true target colour by mixing the other two primaries then requires adding a negative amount of long-wavelength light!
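Written out, the balancing act looks like this. A target colour $C$ is matched by scaled amounts of the three primaries $R$, $G$, $B$ (the bars just mark the measured matching amounts):

```latex
C \equiv \bar{r}\,R + \bar{g}\,G + \bar{b}\,B
```

When no non-negative mix exists, some red light $r'R$ is added to the target side, and moving that term across the equivalence gives a negative coefficient:

```latex
C + r'R \equiv \bar{g}\,G + \bar{b}\,B
\quad\Longrightarrow\quad
C \equiv -r'R + \bar{g}\,G + \bar{b}\,B
```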
Please note that this doesn't "break physics" or anything - we are exclusively measuring the human perception and interpretation of wavelengths of physical light, not the measured reality of wavelengths of physical light. In other words, to perceive a certain wavelength of light from a combination of two other wavelengths, we sometimes need to offset by a third wavelength in a weird way, because human perception is weird.
CIE 1931 xy chromaticity diagram
For details on the transformation to the below diagram, see this Medium post.
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
Let's skip over all the complex and awkward maths required to get to this diagram, which is a plot of the extents of human colour perception at a given absolute luminance (brightness). In short, the colour-matching data we were just looking at is transformed into a 3D volume covering all colours humans can perceive, at all luminance levels humans can perceive. The transformation is chosen so that one axis of that volume carries purely the luminance portion of the colour; that dimension is then removed, projecting the volume onto a 2D surface. The resulting arc is rectified into a more suitable shape by some further maths (see the CIE 1931 rg chromaticity diagram for details on this intermediate step), giving the shape we see here.
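The step that removes luminance is simple enough to sketch. Given XYZ tristimulus values (the intermediate representation those maths produce), the xy chromaticity coordinates are just the first two components normalised by the total; a minimal sketch, with approximate D65 white point values for illustration:

```python
def xy_chromaticity(X, Y, Z):
    """Project XYZ tristimulus values onto the 2D chromaticity plane.

    Luminance information lives entirely in Y; dividing by the total
    X + Y + Z discards it, leaving only the "colour" part (x, y).
    """
    total = X + Y + Z
    return X / total, Y / total

# D65 white point tristimulus values (approximate):
x, y = xy_chromaticity(95.047, 100.0, 108.883)
print(round(x, 4), round(y, 4))  # roughly (0.3127, 0.3290)
```

Those two numbers are why the D65 white point is usually quoted as sitting near the middle of the diagram at about (0.31, 0.33).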
All that said, this little diagram represents every possible colour that humans can perceive. Thus, we can now start to lay out a method for digitally representing colour, and measure how much human-perceivable colour each digital representation can actually cover. We can define certain digital colour representations within this mathematical space, and plot them on the diagram for easy comparison. Each of these representations is called a "colour space."
Please note that the colours inlaid in the diagram are purely illustrative, and not the true colours they represent. This will be explained by each diagram below.
CIE RGB
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
This is the original colour space, which uses three pure lights of certain wavelengths to mix and generate all colours. These three wavelengths are at 700 nanometres (red), 546.1 nanometres (green) and 435.8 nanometres (blue). It's a pretty good representation of all colours, though you may note that some colours escape. In particular, these colours are opposite "red," right where we needed to mix in negative red light!
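Whether a given chromaticity "escapes" can be checked with a standard point-in-triangle test against the three primaries. The primary coordinates below are approximate xy chromaticities for the three CIE RGB wavelengths; treat this as an illustrative sketch:

```python
def _cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_gamut(p, primaries):
    """True if chromaticity p lies inside the triangle of the primaries.

    p is inside when it sits on the same side of all three edges,
    i.e. the three cross products share a sign.
    """
    r, g, b = primaries
    signs = [_cross(r, g, p), _cross(g, b, p), _cross(b, r, p)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

# Approximate xy chromaticities of the CIE RGB primaries:
CIE_RGB = [(0.7347, 0.2653), (0.2738, 0.7174), (0.1666, 0.0089)]
print(in_gamut((0.3127, 0.3290), CIE_RGB))  # D65 white: True
print(in_gamut((0.05, 0.80), CIE_RGB))      # saturated green: False
```

The second point is one of the greens "opposite red" that falls outside the triangle, matching the escaping region on the diagram.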
This is about as much of the diagram as a physically constructible colour space can practically cover, as colours outside this triangle would need impossible physical combinations of light to represent.
Please note that this doesn't mean the parts of the arc outside this triangle are impossible: they just can't be created by mixing only these "red" and "green" and "blue" wavelengths of light. There would need to be another wavelength of green-ish light to mix in, making the shape a quadrangle.
Rec. 601
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
In 1982, some broadcast engineers characterised the analogue properties of Standard Definition Television (SDTV), such that it could be appropriately represented and transmitted digitally. As the world used two separate analogue television systems (NTSC and PAL), which is a story for another time, there were two slightly different digital colour spaces. The diagrams show the NTSC (above) and PAL (below) colour spaces in white, overlaid on the CIE 1931 xy chromaticity diagram.
The extents of this colour space are derived from the physical response of Cathode Ray Tube (CRT) television sets, and their ability to reproduce colours.
Don't worry about telling them apart at this point; the differences will become much more visible later on, and there are some animations at the end for easier comparison.
Rec. 709 (sRGB)
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
In 1993, High Definition Television (HDTV) was introduced to the world, and with it came a different colour space. As technology had improved in the decade or so in between, CRT TVs were slightly better at physically reproducing colours, and thus the colour space was slightly larger.
A few years later, in 1996, home computer monitors were increasingly being used for colour work, so their colour reproduction also needed characterising. As they were still CRTs, their colour space is the same as Rec. 709's, but with other tweaks and optimisations for home and office use. This colour space is called sRGB, and it is still the default colour space in every digital domain. Technology has progressed, but the way we store and process colour is still stuck in 1996 on a CRT.
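One of those tweaks is the sRGB transfer function: a piecewise curve, standardised from typical CRT behaviour, that relates stored values to linear light. A sketch of the standard encode/decode pair:

```python
def srgb_encode(linear):
    """Linear light -> sRGB-encoded value, both in [0, 1].

    The curve approximates a 2.2 gamma but is linear near black,
    avoiding an infinite slope at zero.
    """
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    """sRGB-encoded value -> linear light (inverse of the above)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

print(round(srgb_encode(0.5), 4))  # linear mid-grey encodes to about 0.7354
```

The point of the curve is perceptual: it spends more of the limited 8-bit code range on dark shades, where human vision is more sensitive to small differences.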
Adobe RGB
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
Shortly after sRGB was defined, Adobe thought they could do better, and in 1998 defined their own colour space, called Adobe RGB. It is very similar to Rec. 709 and sRGB (above), but can store and process more "green" colours, details, and shades. This matters because human perception of green is significantly more sensitive than of other colours: we can more accurately distinguish fine differences in green-ish chrominance and luminance. (The first diagram on this page doesn't show this, as it displays the physical output response of the cells to each wavelength, not the amount of light required to produce any output at that wavelength.)
Unfortunately, Adobe RGB has not seen widespread adoption and still remains rather obscure and unused.
scRGB
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
In 2003, developers at Microsoft needed a larger colour space for transformations in high dynamic range colour applications, one that could be easily converted to the format required for HDMI. They designed it to be similar to sRGB (which made colour transformations simple and accurate), though much larger.
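The key design choice is that scRGB keeps sRGB's primaries and white point but stores *linear* values well beyond the [0, 1] range. In the 16-bit integer variant, as I read the spec (IEC 61966-2-2), black sits at code 4096 and sRGB white at 12288; the offsets here are my reading and should be treated as a sketch:

```python
def scrgb_encode(linear):
    """Linear scRGB value -> 16-bit integer code.

    Codes cover roughly -0.5 to +7.5 in linear light, so colours and
    highlights outside the sRGB [0, 1] cube survive intact.
    """
    return max(0, min(65535, round(8192 * linear + 4096)))

def scrgb_decode(code):
    """16-bit integer code -> linear scRGB value."""
    return (code - 4096) / 8192

print(scrgb_encode(0.0))          # 4096 (black)
print(scrgb_encode(1.0))          # 12288 (sRGB white)
print(round(scrgb_decode(65535), 3))  # about 7.5, the brightest code
```

Because values in [0, 1] mean exactly the same colours as in linear sRGB, converting between the two spaces is trivial, which is exactly the "simple and accurate" property mentioned above.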
It was introduced with Windows Vista, and implemented in several other components and technologies in Windows 7.
DCI-P3 (Display P3)
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
After the turn of the millennium, in 2005, the digital revolution spread further into professional domains, including the movie industry. Digital Cinema Initiatives, as part of digital theatrical motion picture distribution, needed its own high-quality colour space for representing colour. Interestingly, DCI-P3 is actually a smaller colour space than Adobe RGB (above).
In 2015, Apple tweaked and adjusted the technical parameters of DCI-P3 for home and office use on their iMac computers, resulting in a colour space called Display P3. Most Apple products now utilise this space, and it is being supported on many other technology products, too.
DCI-P3+
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
At some point, Digital Cinema Initiatives also defined an expanded version of the DCI-P3 colour space, known as DCI-P3+. References to this space are scant, and details of its application non-existent.
The only official details are available from Canon.
Cinema Gamut
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
I can find no references to Cinema Gamut, aside from Canon.
Rec. 2020
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
In 2012, when the world didn't end, the television and broadcast engineers got together again to design and specify a standard for Ultra High Definition Television (UHDTV), with both a standard dynamic range (SDR) and a wide colour gamut (WCG), or wide colour space.
In 2016, this was tweaked and adjusted to support high dynamic range (HDR), with the same colour space. Whenever someone talks about "HDR," this is the colour representation they mean.
ProPhoto
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
Kodak developed this colour space in 2013 to allow for a large representation of colours, whilst preserving certain colour properties during the editing and manipulation typical of digital photo processing. As such, applications such as Adobe Lightroom Classic utilise this space internally when processing image data from raw camera files.
It does not see much use outside of photo editing and manipulation.
ACES AP1
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
The Academy of Motion Picture Arts and Sciences developed the Academy Color Encoding System (ACES) to provide a colour-accurate workflow for digital theatrical movie production. It specifies many aspects of the production and post-production process, with one such component being the digital representation of colour, in a colour space. ACES supports both high dynamic range (HDR) and wide colour gamut (WCG) in its colour spaces.
The AP1 colour space is designed to have primary colours that are somewhat physically realisable (at the time of creation in 2014). It is likely that the movies you watch today are edited and produced using this colour space (although the final colour space delivered to cinemas and theatres is likely different).
ACES AP0
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
The AP0 colour space as defined by ACES is designed to enclose the entire CIE 1931 xy range of human-perceivable colours. This allows the colour space to map perfectly between a physical colour and a digital representation of that colour, within the limits of what human vision can perceive. There is no need for a larger colour space, and it minimally encompasses the perceivable colours, making it an efficient representation. This is literally endgame.
Why?
You may be aware that high dynamic range, wide colour spaces, and "as the creator intended" are all the rage right now, and technology is struggling to keep up with those trends. So we're busy adding support for the DCI-P3 and Rec. 2020 colour spaces to represent these colours at these ranges - but why stop there? Yes, they're convenient because they roughly match the physical capabilities of today's displays and monitors, but that means we'll have to switch everything again in a few years as technology improves. Switching again and again until we reach the inevitable conclusion of... ACES AP0.
So why not spare ourselves the pain now and jump to the end? Well, it's not a solved problem. Some displays are better than others. Some people are happy to spend more than others. How do we reconcile the colours a creator used on their display with the colours a user can see on theirs? If the user's display is less capable, what do we do with the extra range the creator used? If the user has a better display than the creator, what do we do with that extra capability?
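The crudest answer to the "less capable display" question is a hard gamut clip: convert into the smaller space and throw away whatever doesn't fit. A minimal sketch, using the widely published (approximate) conversion matrices for linear Rec. 2020 and sRGB, both with a D65 white point:

```python
# Approximate linear-RGB <-> XYZ matrices for BT.2020 and sRGB (D65):
REC2020_TO_XYZ = [
    [0.6370, 0.1446, 0.1689],
    [0.2627, 0.6780, 0.0593],
    [0.0000, 0.0281, 1.0610],
]
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def _mul(m, v):
    # 3x3 matrix times 3-vector
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def rec2020_to_srgb_clipped(rgb2020):
    """Convert linear Rec. 2020 RGB to linear sRGB, clamping anything
    the smaller gamut cannot hold - the crudest possible answer to
    'what do we do with the extra': throw it away."""
    srgb = _mul(XYZ_TO_SRGB, _mul(REC2020_TO_XYZ, rgb2020))
    return [min(1.0, max(0.0, c)) for c in srgb]

# A fully saturated Rec. 2020 green lands outside sRGB and gets clipped:
print(rec2020_to_srgb_clipped([0.0, 1.0, 0.0]))
```

The clip visibly flattens distinct saturated colours into the same sRGB value, which is exactly why smarter (and still unstandardised) gamut-mapping strategies are an open problem.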
There is no right answer to these questions (yet?), so for the meantime we just try to standardise everything on what displays are generally capable of, and hope that the differences can be ignored. Which is a bit silly, as it solves nothing now and stores up problems for later, when we'll have to figure it out anyway.
It seems a lot simpler to me to just use ACES AP0, and introduce a new metadata mechanism for both colours and displays. If such a system were standardised, the "no right answer" questions could be balanced by the user at any time they like, and we wouldn't have to worry about colour representations ever again.
Animations through all spaces
Discrete animation through all spaces
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG
Smooth animation through all spaces
Modified under the CC-BY-SA licence, sourced from Sakurambo, BenRG