1.6 - Colour: From Source to Perception - Part 2


After sunlight has either been absorbed or scattered from the surface of an object, we're left with a barrage of photons in some particular pattern of wavelengths from the rainbow (a spectrum). How do the eye and brain even begin to process this myriad of wavelength information into colour?

Dragging your mind back to that wonderful world of Biology 101, we know light enters the eye through the pupil, passes through the lens and is focussed onto the back of the eye. Then there's some handwaving discussion of a special type of cell, something about signals being passed through an optic nerve into the brain and voila, we can see. That's usually where the explanation in class stops, never diving into the depths of what's truly going on in the back of the eye and brain. This is a huge shame; a fuller description makes the difference in perception between seeing with your eyes and ki extremely obvious. And who would I be to deny you, dear reader, a learning opportunity?

[ Figure 1 ]

[ groundbreakingsci-stuff dot com/post/169481695052/1point6p2#one ]

The eye is a delicate instrument and has evolved to protect itself. If a damagingly bright level of light is detected entering the pupil, the iris will constrict to reduce the amount of light. The iris is scaffolded by a ring of muscle that can contract to make the pupil smaller, and radial muscles that can pull the ring back out, widening the pupil instead. Behind the muscles lies a thin layer of pigmented cells. Those cells act like screens, preventing any light other than that passing through the pupil from entering the eye. The iris' colour is mostly driven by concentrations of melanin, the same pigment that gives rise to skin colour; the usual colours range from a deep brown at high concentrations (like my own), to blue at low concentrations, through those middling hazels, deep greens and green-blues (also like my own at times). The darker the iris, the better this protection.

Behind the iris is the lens, a fairly solid, curved and clear mass. The shape of the lens causes light to bend as it passes through, focusing light down at the back of the eye. The lens can stretch to focus light from different distances, but there's a finite limit to its abilities. Even with standard, healthy vision your finger still looks fuzzy if you hold it too close to your eye. The same failure occurs at long distances, too. Not all eyes are built the same, and if the range of distances your eye works at isn't suitable for your life, you'll often struggle to focus and may need glasses to offset the problem. With the more common condition of near-sightedness, the focus point of light from the lens falls just short of the back of the eye. This could be caused by the eye being too long or the lens bending light too strongly. Someone with near-sightedness would probably hold a book closer to their face to comfortably read it. I have the opposite problem of far-sightedness. My eyesight is marginally more suitable for long-distance vision, and therefore not so helpful for reading tiny, spidery handwriting from my colleagues or performing intricate bench-work in the lab, so I wear glasses to help. Far-sightedness runs in my family. Dad probably should have worn glasses too, but he did so little regular reading he improvised by squinting and holding books at arm's length instead. I have the sneaking suspicion many of our friends assumed he was barely literate for the longest time.
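
This focusing trade-off can be sketched with the idealised thin-lens equation, 1/f = 1/d_object + 1/d_image. The eye's optics are really a compound system, so treat the numbers and the function below as a purely illustrative toy model, not anatomy:

```python
# Toy thin-lens model of focusing (illustrative numbers, not real anatomy).
# Thin-lens equation: 1/f = 1/d_object + 1/d_image.

def focal_length_needed(d_object_mm, d_image_mm=17.0):
    """Focal length the lens must adopt to focus an object d_object_mm
    away onto a retina fixed d_image_mm behind the lens."""
    return 1.0 / (1.0 / d_object_mm + 1.0 / d_image_mm)

far = focal_length_needed(10_000_000.0)   # a distant mountain: f ≈ 17 mm
near = focal_length_needed(250.0)         # a book at reading distance: f ≈ 15.9 mm
print(far, near)
```

A lens that cannot reshape far enough to reach the required focal length leaves the focal point falling short of (or beyond) the retina - exactly the near- and far-sightedness described above.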

You may notice from the figure the incoming image gets turned upside-down in the eye. Luckily, your brain can combine the image with other senses and calculate that the image needs to flip. What you may not know is, if you wear a pair of glasses that flips the image before your eye sees it, your brain will adapt over the space of two weeks or so and re-correct the image inversion. This is brain plasticity in action and shows how the brain can adapt to new or modified senses. And not only that, but this correction doesn't happen all at once; oddly, the brain flips the parts of the scene it believes are important first, usually faces. How strange must that be to see? I would very much like to try this experiment for a month to test my own brain's plasticity, although I'm not sure how long Videl would find my pin-balling around the house amusing. – I sincerely apologise; the notes Pan has left on this section's draft are getting increasingly exasperated at these tangents, but I'm letting this fun-fact stand because it's fantastic.

[ Figure 2 ]

[ groundbreakingsci-stuff dot com/post/169481695052/1point6p2#two ]

The effect of colour-blindness on the wavelengths of light the cones are sensitive to. Certain types of colour-blindness can be mitigated by screening out particular confused wavelengths.

Back to that focused image. Light travels to the back of the eye, hitting the retina. The retina contains a sheet of cells sensitive to light. There are four types of cell - three cone-types that respond to different colours (covering blue, red and green light) and one rod-type that responds to a broad range of wavelengths and to low light levels. The cells are so-named for their shape. The cones sit at their highest density on the part of the retina corresponding to the centre of our vision. Rods barely exist in the centre but have a higher density in our peripheral vision. About half of the information from our vision comes from this tiny centre spot, the fovea centralis, which is why the very centre of your vision is so much better than your peripheral vision. Animals and some zoomorphic people often have reduced colour vision (usually missing the red cone) and a different distribution of these cells across the eye. Rabbits, evolved to survive as prey animals, have a high-density line of these cells across the eye so they can see clearly along the whole horizon for predators, instead of the clear centre of vision anthropoids have.

What makes these cone cells suitable for colour vision? There is a pigment in the cells that absorbs light at a particular set of wavelengths causing an electron in the molecule to move - sound familiar? When the electron moves the whole molecule changes shape to compensate, becoming the right shape to fit inside a receptor within the cell, much like a key fitting into a lock. This in turn triggers a nerve attached to the cell to fire. Outside the fovea centralis there's more than one cell attached to a nerve and so the nerve has to reach a different activation threshold to fire, but the principle is the same. Oddly, the nerves are attached to the front of the retina rather than the back, but the nerves don't interfere with your vision. Those nerves bundle together into a thick mass of cabling known as the optic nerve and pass out the back of the eye into the brain.

Whilst our eyes can respond to many wavelengths of light, and there is overlap in the wavelengths the cones respond to (for example, yellow light triggers both the red and green cones in different amounts), the signal leaving the eye with regards to colour is only four numbers, relating to the intensity of red, green and blue light plus the overall intensity. That's it. That's why LCD monitors, with only red, green and blue pixels next to each other, can trick our eyes into thinking there's a full spectrum of colours on the screen - the screen provides only the information that would pass from the eye anyway, so your brain cannot tell the difference.
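
To make that compression concrete, here's a minimal numerical sketch: each cone type effectively takes a weighted sum of the incoming spectrum, so any spectrum, however spiky, leaves the retina as just three colour numbers. The sensitivity curves below are made-up Gaussians for illustration, not measured physiological data:

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)  # visible range, in nm

def sensitivity(peak_nm, width_nm=40.0):
    # Invented bell-shaped sensitivity curve, not real cone data.
    return np.exp(-((wavelengths - peak_nm) / width_nm) ** 2)

cones = {
    "blue": sensitivity(445),
    "green": sensitivity(540),
    "red": sensitivity(575),
}

def cone_responses(spectrum):
    """Collapse a full spectrum (intensity at each wavelength) down to
    three numbers: each cone simply sums the light it absorbs."""
    return {name: float(curve @ spectrum) for name, curve in cones.items()}

# A complicated 31-point spectrum leaves the eye as only three values.
spectrum = np.random.default_rng(0).random(wavelengths.size)
print(cone_responses(spectrum))
```

Any two spectra giving the same three sums are indistinguishable to the eye - which is exactly the trick an LCD screen plays.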

With so few types of detector, a fault in one will have a large impact on vision. Some forms of colour-blindness are caused by missing cone types. Other forms are caused by mutations in the cone cells that make the response functions of the cells greatly overlap, so green and red cones are almost always triggered at similar levels, for example. The colour vision in some cases can be corrected by wearing sunglasses that block wavelengths of light in the confusing overlap, allowing the brain to distinguish between red and green far more easily.

The colour signal has now been broken down from the nuanced spectrum we began with to just three (plus one) numbers. This is an absurdly clever form of data compression by the eye; imagine how many cones we'd need to capture every wavelength explicitly! The nerves take this even further - instead of keeping the absolute values of these red, green and blue signals, the nerves combine them in two different ways. The first is (red signal) - (green signal), the "red-green channel". The second is calculated as (red signal) + (green signal) - (blue signal), the so-called "yellow-blue channel", the combined red and green signals acting as a proxy for yellow light. This one extra step, whilst seemingly arbitrary, reduces the information and processing needed from four numbers to three, a reduction of 25%. Since most anthropoidal people rely on vision this is a highly significant saving for the brain to make.
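
A sketch of that recoding, following the standard opponent-process form (in which the yellow signal is the combined red and green cone responses); the function and variable names here are mine:

```python
def opponent_channels(red, green, blue):
    """Recode three absolute cone signals into one luminance value
    plus two opponent colour channels."""
    luminance = red + green + blue       # overall intensity
    red_green = red - green              # the "red-green channel"
    yellow_blue = (red + green) - blue   # the "yellow-blue channel"
    return luminance, red_green, yellow_blue

# Pure yellow light drives red and green cones equally, with no blue:
print(opponent_channels(0.5, 0.5, 0.0))  # → (1.0, 0.0, 1.0)
```

Note how the three outputs fully determine the three inputs - nothing extra is lost here - yet only the luminance channel needs full precision, which is where the processing saving comes from.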

From these three numbers the brain can more-or-less re-expand and perceive the entire colour spectrum. As the data about particular wavelength intensities has been so compressed however, the reconstruction in the brain is not loss-less. This can lead to a few interesting quirks of colour vision (and not ki-sense) which I'll get to in a moment.

[ Figure 3 ]

[ groundbreakingsci-stuff dot com/post/169481695052/1point6p2#three ]

The main regions I'll be discussing and their location in the brain, using Pan's head. I could make a Dad comment here about actually finding her brain but I won't.

First, how and where this vision reconstruction happens. The information from the eye needs to pass from the retina to the correct regions of the brain. Signals from the cells (and, because the cells are attached to nerves fixed in place, the spatial location information) are sent down the optic nerve towards the back of the brain where the visual processing regions live, the so-called visual cortex. En route is the thalamus - two lumps of brain cells (neurons) on either side of the centre of the brain that perform partial processing of visual as well as taste, auditory and touch sensations. Sense of smell is a little different: molecules we breathe in bind to receptors in the olfactory bulb at the back of the nose, which is a part of the brain itself. This very direct connection could be why the sense of smell is so immediate and its memories so intense even for anthropoids, and even more so for animals and zoomorphic people (and my family) with sensitive noses. The thalamus is able to weigh up which sensory information is important enough to properly process and pass on to other areas of the brain, including which senses need our immediate and limited attention. This is why you'll often find yourself turning down music when driving in an unfamiliar area, or closing your eyes when using ki-sense.

For vision in particular, the thalamus' analysis is used to help focus the eyes and bring the signals from both eyes into alignment, then information is passed on to other areas of the brain. Whilst there's a general hierarchical structure between sub-regions of the visual cortex (the aptly-named V1 coming before V2, for example), the thalamus has connections to - and can therefore communicate with - all these regions directly.

These "backstreet" connections to some areas of the visual cortex are thought to explain blindsight, a phenomenon where some individuals can't see some part of their field of vision but are still able to act as though they can. They're able to reach out for and grab objects accurately without consciously seeing them. In this case, the lower parts of the visual cortex responsible for projecting the external world have partially failed, but the regions able to label objects (whatever they may be) and locate them in space have not. If the information in the brain travelled strictly in one direction this would not be possible. Note, blindsight isn't the same as ki-sense as it's still visually driven (those with blindsight wouldn't be able to perform these tasks if they shut their eyes), but it shows how incredibly complicated and interwoven the brain is.

Whilst the thalamus doesn't play much of a role in colour processing, the concept of backstreets through the brain and processing information without perceiving it becomes important when we learn how to use ki to move at speeds our bodies haven't evolved to. At that point your visual system will become a deceptive mess and we'll be revisiting integrating your senses in later chapters.

Most of the information from the eyes then travels to the back of the brain into the visual cortex. The regions of the brain that process visual information are ridiculously complex and still not well understood, particularly in human anatomy as opposed to, say, rats. So, to avoid having to reissue this textbook every few months with updates, and to avoid drawing the ire of an academic community I'm not immersed in, I'll skirt the finer details. What we can say for certain is that there are defined regions of the visual cortex that can:

- encode spatial information (what signals are from where in the visual field to build a picture)

- find contrasting edges

- distinguish between horizontal and vertical lines of colour and contrast

- compare signals from both eyes to calculate depth

- monitor incoming signals over time to detect motion direction and understand that in 3D space

- distinguish objects

- distinguish colour and map them onto the image.

These functions may occur in multiple places, with slightly different results to be merged together (colour is picked out of the visual signal in regions called V1 and V4, for example, not just in one place), but all-in-all the process is bafflingly complex even for seasoned academics.

The main signal enters V1 right at the back of the brain and radiates out in two directions - dorsally (that is, towards the top of the brain) and ventrally (along the side and underneath the brain). The signal moving dorsally is referred to as the "where" pathway, moving through areas of the brain associated with movement. These regions help you understand your body in relation to the space you're seeing. I know these pathways are pretty strong for me! The ventral pathway is the "what" pathway, and information moves through regions like the temporal lobe (associated with memory) and the limbic system (governing emotion). This pathway labels what's in your visual field. What makes a book a book? How do we know that an object is a square-shape if we've never seen a true, perfect square in our lives to compare to? Who knows exactly, but the ventral pathway seems to.

The flow is not just a one-way street either - the parts of the brain directing attention (the salience network) can feed back down to these visual regions and modulate their activity, effectively switching off their communication with the conscious areas of your brain for a time. Have you ever been thinking so hard about something you can't remember what you've been reading? Blame the salience network switching to "internal mode" and ignoring external stimuli.

[ Figure 4 ]

[ groundbreakingsci-stuff dot com/post/169481695052/1point6p2#four ]

Metamers - colours that look the same to our eyes but have completely different spectra. The similarity between colours can break down under different types of light in the environment.

V4 is where the colour information is combined to form all the hues we know, and feeds into that ventral "what" pathway. Here's where our eyes are deceived by colour. The extreme data compression the eye performs means we have information only on brightness, red to green and yellow to blue, rather than intensity information for every possible wavelength of light. This, then, means we can encounter what are known as metamers - different combinations of light wavelengths that produce exactly the same colour in our eye under normal conditions. You could have a spectrum with one wavelength of yellow light and a different spectrum with red and green light. As long as the difference between the signals picked up by the red and green cones is the same (with no blue cone signal), the result of that red-green channel calculation will be the same value and you'll perceive the same yellow colour. These metamers are interesting because if the incident light had a different set of colours (say, only red light rather than a full rainbow of white light), the spectrum of light scattered from an object would differ and the metamer pairing would break.
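
Here's a toy calculation of a metamer pair. The cone sensitivities are invented numbers purely for illustration; the point is that two completely different spectra can land on identical cone responses:

```python
import numpy as np

# Rows: red, green, blue cone sensitivity at three chosen wavelengths
# (invented values, not physiological data).
#                 580nm  540nm  620nm
sens = np.array([[0.8,   0.3,   0.9],   # red cone
                 [0.6,   0.9,   0.1],   # green cone
                 [0.0,   0.0,   0.0]])  # blue cone sees none of these

# Spectrum A: pure 580 nm "yellow" light.
spec_a = np.array([1.0, 0.0, 0.0])
target = sens @ spec_a                  # cone responses to spectrum A

# Spectrum B: no 580 nm light at all - solve for the mix of 540 nm and
# 620 nm light that produces identical red and green cone responses.
mix = np.linalg.solve(sens[:2, 1:], target[:2])
spec_b = np.array([0.0, mix[0], mix[1]])

# Different spectra, identical cone responses: a metamer pair.
print(np.allclose(sens @ spec_a, sens @ spec_b))  # → True
```

Shine only red light on the scene and the 540 nm component of spectrum B vanishes while spectrum A is untouched: the cone responses diverge and the pairing breaks, just as described above.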

Whilst we can see a huge range of colours, the existence of these compressed channels means there are some mixes of colours we just can't experience. Imagine we had wavelengths of light entering the eye that were yellow and blue. The red-green channel picks out the yellow fine, but the yellow-blue channel calculation falls over - the yellow and blue contributions simply cancel out. We can see reddish-orange colours and reddish-blues (purples) but we can't see yellow-blue as a colour, nor reddish-green. These colour mixes definitely can exist as a spectrum of light, but in our brain they can't exist at all.

Usually.

In a previous section I mentioned a condition called synaesthesia. This is the mixing of sensory signals, where one sense can trigger another. One of the most common is grapheme-colour synaesthesia, experiencing colour when reading words and numbers, either within the mind's eye ('association') or 'projected' onto the letters themselves. The colours are as consistent as the reading of the letter - the colours appear to the person at the moment the letter or number is understood, and the two are intrinsic to one another. Grapheme-colour synaesthesia is caused by an overlap in function and an increased size of the right fusiform gyrus, a long region of brain matter on the underside of the brain responsible for "labelling" the faces, shapes, places and words in your vision. This region connects to the angular gyrus above it to further process shape and colour labels. For synaesthetes there's a misfiring at this point, insisting a letter shape must be a particular colour.

It is an odd condition. Even projection synaesthetes, those that physically see a letter or number coloured in the world around them, know that what they're experiencing is not a real colour. They know, for example, that the letters on this page are black - they truly, physically see the colour, and the ventral stream processes and labels the colour as black. But whilst the synaesthete does not see a real rainbow of colours triggered when reading, they do process and label the letters with colours regardless, like imagining the colour automatically. They have a partial experience of colour.

Crucially for synaesthetes, as the accidental synaesthetic colours are not passing through cone cells and mixing in the red-green and yellow-blue channels, the final colour result doesn't have to be bound by the limits of the eyes. Synaesthetes can "see" yellow-blue and red-green colours. They won't be able to experience the colour beyond what they see with their eyes, only know and insist that the colours are definitely blue-yellow, as that is the label the brain assigned. This kind of mixing of signals applies to all synaesthetic responses, whereby strange, seemingly unphysical sensations can occur. Other types of synaesthesia like "auditory-taste" can leave a synaesthete disliking someone's name purely for the imaginary taste it leaves in their mouth. Synaesthesia can be very disruptive to lives, although useful as a memory aid.

What, then, does all this mean for ki-sense? My own studies have been embarrassingly small - the number of reliable ki-users I know who would willingly lie in a body scanner numbers fewer than ten - however I have located brain regions that are associated with ki-sensing. You can read the paper if you like ("case study of fMRI responses in ki-sensing"), although it is a lot of academic waffle for a simple take-home message. I just asked my friends and family to lie in the scanner and actively sense my ki with eyes closed as I sat in the control room, then compared the result to them ignoring me and looking at bright lights and listening to loud noises. Then I bored them witless by making them repeat the task for half an hour on multiple days so I had a huge stack of data to work with.

[ Figure 5 ]

[ groundbreakingsci-stuff dot com/post/169481695052/1point6p2#five ]

As with a standard synaesthetic response, the fusiform and angular gyri were activated when there was no apparent need for them to be, as though they were "seeing" something. The same with hearing in the auditory centres and active memory retrieval - all were activated, matching the experience of ki-sense touching on every part of sensation and combining into something more. Higher regions of the visual cortex were activated even when eyes were shut. This included some of the dorsal ("where") pathway, showing what I know from experience: that ki-sensers are projecting ki into the world around them in a vision-like way. This all means that when I fail at communicating the intricacies of Pan's ki through drawing, it is purely because the eye works with three colours and ki works with millions of possible - and seemingly impossible - combinations, never mind every other sense and memory type that can be folded in. Ki-sense by default is a synaesthetic experience. There is no way for me to communicate the richness of ki-sense without getting you to learn to use it in the first place.

And if I haven't convinced you to work on developing your ki-sense after this, I don't think I ever will.

In case you've been wondering, this study would have been impossible to sneak past the University's ethics board without raising suspicion. Instead, I did the study at Capsule Corp on their mostly-idle medical-grade scanner. Bulma installed the beast to settle an argument with Vegeta on how much damage his martial arts training was doing to his body. …Let's just say results were inconclusive as neither would concede. Bulma was very keen for me to use it, even running some of the analysis herself - she did admit in the end she was pleased something "less petty" came of the machine's installation. Conducting a study without ethics approval isn't good scientific practice, I know, and for that and other questionable research I've performed on myself I fully expect to get into trouble, but needs must. I doubt anyone repeating this particular study would have trouble obtaining ethics approval, however, so don't fret if you're ever asked to be part of the replication!

I've introduced a lot of science in this section, I realise. I'm hoping parts may be familiar to you; maybe you now have an army of "fun facts" to share at parties, at least. It goes without saying that proficiency in ki-sense does not require this level of background knowledge, but I feel compelled to provide the information nonetheless to paint a fuller, multi-wavelength picture. Many of these concepts, appearing tangential now, will be referred to again directly or in analogy. The dual nature of light will rear its head again, for example, in the next section.