We now understand that learning does not represent merely a "relatively permanent change in behavior resulting from experience". Rather, it is a change in knowledge. Through various learning processes, the organism acquires new knowledge that allows it to predict and control events in the world.
The implication of this cognitive view of learning is that human beings, and other organisms (primates, certainly, and probably other mammals as well, at least) are cognitive beings. Our behavior goes beyond innate, pre-programmed reflexes, taxis, and instincts, to include actions based on our knowledge and understanding of the world.
Intelligent behavior requires that we:
- acquire knowledge about ourselves and the world around us;
- store this knowledge in memory so that it will be available for use later;
- integrate this new information with prior knowledge;
- use knowledge to plan and execute actions that will cope with environmental demands and achieve personal goals; and
- use language as a tool for thinking, problem-solving, and communication with others.
Nativism and Empiricism
Modern philosophers and psychologists have debated two broad possible sources of knowledge.
According to the nativism associated with Rene Descartes, a 17th-century French philosopher, mind and body are separate entities (a philosophical position known as dualism). According to Descartes, who was a devout Roman Catholic, the body responds to sensory impressions, which in turn generate animal behavior as reflexes. All animal knowledge is a posteriori, a Latin phrase meaning that it is acquired through sensory experience. But the human mind contains a priori knowledge that exists independent of sensory experience, because it comes from God.
On the other side of the debate was the empiricism espoused by John Locke, a 17th-century English philosopher. Locke argued that all knowledge comes to us by way of sensory experiences, either through sensation, by which we observe external objects, or reflection, by which we observe the workings of our own minds. Locke's empiricism provided the philosophical background for the concept of conditioning, and greatly influenced Pavlov, Thorndike, Watson, and Skinner. Like these later psychologists, Locke and other empiricists argued that associations were formed between sensory stimuli, in which the presence of one object evoked the idea of another, with which it was associated; and that complex ideas were built out of higher-order associations between simple ideas.
Immanuel Kant, an 18th-century German philosopher, provided a synthesis of nativism and empiricism. Like Locke, Kant believed that knowledge was acquired through experience; but he also believed that our experience of the world presumes the prior existence of certain innate mental structures -- ideas such as space, time, and other categories of thought.
This cognitive constitution is presupposed by experience. Consider Locke's view of the association of ideas. Locke adhered to the traditional principle that ideas were associated by virtue of contiguity -- meaning spatial and temporal contiguity. But you can't experience contiguity unless you already have some idea of space and time. Never mind that association by contiguity is the wrong idea about associations: the point is that if you believe that associations are formed by virtue of the spatial and temporal contiguity between events, then you must already possess the ideas of space and time before you can experience contiguity between them. As argued in previous lectures, modern approaches to learning suppose that associations are based on contingency rather than contiguity. But that just means that the organism must already possess the concepts of correlation and causation before it can experience contingency. Either way, Kant seems to have it right: a posteriori sensory experience presupposes a priori mental structures.
Interestingly, the phrenologists postulated faculties of time, locality (space), and causality in their system -- again, precisely the faculties that would need to be innately specified for learning to occur, under either the traditional S-R view or the modern cognitive view.
The Vocabulary of Sensation and Perception
The tension between empiricism and nativism runs throughout the study of cognition: we have already seen it in sociobiology and evolutionary psychology, and we will see it again in our treatment of language, and again in our treatment of development. But nobody -- not even Descartes -- denies the importance of sensory experience. Therefore, we begin our study of cognition by describing sensation and perception.
- Sensation detects the presence of an object or event in the environment. The function of our sensory apparatus is to answer such questions as "Has there been a change in the environment?"; "Is there something out there?"; and, if so, "How strong is it?".
- Perception forms a mental representation of that object or event. The function of our perceptual apparatus is to answer such questions as "What is out there?", "Where is it?", and "What is it doing?".
The boundary between sensation and perception is not entirely clear -- it's more like the boundary between the United States and Canada or Mexico, than the Iron Curtain of the Cold War era.
In sensation, we must first distinguish between the distal stimulus and the proximal stimulus.
- The distal stimulus is the object of perception itself -- the tree, or rock, or ocean, or car that exists in the world outside the mind.
- The proximal stimulus consists of all the physical energies emanating from the distal stimulus, and which fall on the sensory receptor organs associated with our eyes, ears, etc. The proximal stimulus might be a pattern of light waves reflected from the tree, or a pattern of sound waves generated by the motion of its branches and leaves as it sways in the breeze, or some other kind of physical energy.
The first step in sensation is transduction, or the conversion of the proximal stimulus (light waves, sound waves, or whatever) into a neural impulse. This is accomplished by the sensory receptor organ. There are many different receptor organs in the body, roughly corresponding to the various modalities of sensation -- vision, audition, etc.
The neural impulse is then transmitted to the cerebral cortex of the brain (such as the primary visual area of the occipital lobe) via various neural pathways (such as the optic nerve).
Sensation continues with the observer's detection of the stimulus, and analysis of certain fundamental qualities such as intensity.
Somewhere around this point we begin to talk about perception, which ends with the construction of a mental representation -- a mental image, if you will -- of the distal stimulus, including a description of its physical features and an analysis of its meaning and implications.
The problem of perception, then, is simply this: How do we get from the distal stimulus, through the proximal stimulus, to the mental representation of the distal stimulus?
The Sensory Modalities
The first step in the process of forming mental representations of the world is sensation -- transforming the physical energies radiating from the stimulus in the world into neural impulses traveling through the nervous system.
How many senses are there -- how many different ways of experiencing, and knowing, the world?
The Greek philosopher Aristotle (384-322 BC), in De Anima, gave the traditional answer of five "special senses": vision, hearing, smell, taste, and touch. (This gives rise to the notion of a "sixth sense", or intuition; this "sixth sense" also plays a role in the pseudoscientific literature on parapsychology.)
In contrast, some authors (e.g., Stoffregen & Bardy in Behavioral & Brain Sciences, 2001) have argued that there are no fundamental differences among the various senses, and that there is no basis for assuming that sensory information reaches us via separate "channels". From this point of view, we might just as well think of ourselves as having only a single sense, sensitive to various features in a global array of stimulation.
Still, Aristotle's insight -- that we have a number of different sensory modalities -- remains by far the more popular view. And for most of the centuries since, authorities considered that Aristotle's five senses exhausted the list -- so much so that an entire genre of art developed out of the idea.
For example, five of the series of six late-medieval (c. 1500) French tapestries known as The Lady and the Unicorn depict the five senses (the sixth tapestry, on the left, is entitled "My Sole Desire"). The tapestries can be viewed at the Musee National du Moyen Age, in Paris -- and they also appear in some of the "Harry Potter" films.
The Lady and the Unicorn (c. 1500), Musee National du Moyen Age, Paris
In 1617-18, Jan Brueghel the Elder, together with his friend Peter Paul Rubens, initiated the genre in Dutch art with a series of five allegorical paintings depicting The Five Senses. You can view them today at the Museo del Prado in Madrid, Spain.
Brueghel and Rubens: The Five Senses (1617-18)
Museo del Prado, Madrid
The Dutch genre was brought to its apex (arguably) by the teenaged Rembrandt van Rijn -- in fact, these are his earliest known surviving works. One of them, depicting taste, has not been seen in 400 years, though there is hope that it might yet be recovered. Another, "The Unconscious Patient", depicting smell, was discovered in a basement and originally put on auction with an asking price of $500-800 (it eventually sold for $870,000!). The four surviving paintings were displayed together for the first time in ages at the Ashmolean Museum in Oxford, under the title "Sensations: Rembrandt's First Paintings". The extant paintings were given allegorical titles:
- "A Pedlar Selling Spectacles (Allegory of Sight)". Yes, that's how "peddler" is spelled by the museum.
- "Three Singers" (Allegory of Hearing".
- "Unconscious Patient (Allegory of Smell)". That's not ether or chloroform, which weren't introduced into surgery until the 19th century. Rather, that's smelling salts, intended to bring the patient back to consciousness.
- The "Allegory of Taste" is the one that's missing, and apparently we don't know what it depicts.
- "Stone Operation (Allegory of Touch). No, not the removal of kidney stones, but rather a procedure in which quack surgeons removed "stones" from the skull to cure patients of their headaches.
Rembrandt's Sensations: Exhibit at the Ashmolean Museum, Oxford, September-November 2016
Exteroception and Proprioception
In contrast to both of these views, I identify nine different sensory modalities, or general domains in which sensation occurs. These modalities may be arranged hierarchically (of course!), beginning with a division (initially proposed by Sherrington in 1906) into two broad categories, exteroception and proprioception. Within each of these categories, there are (of course!) further subcategories.
Exteroception refers to sensations that arise from stimulation of sensory receptors located on or near the surface of the body, and includes three subcategories.
In the distance senses there is no direct contact between the distal stimulus and the sense organ; rather, radiated energy travels a distance between the distal stimulus and the receptor organ. The distance senses include vision (seeing), in which light waves stimulate the rods and cones in the retina of the eye; and audition (hearing), in which sound waves stimulate hair cells on the basilar membrane of the cochlea of the inner ear.
In the chemical senses the distal stimulus makes contact with a receptor organ, initiating a chemical reaction which gives rise to sensory experience. The chemical senses include gustation (tasting), in which chemical molecules in food and drink stimulate taste buds surrounding the papillae of the tongue; and olfaction (smelling), in which airborne chemical molecules stimulate receptors in the olfactory epithelium of the nose.
In the skin senses, also known as somesthesis, the distal stimulus makes contact (more or less) with the surface of the skin, stimulating receptors buried underneath. The skin senses include the tactile sense, in which mechanical pressure from an object stimulates receptors buried in the skin, and the thermal sense, in which the temperature differential between the object and the skin (that is, whether the object is relatively warm or cold, compared to the skin itself) stimulates corresponding receptors buried in the skin. There is also a sensation of pain.
Proprioception refers to sensations concerning the position and motion of the body.
In kinesthesis (sensation of the motion of the body), the stretching of the muscles, contracting of the tendons, and movement in the joints stimulates specialized nerve endings located nearby.
In equilibrium, also known as the vestibular sense, gravitational force pulls on crystals suspended in liquid in the semicircular canals and the saccule and utricle of the inner ear; these crystals then fall on hair cells arranged in various orientations. Equilibrium is sometimes classified as a form of exteroception.
Sherrington also identified a category of interoception, which receives stimulation from internal tissues and organs, such as the viscera and the blood vessels. Because interoception does not typically give rise to conscious sensations -- you can't detect changes in blood pressure, for example -- we won't consider this category of sensation further.
- At the same time, there are possible exceptions, where interoception does appear to give rise to conscious sensation -- for example, when you feel hungry or thirsty, chilly or feverish. Hunger has to do with blood-sugar levels, and thirst has to do with cell fluids, and it's not clear that people really have sensory awareness of these physiological parameters.
- But people do feel something when they're hungry or thirsty -- though maybe these sensations arise from contractions in the stomach or dryness in the mouth -- which wouldn't exactly be interoception, at least in the strict sense.
- People with hypoglycemia often claim that they can tell when their blood-sugar levels have gone down. But it's not clear that they're sensing blood-sugar levels directly. Instead, they may be making an inference about their blood-sugar levels based on other symptoms, such as dysphoria or dizziness.
- It's been claimed that difficulties with interoception are risk factors for eating disorders (such as anorexia, bulimia, or binge eating) and body dysmorphic disorder. Certainly, patients with eating disorders engage in eating behaviors (including fasting) that are discordant with their actual internal states (see "Inside the Wrong Body" by Carrie Arnold, Scientific American Mind, May-June 2012). (More about these disorders in the lectures on Psychopathology and Psychotherapy.)
- Interoceptive skill is sometimes measured with the "heartbeat test" developed by Hugo Critchley and his colleagues: scores on this test are highly correlated with laboratory measures of interoceptive awareness.
- Sit in a comfortable chair and take a few deep breaths; then start a stopwatch and count your heartbeats for 1 minute (without taking your pulse!). This is your estimated pulse.
- Then take your pulse by putting your fingers on your wrist or neck and counting for one minute. Then rest for a minute and do it again. Take the average, to get your average pulse.
- Subtract your average pulse from your estimated pulse.
- Divide the absolute value of the difference by your average pulse to get a decimal.
- Subtract this decimal from 1.
- A score of 0.80 or higher indicates "very good" interoceptive skill.
- A score of 0.60 to 0.79 indicates "moderately good" skill.
- A score of 0.59 or less indicates "poor" interoception.
- Cognitive-behavioral therapists often use biofeedback to help people learn how to control their internal physiological states. The whole point of biofeedback is that people can't sense these states directly, and need special monitors to gain this awareness indirectly. (More on biofeedback in the lectures on Psychopathology and Psychotherapy.)
For more on interoception, see "How Do You Feel? Interoception: The Sense of the Physiological Condition of the Body" by A.D. Craig in Nature Reviews Neuroscience (2002).
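The scoring arithmetic in the steps above can be sketched in a few lines (a minimal sketch; the function names and the example numbers are illustrative, not part of Critchley's protocol):

```python
def interoception_score(estimated_beats, pulse_counts):
    """Score the heartbeat test: 1 minus the absolute difference between
    the estimated and average measured pulse, as a fraction of the average."""
    average_pulse = sum(pulse_counts) / len(pulse_counts)
    return 1 - abs(estimated_beats - average_pulse) / average_pulse

def interpret(score):
    """Map a score onto the verbal labels given above."""
    if score >= 0.80:
        return "very good"
    if score >= 0.60:
        return "moderately good"
    return "poor"

# For example, estimating 68 beats against measured pulses of 72 and 76
# (average 74) gives a score of about 0.92 -- "very good".
```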
Notice that many of these senses are variations on two different mechanisms. In vision, gustation, olfaction, and (probably) the temperature sense, the proximal stimulus initiates a chemical reaction of some kind. In audition, the tactile sense, kinesthesis, and equilibrium, the proximal stimulus is mechanical in nature. It seems likely that, over the course of evolutionary time, these senses progressively differentiated from two primitive sensory modalities, involving photochemical and mechanical stimulation, respectively.
Defining the Sensory Modalities
How do we know how many sensory modalities there are -- that Aristotle was wrong? The organization of the sensory modalities illustrates the basic concept of transduction: the conversion of some physical stimulus energy into a neural impulse. Looking over the sensory modalities, it appears as if each sensory receptor is specifically responsive to a different proximal stimulus energy (which, in turn, radiates from a distal stimulus). In transduction, a particular type of physical energy is converted into a neural impulse, which is carried via a sensory tract through the thalamus (which functions as a kind of sensory relay station) to a particular part of the brain known as a sensory projection area. These four features differentiate among the various modalities of sensation.
Here's how the system works, in simplified form, for each of the nine sensory modalities.
Thus, in vision, the proximal stimulus consists -- naturally -- of light waves, electromagnetic radiation emitted by or reflected from a distal stimulus. Not all light waves give rise to the experience of seeing, however; in humans, only wavelengths of 380-780 nanometers (nm, or billionths of a meter) are visible. Other wavelengths, such as infrared (greater than 780 nm) and ultraviolet (less than 380 nm) are not visible to the human eye, though they might be visible to other animals.
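The visible range just described amounts to a simple boundary check (a sketch; the function name, and the treatment of the exact 380 and 780 nm boundaries as visible, are illustrative assumptions):

```python
def classify_wavelength(nanometers):
    """Classify electromagnetic radiation against the human visible
    range of roughly 380-780 nm described above."""
    if nanometers < 380:
        return "ultraviolet"  # too short for the human eye
    if nanometers > 780:
        return "infrared"     # too long for the human eye
    return "visible"
```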
In any event, light waves pass through the cornea of the eye, and then through the pupil, the size of which is controlled by the iris. Under conditions of dim light, the pupil widens to let more light in; in bright light, the pupil contracts. The light is then focused by the lens of the eye, so that it falls on the retina on the inside back of the eyeball. This is the retinal image.
There, light waves impinge on two types of receptor organs: rods and cones. The rods, of which there are about 100 million, are stimulated by different levels of brightness (intensity); they are responsible for black-white vision, and are especially important for seeing at night. The cones, of which there are about 7 million, are stimulated by different wavelengths of light; they are responsible for color vision, and are particularly useful during the day.
A note on myopia (nearsightedness). The lens focuses the image on the retina. Unfortunately, various conditions cause the eye to become slightly elongated, so that the lens focuses the image on the interior space of the eyeball just in front of the retina -- what is known, in optics, as a refractive error. As a result, the images of distant objects are blurry -- hence the name "nearsightedness". Nearsightedness is increasingly common worldwide, an increase that can be attributed, in part, to increases in up-close viewing of smartphone screens!
Once transduction has taken place, by a photochemical process, the neural impulse travels via bipolar cells to ganglion cells, and is then carried over the optic nerve (cranial nerve II, which is exclusively afferent). Fibers of the optic nerve meet and cross at the optic chiasm, a division in the optic nerve which projects each half-field to the opposite hemisphere of the brain. Thus, retinal images from the left visual field of each eye project to the right hemisphere, and retinal images from the right visual field of each eye project to the left hemisphere.
It is this fact which permits the "split brain" experiments which have been so important in teaching us about hemispheric specialization: stimuli presented to the left visual half-field project to the right cerebral hemisphere, and vice-versa. In the intact brain, these images are also transmitted between the hemispheres; but this transmission is impossible when the corpus callosum has been severed. Of course, by strategically moving the eyes, an image projected on the right half-field can also be brought into the left half-field, and vice-versa. For this reason, in split-brain experiments the stimuli are presented very briefly, and disappear before the eyes have a chance to move.
Farther along on their journey to the brain, the visual impulses pass through the lateral geniculate nucleus (or LGN), a bundle of neurons which is part of the thalamus. Each cell in the LGN corresponds to a particular part of the retina; thus, the LGN processes information about the spatial organization of the visual image.
With the exception of olfaction, all afferent impulses pass through the thalamus on the way to their sensory projection areas (olfactory impulses go directly to the limbic area instead). Thus, the thalamus acts as a kind of a sensory relay station, directing impulses representing various modalities to their appropriate cortical projection areas.
From there, the neural impulses continue on to the primary visual cortex, or Area V1, in the occipital lobe (also known as Brodmann area 17), at the very pole of the occipital lobe. Again, each cell in the visual cortex corresponds to a particular part of the retina -- a feature known as retinotopic organization.
Perhaps the first neuroanatomical drawing ever made was by the Arab physician Ibn al-Haytham, around 1027 CE. It shows the optic nerves leading from the two eyes, meeting at the optic chiasm, and then proceeding to the brain.
Setting the Biological Clock
The human body is subject to a number of biological rhythms, of which the most noticeable is the daily cycle of waking and sleeping. The sleep-wake cycle is known as a circadian rhythm (from the Latin circa, "around" or "about", and dies, "day"; thus, "about a day" or "around the day"). The circadian rhythm produces regular changes in activity levels, hormonal levels, and body temperature throughout the day, and has the function of synchronizing behavior and physiological states to the changing state of the environment. In humans and many other animals, the circadian rhythm is diurnal, meaning we are active during the day; in rats and some other animals, the rhythm is nocturnal, meaning that they are active during the night.
Circadian rhythms are a subset of a larger class of biological rhythms which also includes:
- infradian rhythms, over periods longer than 1 day, such as the 28-day menstrual cycle in human females;
- circannual rhythms, such as the seasonal cycles of hibernation and mating in some nonhuman animals; and
- ultradian rhythms, over periods shorter than 1 day, such as the 90-minute sleep cycle superimposed on our basic circadian rhythm.
The existence of biological clocks was originally inferred by Curt Richter and other investigators from behavioral evidence. In the absence of any light cues, activity levels, hormonal levels, and body temperature vary regularly on a cycle that is about 25 hours in length. In the natural environment, this free-running rhythm is entrained to the 24-hour day by the actual pattern of light and dark.
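The logic of entrainment can be illustrated with a toy simulation (a sketch under simplifying assumptions: a 7 a.m. baseline wake time, a clock that runs exactly 25 hours, and a Zeitgeber that fully resets the clock each day):

```python
def wake_times(days, period=25.0, zeitgeber=True):
    """Clock time (hours past midnight) at which the circadian clock
    signals waking, day by day. Without light cues, a ~25-hour clock
    drifts about an hour later each day; a daily light Zeitgeber
    resets it to the 24-hour day."""
    times, t = [], 7.0  # assumed initial wake time: 7 a.m.
    for _ in range(days):
        times.append(t % 24)
        # The free-running clock runs (period - 24) hours late each day.
        t = 7.0 if zeitgeber else t + (period - 24.0)
    return times

# Without a Zeitgeber, waking drifts later: 7 a.m., 8 a.m., 9 a.m., ...
# With one, it stays anchored at 7 a.m.
```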
More recently, the biological substrate of circadian rhythms has been located in the suprachiasmatic nucleus (SCN) of the hypothalamus, a structure which is located just above the optic chiasm in the brain (the optic chiasm is the place where the optic nerves from the two eyes come together). The SCN receives information about environmental light through a special neural pathway known as the retinohypothalamic pathway, permitting light to serve as a Zeitgeber (from the German Zeit, "time", and geber, "giver"; thus, "timegiver").
But where does the information about light come from? The retina of the eye, obviously. For a long time it was thought that the rods and cones provided this information, as well as information about the rest of the visual world. However, in 2002 David Berson and his associates reported the discovery of a new kind of photoreceptor cell in the retina, existing alongside the rods and the cones employed in vision (Science, 2/8/02). It is these cells, a subset of retinal ganglion cells that appear to contain the photopigment melanopsin, not the rods and cones themselves, that provide information to the SCN.
As discussed below, we know that people who lack certain types of cones suffer from various forms of colorblindness. This new discovery suggests that there might be people who, because they lack these specialized photoreceptor cells, suffer a kind of timeblindness. Whether any such people actually exist remains to be determined by future research.
Of more immediate practical significance, the existence of these intrinsically photosensitive retinal ganglion cells (ipRGCs) has implications for sleep disorders -- particularly those suffered by people who sleep with their cell phones or tablets by their side, or who go to sleep with the television on. The reason is that while these ipRGCs are sensitive to any strong light (such as the light of the rising sun), they are particularly sensitive to "blue" light -- which is precisely what the "blue" light-emitting diodes in flat screens emit. And to make things worse, the "blue" LEDs emit stronger light than red and green ones do. So as long as these screens are illuminated, they're stimulating those ipRGCs, which send messages to your SCN which say, effectively, "Sun's rising, time to wake up and go to work hunting and gathering". And those screens are on all night, for all too many of us. And so all too many of us don't get the sleep we need. So do yourself a favor, and practice "electronic abstinence": turn off the TV and keep your cell phone in another room while you sleep. Even better, turn those screens off a couple of hours before you go to bed.
The Eye and Evolution
The eye often figures in arguments against Darwin's theory of evolution by natural selection. As early as 1802, before Darwin had ever published a word -- indeed, before Darwin (1809-1882) was even born -- William Paley, an English clergyman, argued that the human eye was so intricate that it could only have been intentionally designed (in Natural Theology or Evidences of the Existence and Attributes of the Deity):
To buttress his argument, Paley coined
the analogy of the divine watchmaker.
In our time, Paley's "watchmaker" argument has been revived by Michael Behe, in Darwin's Black Box (1996) in the form of irreducible complexity. Behe's idea is that some biological structures, like the eye, are simply too complex to have evolved incrementally, through natural selection of purely random mutations. If the eye evolved, its evolution must have been directed -- and directed by God.
Darwin himself echoed Paley's argument.
Nevertheless, Darwin went on to conclude that the eye, like everything else in the biological world, had developed through the same process of natural selection:
Darwin's reasoning has been vindicated by modern evolutionary biology: precisely the gradations he discussed have been observed in animals which are alive today.
If, that is, the organism has a brain to receive them. Some organisms, like the box jellyfish, don't have anything like a brain, so they're not capable of perception as we defined it earlier. That is, they don't have the wherewithal to generate internal, mental images of the objects in their environment. So you need more than just an eye to see. You've also got to have a brain.
Eyes didn't just evolve from simple to complex, either. The eyes of individual species evolved in a manner that supported each species's adaptation to its ecological niche.
And so it goes, all of these species differences produced by evolution, building gradually on a primitive base.
In audition, the proximal stimulus consists -- again, naturally -- of sound waves: mechanical vibrations in the air, with frequencies ranging from 20 to 20,000 cycles per second. These vibrations are funneled by the auricle of the outer ear into the auditory canal, and set the tympanic membrane into sympathetic vibration.
Other species have different ranges of audible frequencies -- for example, bats, which use ultrasonic frequencies to navigate.
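The species-specific audible range can be captured in the same boundary-check style as the visible spectrum (a sketch; the exact limits vary with age and from one individual to another):

```python
def audible_to_humans(hz):
    """True if a frequency lies within the human audible range of
    roughly 20 to 20,000 cycles per second (Hz)."""
    return 20 <= hz <= 20_000

# audible_to_humans(440) is True (concert A);
# audible_to_humans(40_000) is False (ultrasonic, as used by bats).
```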
The vibrations of the tympanic membrane then pass through the bony structures of the hammer (malleus), anvil (incus), and stirrup (stapes) to the oval window, and thence to the basilar membrane of the cochlea, which vibrates against hair cells in the organ of Corti.
The impulses then pass through the cochlear nucleus, the inferior colliculus, and the medial geniculate nucleus of the thalamus, on their way to the auditory projection area. The auditory projection area, also known as A1, consists of Brodmann areas 41 and 42 in the temporal lobe of the brain, adjacent to the lateral fissure. Different segments of Area 41 are responsive to different frequencies of sound, a principle known as tonotopic organization.
Cochlear Implants for the Deaf
Understanding how transduction is accomplished in the auditory modality helps us to understand some of the most frequent causes of deafness. One major form of deafness is known as conduction deafness, in which there is some impediment (wax buildup, or a problem with the bones) to the transmission of sound waves through to the inner ear. In many cases, conduction deafness can be treated medically or surgically, or alleviated by a hearing aid. Sensory-neural hearing losses arise from damage to the neural structures themselves -- in the cochlea, the auditory nerve, or even the temporal lobe.
One very common cause of sensory-neural deafness is the loss of hair cells in the cochlea (in the most common forms of congenital deafness, where a child is born deaf, the degeneration of hair cells occurs during fetal development). If the hair cells are damaged, no neural impulses can arise to be transmitted along the auditory nerve to the auditory projection area. There is no treatment for this condition at present. However, it is possible to employ cochlear implants that will make up for the loss of hair cells. These are electronic devices which pick up sound waves from the environment, and transduce them into electrical impulses which then directly stimulate the auditory nerve. Assuming that the nerve and the projection areas are not damaged, some degree of hearing can be restored (see below for the discussion of Muller's "Doctrine of Specific Nerve Energies"). It should be understood, however, that at present cochlear implants are of limited utility.
- A normal cochlea contains thousands of nerve cells to be stimulated by the basilar membrane, but at present (2014) the most advanced cochlear implants employ fewer than two dozen electrodes, thus greatly reducing the possibility of pitch discrimination. One early user of cochlear implants has said that they make everyone sound like "R2D2 with laryngitis" (Andrew Solomon, "Defiantly Deaf", New York Times Magazine).
- There are some risks associated with cochlear implants, and they destroy whatever residual hearing capacity the individual might have (for this reason, the devices are sometimes implanted in only one ear).
- Moreover, unless the devices are implanted in infancy, the person may never acquire spoken language (see the Supplement on Language, below, for a discussion of critical periods in language acquisition).
Until recently, it was thought that these hair cells do not grow further after the fetal period, and if damaged do not regenerate; now there is some evidence from animal experiments that both of these assumptions may be false. This new evidence raises the possibility that it may be possible to develop medical treatments that will promote the growth or regeneration of cochlear hair cells in children and adults (though we are a long way from this point now).
Many individuals in the Deaf community (the capital "D" denoting a culture rather than a medical condition) oppose cochlear implants and any other medical or surgical treatments on the ground that they imply that deafness is a disability that can be prevented or cured, rather than a matter of individual differences with which everyone, deaf and hearing, ought to cope. Such arguments have gained force as we move further toward multicultural awareness. Regardless of what position one takes on that issue, however, it is quite clear that deaf individuals fare better when they learn to communicate using American (or some other national) Sign Language, rather than relying on vocalization and lip-reading. Forbidding deaf children to learn sign language, a position advocated by Alexander Graham Bell and others in the 19th century, retards their intellectual development and essentially renders them illiterate. (Unless forcibly prevented from doing so, deaf children who are not taught a sign language will spontaneously develop one.) ASL is a full-fledged language with its own grammar (Signed English, by contrast, is just a translation of written English into signs), and the evidence is that deaf children have the best outcome when they are raised bilingually, acquiring ASL as well as written English.
Ordinary hearing aids simply amplify the sound waves that reach the eardrum. In essence, they function no differently than loudspeakers or earphones. But what if you don't have a functioning eardrum, or the structures of the middle ear are damaged? Under these circumstances, no matter how loud the sound is, the sound wave can't get through to the cochlea to be transduced into a neural impulse. Accordingly, cochlear implants bypass the damaged parts of the ear, and stimulate the auditory nerve directly.
Cochlear implants consist of a microphone and miniature radio transmitter, worn behind the ear much like a standard hearing aid. The microphone sends electronic signals through wires to a speech processor carried in the person's pocket, which converts the sound into a coded signal that the brain can understand. This signal is then sent back to the radio transmitter, which "broadcasts" to a miniature radio receiver implanted in the person's head. The receiver then sends the signals over another wire to an array of electrodes implanted in the cochlea (hence the name of the device), which provides appropriate stimulation to the auditory nerve. Then the auditory nerve completes the job by transmitting neural impulses to the temporal lobe (see Rauschecker & Shannon, "Sending Sound to the Brain",Science, 2/8/02).
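The limited pitch discrimination noted above follows directly from this design: the processor has to collapse the whole frequency spectrum onto a couple of dozen electrodes. The following toy sketch shows the basic filter-bank idea; the function name and parameter values are illustrative assumptions, and real devices use more sophisticated encoding strategies.

```python
import numpy as np

def speech_processor(signal, sample_rate, n_electrodes=22,
                     low_hz=200.0, high_hz=7000.0):
    """Toy cochlear-implant processor: divide the spectrum into one
    frequency band per electrode, and use the energy in each band
    as that electrode's stimulation level (normalized to 0..1)."""
    # Logarithmic band edges roughly mimic the cochlea's tonotopic map
    # (high frequencies at the base, low frequencies at the apex).
    edges = np.logspace(np.log10(low_hz), np.log10(high_hz),
                        n_electrodes + 1)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    levels = np.empty(n_electrodes)
    for i in range(n_electrodes):
        band = (freqs >= edges[i]) & (freqs < edges[i + 1])
        levels[i] = spectrum[band].sum()
    return levels / max(levels.max(), 1e-12)

# A pure 1000-Hz tone concentrates its energy on a single electrode;
# every frequency within that band is rendered identically, which is
# why pitch discrimination suffers with only ~22 channels.
rate = 16000
t = np.arange(rate) / rate
levels = speech_processor(np.sin(2 * np.pi * 1000 * t), rate)
```

With thousands of hair cells replaced by 22 bands, nearby frequencies become indistinguishable, which is one reason implant users describe speech as robotic.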
Of course, this process depends on the
auditory nerve being intact. But, perhaps someday, the same
sort of process could be used to send signals directly to
the auditory cortex of the temporal lobe. Remember: it
doesn't matter how the signal gets there; it only matters
where it goes!
For treatment of psychosocial issues surrounding deafness and Deaf culture, see the Solomon article cited earlier; also two books by Harlan Lane: When the Mind Hears and The Mask of Benevolence; also Seeing Voices by Oliver Sacks.
Retinal Implants for the Blind
Analogous retinal implants, intended to
restore vision in patients suffering blindness caused by
degeneration of the retinal tissue (such as retinitis
pigmentosa or macular degeneration), are currently under
development. One such device, an artificial retina,
consists of a camera mounted on glasses, which sends
information to a microcomputer that, in turn, stimulates
electrodes implanted in the retina of the eye -- which, in
turn, stimulate the rods and cones in the retina. In
retinitis pigmentosa, the rods and cones are no longer
sensitive to light, but they can convert the electrical
impulses from the artificial retina into neural impulses
which travel over the optic nerve to the visual centers of
the occipital cortex. See Zrenner, "Will Retinal Implants
Restore Vision?" (Science, 2/8/02), and "The Bionic
Eye" by Jerome Groopman (New Yorker).
In fact, the prototype of an artificial retina was introduced by the Lawrence Livermore National Laboratory in 2010. It used 16 (the "first generation") or 60 (the "second generation") microelectrodes, and allowed patients to distinguish light from dark areas and detect movement. Lawrence Livermore scientists are currently working on a "third generation" implant which would allow some perception of form, and estimate that an implant with only 1,000 microelectrodes would actually permit facial identification. The 60-electrode version was named "Invention of the Year" by Time magazine in 2013.
Vestibular Implants for the Dizzy
A similar idea for vestibular implants is intended
to help patients who suffer from vertigo and other disorders
of balance. In one proposal, a miniature gyroscope
senses head rotation, sending signals to electrodes in
the inner ear which connect to the vestibular nerve.
For more details, see "Regaining Balance with Bionic Ears"
by Charles C. Della Santina, Scientific American.
Sensory Substitution Technologies
Some blind people can profit from retinal implants, some deaf people can profit from cochlear implants, and all of this makes sense given what we know about the organization of the sensory modalities. Somehow, electrical signals representing a stimulus have to get to the relevant sensory projection area in the brain. Ordinarily this is accomplished by the sensory receptors and the sensory nerves, but it is possible, using modern electronic technology, to sidestep these parts of the process.
Another approach to rehabilitation was pioneered by Paul Bach y Rita, a neurologist at the University of Wisconsin (Nature, 1969). Bach y Rita connected a television camera to a matrix of 400 electrodes attached to the back of a blind patient. The camera converted the scene to a low-resolution image (essentially, 400 pixels), which was transmitted to the electrodes in the form of mild electrical shocks. The patient could then "feel" the scene through the tactile receptors in his or her skin. And it worked. Bach y Rita's subjects, all of whom had been blind from birth, learned to decode the pattern of somatosensory stimulation, and thus "see" the scene in front of them.
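As a rough sketch of what the front end of such a device has to do, here is the image-to-electrode conversion under illustrative assumptions (a 20 x 20 grid matching the 400 electrodes; the function name and scaling are hypothetical, not Bach y Rita's actual engineering):

```python
import numpy as np

def to_tactile(image, rows=20, cols=20):
    """Reduce a grayscale image to a coarse grid of stimulation
    intensities -- one value per electrode, as in the 400-point
    (20 x 20) display described above."""
    h, w = image.shape
    # Crop to a multiple of the grid, then average each cell's pixels.
    cell = image[:h - h % rows, :w - w % cols]
    cell = cell.reshape(rows, (h - h % rows) // rows,
                        cols, (w - w % cols) // cols).mean(axis=(1, 3))
    lo, hi = cell.min(), cell.max()
    # Rescale to 0..1 "shock intensity".
    return (cell - lo) / (hi - lo) if hi > lo else np.zeros_like(cell)

# A bright square on a dark background survives the downsampling
# as a coarse tactile pattern of 400 "pixels".
scene = np.zeros((200, 200))
scene[50:150, 50:150] = 1.0
pattern = to_tactile(scene)
```

The point of the sketch is simply that everything about the scene must squeeze through those 400 values, which is why resolution becomes the central problem.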
There are limits on this technology. The equipment was bulky, but more important, the resolution of the scene was limited by what is known as the two-point threshold. If you stimulate two different points on the skin, and gradually bring the two points closer and closer together, there will come a point (sorry) where the two stimuli are indistinguishable, and the two physical points are perceived as only a single point. Although the skin is pretty sensitive, the two-point threshold effectively limits the resolution of the somatosensory "scene". You can make the physical matrix of electrodes smaller and smaller; that's just a matter of technology. But you can't reduce the two-point threshold -- unless you switch targets for the somatosensory stimulation to some other area of the body. In more recent work on a device known as the BrainPort, investigators following in Bach y Rita's footsteps (he died in 2006, but not before establishing an engineering company to develop his device) have applied a miniaturized electrode matrix, embedded in something that looks like a flat lollipop, to the tongue -- a very sensitive piece of skin with a very low two-point threshold. And it works. Similarly, the vOICe, the product of another company, translates the visual scene into a pattern of sound; yet another device, intended for the deaf, translates auditory stimuli into patterns of tactile stimulation, useful for burn patients who have lost tactile sensitivity in their skin.
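A back-of-the-envelope calculation shows why moving the matrix to the tongue pays off. The threshold values below are representative figures from classic psychophysical studies; the exact numbers vary across individuals and studies, so treat them as ballpark assumptions.

```python
# Approximate two-point thresholds, in millimeters (illustrative
# values: the back is among the least sensitive regions, the
# tongue among the most sensitive).
THRESHOLD_MM = {"back": 40.0, "fingertip": 3.0, "tongue": 1.0}

def max_resolvable_points(length_mm, site):
    """Maximum number of separately distinguishable stimulation
    points along a strip of skin of the given length."""
    return int(length_mm // THRESHOLD_MM[site]) + 1

# A 400-mm (40-cm) strip of the back resolves only about 11 points,
# while a mere 40 mm of tongue resolves about 41.
back_points = max_resolvable_points(400, "back")
tongue_points = max_resolvable_points(40, "tongue")
```

In other words, a patch of tongue a tenth the size of the back supports several times the tactile resolution, which is exactly the trade the BrainPort exploits.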
All this technology works because, as Bach y Rita famously put it, "You hear with your brain, not with your ears". Aside from their practical applications in rehabilitation technology, these sensory-substitution devices illustrate a general principle: it doesn't matter how a signal gets to the brain; it only matters where it ends up.
Gustation, the sense of taste, begins with certain chemical molecules carried on food and drink (and, for that matter, anything else you put in your mouth), dissolved in saliva (remember Pavlov's Nobel-Prize-winning research on the salivary reflex?).
The dissolved molecules then fall on taste
buds in the papillae (bumps) of
the tongue, as well as the palate and the throat (and even in
the lining of the intestine). Each papilla can contain
up to 700 taste buds, and there are about 10,000 taste buds in
all. The dissolved molecules then bind to chemical
receptors on the taste buds, just like the
"lock-and-key" mechanism described for synaptic transmission
(see the lectures on Biological
Bases of Mind and Behavior; also the lectures on Psychopathology and Psychotherapy,
where the lock-and-key mechanism is discussed in the context
of psychiatric drugs).
Neural impulses generated by the taste buds (and maybe similar receptors in the palate) are carried over an afferent portion of the glossopharyngeal nerve (cranial nerve IX) as well as other cranial nerves, such as the facial nerve (VII) and the vagus nerve (X).
The primary gustatory area, sometimes designated G1, is located in the anterior insular cortex, deep inside the lateral fissure that separates the frontal and temporal lobes, and perhaps the frontal portion of the operculum that folds over the lateral fissure -- as well as that portion of somatosensory cortex that receives sensory impulses from the tongue.
Olfaction, the sense of smell, begins with airborne chemical molecules; neural impulses are generated when these molecules make contact with receptor cells embedded in the olfactory epithelium.
The resulting neural impulses are carried over the olfactory nerve (I), which begins in the olfactory bulb buried underneath the frontal lobe of the cerebral cortex. Of all the sensory systems, only the olfactory system does not use the thalamus as a sensory relay station.
The primary olfactory cortex, or O1, lies in the limbic system, including the prepyriform cortex and the periamygdaloid cortex.
The standard textbook story is that human olfaction is pretty poor, especially when compared to other species. Paul Broca (he of Broca's aphasia) is responsible for starting this rumor, based on the observation that the human olfactory bulbs (we have two of them, one for each hemisphere) are relatively small compared to the rest of the brain; Sigmund Freud (he of psychoanalysis) promoted the idea as well. In fact, the proportion of neurons in the olfactory bulb, compared to the rest of the brain, is pretty constant across a wide range of mammalian species. But what really matters is relative sensory acuity, and there, it turns out, humans don't do so badly: we're worse than some animals with some odorants, but better than most with others. For a review and refutation of ideas about human "microsmaticity", see "Poor Human Olfaction is a 19th-Century Myth" by John P. McGann, in Science (2017). McGann points out that, while dogs can distinguish lots of different dogs' urine on a fire hydrant, people can distinguish the smell of lots of different types of wine!
A special case of olfaction
is the activity of pheromones, chemical
signals that serve as a medium of communication, especially in
nonhuman animals (which is one reason why your dog or cat
spends so much time marking its territory, and sniffing out
the marks of other animals that may have violated its space).
Pheromones may play a role in human behavior, as well.
- A paper by Martha McClintock (Nature, 1971) reported that roommates and close friends at Wellesley College, then as now an all-women's residential college, tended to synchronize their menstrual cycles (she obtained the same effect among cloistered Roman Catholic nuns).
- A paper by Noam Sobel (Science, 2011) reported that men show diminished sexual arousal and desire when exposed to "emotional tears" produced by women crying in response to sad stimuli.
Both effects are presumably mediated by
pheromones carried either in menstrual blood or tears. These
are chemical substances, carried over the air and into the
nose, so technically they count as olfactory stimuli. But the
interesting thing is that pheromones themselves do not give
rise to conscious sensations. Still, they affect the behavior
of others who are exposed to them.
Although the indirect evidence for
pheromones is pretty convincing, direct evidence has been
harder to come by. A great deal of research has focused
on two candidates: androstadienone (AND),
derived from male sex hormones, and estratetraenol (EST),
derived from female sex hormones. Both are found in
sweat, and both have been turned into so-called "pheromone
perfumes" marketed as aphrodisiacs. Recently, however, a
placebo-controlled, double-blind experiment found no effects
of either substance on the classification of androgynous faces
(e.g., sexually aroused heterosexual men should have
classified them as female, sexually aroused heterosexual
women should have classified them as male), or on ratings of
other photographs for sexual attractiveness (Hare et al.,
Royal Society Open Science, 2017).
For more information on pheromones, see:
- "The Scent of Your Thoughts" by Deborah Blum (Scientific American, 10/2011);
- "50 Years of Pheromones" by Tristram D. Wyatt (Nature, 2009, 457, 262-263).
For a contrary view, see:
- The Great Pheromone Myth by Richard L. Doty (2010).
Interactions Among Smell, Taste, Touch, Vision, and Sound
Gustation and olfaction, although mediated by
different neural structures, interact with each other:
- A head cold clogs the nasal passages, preventing the person from smelling properly; but food is also tasteless.
- Cigarette smoke clogs the taste pores, but perfume is also odorless to cigarette smokers.
- In a famous demonstration, blindfolded tasters have
difficulty identifying the taste of a strawberry if they
cannot also smell it.
Taste also interacts with the tactile sense: what is known
as "mouthfeel" -- the texture of food, whether it is crunchy
or creamy, for example -- will also affect how it tastes.
- Pringles potato chips, which are manufactured so that each one is precisely identical to all the others, seem crisper and fresher if the sound of their "crunch" is amplified through earphones (Spence & Zampini, J. Sensory Studies, 2004).
Taste also interacts with color: a strawberry dessert tastes much sweeter when served in a white container, compared to a black one.
And it even interacts with shape.
The "Pringles" study won the 2008 Ig Nobel Prize for Nutrition, but all of these crossmodal effects reflect a basic principle of sensory integration: the sensory modalities do not operate independently of each other. Rather, the activity of different sensory systems is integrated to create a coherent, unified sensory experience.
All of these stimulus modalities come together to produce
flavor: the flavor of any food is given by a combination of
taste, smell, and touch. This is easily demonstrated by
tasting a strawberry (without, of course, knowing that it is
a strawberry -- otherwise you're liable to commit the
stimulus error!) while holding your nose. You will
taste the sweetness of the fruit, but you will not taste the
flavor of strawberry until you breathe again.
Similarly, irritants like carbonation, mint, and pepper
contribute to flavor through the tactile modality.
What pepper contributes to the taste of food is -- not to
put too fine a point on it -- tactile pain.
Actually, most of the flavor comes from the nose, not the
tongue. When we chew and swallow our food, airborne
molecules from the food are forced up into the nose through
the retronasal pathway, where they bind with odor
receptors. Amazingly, the brain distinguishes
between ordinary odors, entering the nose through the
nostrils in what is known as orthonasal olfaction,
and those which enter the nose from behind, in what is
known as retronasal olfaction. But, also
amazingly, the same receptors are involved in both kinds of
olfaction. Instead, the brain keeps track of whether
we're sniffing something (orthonasal olfaction) or
chewing and swallowing (retronasal olfaction).
How? Possibly, by distinguishing whether receptors in the
olfactory epithelium are being stimulated at the
same time as the taste buds are.
- Taste is the sensory experience that arises from stimulation of our taste buds.
- Smell is the sensory experience that arises from orthonasal stimulation through the nostrils.
- Flavor is the
sensory experience that arises from retronasal
stimulation through the retronasal pathway.
The Tactile Sense
The proximal stimulus for touch is mechanical
pressure, which causes deformation of the skin or
hair shafts. The tactile sense is also sensitive to vibration
and electrical stimulation. These stimuli activate a number
of mechanoreceptors embedded in the skin:
- free nerve endings buried in the epidermis of the skin;
- "perifollicular" or "basket" endings wrapped around the base of hair follicles;
- Merkel's disks;
- Meissner's corpuscles; and
- Pacinian corpuscles (and there may be others).
Neural impulses arising from these receptor organs are then carried over the afferent tracts of the spinal nerves, and up the afferent tract of the spinal cord, as well as over some of the cranial nerves (e.g., the trigeminal nerve (V) and the facial nerve (VII)).
The Thermal Sense
The stimulus for temperature, the sense of warmth and cold, is a temperature differential between the stimulus and the skin itself. Note that "cold" and "warm" are not the effective proximal stimuli: a stimulus is felt to be "cold" only if its temperature is lower than that of the skin with which it makes contact; the same goes for "warm".
Positive and negative differentials stimulate neural activity in the Krause end-bulbs (for relatively cold stimuli) and Ruffini end-organs (for relatively warm stimuli). Simultaneous stimulation of both receptors generates the sensation of hot as opposed to warmth. The free nerve endings are also sensitive to temperature differentials.
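The logic of the thermal receptors described above can be summarized in a few lines of code. The skin temperature and the cutoff for "hot" are illustrative values, not physiological measurements, and the assumption that very warm stimuli also recruit the cold receptors ("paradoxical cold") is one standard account of the "hot" sensation.

```python
def thermal_sensation(stimulus_c, skin_c=33.0, paradox_c=45.0):
    """The effective stimulus is the temperature differential.
    Negative differentials stimulate the Krause end-bulbs (cold),
    positive ones the Ruffini end-organs (warm); when both fire
    at once, the sensation is "hot"."""
    diff = stimulus_c - skin_c
    krause = diff < 0 or stimulus_c >= paradox_c   # cold receptors
    ruffini = diff > 0                             # warm receptors
    if krause and ruffini:
        return "hot"
    if krause:
        return "cold"
    return "warm" if ruffini else "neutral"
```

Note that the same physical stimulus (say, 30-degree water) can feel cold or neutral depending on the current skin temperature, which is the point of defining the proximal stimulus as a differential.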
Neural impulses are then carried over the afferent tracts of the spinal nerves, and up the afferent tract of the spinal cord as well as over some of the cranial nerves (as noted above).
And these impulses, too, finally arrive at the somatosensory projection area of the parietal lobe, and are distributed to the appropriate region of the somatosensory homunculus.
Cutaneous Pain (Nociception)
There are free
nerve endings in the epidermis of the skin that are
sensitive to pain; but they are also sensitive to touch and
temperature. Pain itself comes in two varieties:
- Fast pain -- sharp or prickling, with a rapid onset and rapid offset -- appears to be mediated by A-delta fibers in the spinal (and some cranial) nerves.
- Slow pain -- aching, throbbing, or burning, with a slow onset and slow offset -- appears to be mediated by C-fibers in the same nerves.
Fast pain sensations then travel up the spinal cord along the neospinothalamic tract, while slow pain sensations travel along the paleospinothalamic tract.
Fast pain impulses pass through the
thalamus to the somatosensory cortex,
while their slow pain counterparts terminate in the brainstem.
Pain sensations usually result from
injury to some tissue, like the skin. But chronic pain
can persist even after the injury has healed. In many
cases of chronic pain, neurons in the "pain pathway" become
abnormally excitable, and discharge impulses toward the
brain even though they're not receiving any input from injured tissue.
A Mechanism for Acupuncture?
The distinction between slow and fast pain forms the basis for a prominent theory of how acupuncture works. The theory is that fast and slow pain are in an antagonistic relationship to each other -- meaning that one can counteract, or inhibit, the other. According to the theory, slow pain signals, stimulated by acupuncture needles and processed at the level of the brain stem, prevent fast pain signals from getting past the brain stem, and through the thalamus, to the somatosensory projection area. It's a pretty good theory.
Usually, we think of pain as a skin sense, but pain is more complicated than this. Almost any stimulus will produce pain if it is intense enough: sound, light, heat, or cold are good examples (it is not clear whether intense tastes and odors are actually painful, no matter how unpleasant they might be). Moreover, pain is also produced by muscle fatigue, and by the introduction of salt and other chemicals into wounds and other openings in the skin. So, setting aside cutaneous pain, in some sense there is no unique proximal stimulus for pain. And because pain can be experienced in (almost) all sensory modalities, in a sense there are no unique receptor organs for pain, either -- and no specific sensory tract, and no specific projection area.
The situation is complicated further by the fact that, regardless of origin, pain appears to have two broad components:
- Sensory pain, which provides information about the locus and extent of injury. Sensory pain activates the somatosensory cortex, which represents the location of the pain on the somatosensory homunculus.
- Suffering, which has more to do with the meaning of the pain. Suffering activates the anterior cingulate gyrus in the frontal lobe of cerebral cortex.
In the final analysis, the experience of pain is substantially determined by the meaning of the stimulus. In doctors' offices, people flinch when there isn't much pain. The responses of injured children differ markedly depending on whether an audience is present. Placebos provide genuine relief of pain. Soldiers experienced different levels of pain from objectively similar wounds in World War II, Korea, and Vietnam. And many religious rituals in tribal cultures (e.g., fertility, puberty) would appear objectively painful, but elicit no signs of pain from the participants.
The bottom line is that pain is often more a problem of perception than of sensation. The experience of pain depends not so much on the transduction of stimulus energies as on higher mental processes involving perception, memory, and thought.
Review of the Skin Senses
So, just to review how the skin senses work: mechanical pressure, vibration, or tissue inflammation stimulates specialized sensory receptors in the skin. This generates neural impulses, which travel over the afferent branch of the spinal nerves to the spinal cord. There they may generate an involuntary spinal reflex in response to the stimulus. Then the sensory impulses travel up the ascending tract of the spinal cord to the somatosensory projection area in the parietal lobe. There they might generate a voluntary response to the stimulus.
Kinesthesis, the sense of body position and movement, begins with movements of the skeletal musculature, which stimulate nerve endings embedded in these body structures:
- neuromuscular spindles embedded in muscles;
- neurotendinous organs, also known as Golgi tendon organs, embedded, naturally enough, in the tendons;
- free nerve endings in the joints between bones.
These impulses, too, are carried over the afferent tracts of the spinal nerves, and up the afferent tract of the spinal cord as well as over some of the cranial nerves.
And finally to the somatosensory projection area of the parietal lobe.
Equilibrium, the sense of balance, begins with gravitational force, which pulls on otoliths, tiny crystals suspended in the semicircular canals, and the saccule and utricle of the vestibular sac, structures located in the inner ear. The semicircular canals are arranged at approximate right angles to each other, and detect rotation of the head. The saccule and utricle detect the position of the head with respect to gravity.
These structures contain
hair cells, which are stimulated by the
crystals, generating neural impulses in much the same manner
as the cochlea does for audition.
- Rotary motion of the head will cause the otoliths to fall on different hair cells in the semicircular canals, thus signaling a change in the organism's orientation with respect to gravity.
- Linear motion of the head in a forward direction causes the otoliths to fall on hair cells in the posterior portion of the vestibular sac, while motion in a backward direction stimulates hair cells in the anterior portion.
The neural impulses are then carried over the vestibular branch of the vestibulocochlear nerve (VIII).
Unlike kinesthesis and the skin senses, neural impulses relating to equilibrium and change of motion carried over the vestibular division of cranial nerve VIII project to the cerebellum.
The Doctrine of Specific Nerve Energies
To repeat, in the
abstract, each modality of sensation is characterized by
- the proximal stimulus (or type of physical energy falling on a receptor organ);
- the receptor organ which is stimulated (ordinarily, prepared to respond to a particular proximal stimulus);
- the sensory tract leading from the sensory surfaces to the brain, including the afferent tract of the spinal nerves and the afferent tract of the spinal cord, as well as certain cranial nerves; and
- the sensory projection area in the cortex, a particular area of the brain which is the final destination of afferent impulses arising from the sensory receptors and transmitted along the sensory tracts.
At first glance, it
seems that each sensory receptor organ is specific to a
different kind of proximal stimulus -- which would mean that
the various sensory modalities are actually defined by
particular kinds of proximal stimuli.
- In vision, electromagnetic radiation falls on the rods and cones in the retina.
- In audition, mechanical vibrations in air stimulate hair cells in the basilar membrane of the cochlea in the inner ear.
- In gustation, chemical molecules stimulate the taste buds surrounding the papillae of the tongue.
- In olfaction, they stimulate receptors in the olfactory epithelium.
- In touch, mechanical pressure stimulates a number of specialized receptors, such as "basket" endings, Merkel's disks, and Meissner's and Pacini's corpuscles, buried in the skin.
- In temperature, a heat differential stimulates the Krause end-bulbs and the Ruffini end-organs.
- In pain, tissue inflammation stimulates a-delta and C-fibers.
- In kinesthesis, movements of the skeletal musculature stimulate specialized nerve endings, such as the neuromuscular spindles and the neurotendinous organs.
- And in equilibrium gravitational force and acceleration acts on crystals that stimulate hair cells.
This would be a great, and simple, system. Unfortunately, it turns out that the modality of sensation is not uniquely associated with any particular proximal stimulus or receptor organ.
For example, other stimuli can lead to a particular sensation. If you close your eyes and press your finger lightly on your eyelid, you will experience the sensation of touch stemming from pressure that deforms the surface of the skin. But you will also experience the sensation of vision, in terms of various lights and colors. Thus, a particular proximal stimulus (pressure) somehow stimulates the receptors for vision as well as those for touch. Similarly, if you box someone's ears (Please don't do this, just take my word for it!) the victim will experience touch (to put it mildly), but also a ringing of the ears.
Electrical stimulation of various sensory receptors can also give rise to sensations -- which is how cochlear implants (and, soon, retinal implants) work.
Proof that the modality of sensation can't be defined in terms of either the proximal stimulus or the sensory receptor comes from electrical stimulation of the afferent nerves leading from the receptor organ to the central nervous system, or of the sensory projection areas in the cerebral cortex. Either case produces sensory experience in some modality, vision or audition or whatever, even though there has been no proximal stimulus in the usual sense nor any activity in any sensory receptor per se.
The same cause, such as electricity, can simultaneously affect all sensory organs, since they are all sensitive to it; and yet, every sensory nerve reacts to it differently; one nerve perceives it as light, another hears its sound, another one smells it; another tastes the electricity, and another one feels it as pain and shock. One nerve perceives a luminous picture through mechanical irritation, another one hears it as buzzing, another one senses it as pain. . . He who feels compelled to consider the consequences of these facts cannot but realize that the specific sensibility of nerves for certain impressions is not enough, since all nerves are sensitive to the same cause but react to the same cause in different ways. . . (S)ensation is not the conduction of a quality or state of external bodies to consciousness, but the conduction of a quality or state of our nerves to consciousness, excited by an external cause.
According to Muller, the modality of sensation is not determined by the proximal stimulus alone, but rather by the stimulation of particular nerves. These nerves are ordinarily responsive to some specific proximal stimulus, but they can also respond to others. You can have a sensory experience without a proximal stimulus, so long as some particular part of the nervous system is activated.
- Edgar Adrian, a British neurophysiologist, showed that all nerves operate by the same "energy" -- namely, electrical action potentials.
- Wilder Penfield, a pioneering Canadian neurosurgeon, showed that electrical stimulation of the various primary sensory projection areas led patients to have corresponding sensory experiences. They heard sounds when stimulated in the auditory cortex, saw colors and shapes when stimulated in the visual cortex, and felt touches when stimulated in the somatosensory cortex.
The implication is that the modality of sensation is not
determined by the proximal stimulus, nor by the sensory
receptors or even by the sensory nerves (as Muller thought),
but by the sensory projection area where the neural impulses
carried by the sensory nerves end up. It doesn't matter
where the impulse starts out -- it matters where it ends up.
Theoretically, if we could connect up the auditory nerve to
the visual projection area, we would see sounds rather than hear them.
And, in fact, not just in theory: Roger Sperry, an American neurophysiologist, studied certain species of amphibians whose optical pathways cross completely. Instead of the usual situation, where the left visual field of each eye projects to the right hemisphere, and vice-versa, the whole left eye projects to the right hemisphere (and vice-versa). Sperry observed that frogs would strike at small, bug-like objects presented to their left eyes. He then severed the optic nerves and regrew them so that the right eye connected to the right hemisphere, and vice-versa. After the regeneration, the animals actually reversed their responses -- they behaved as though the stimuli were on their right side, for example, when they were actually on the left. (Sperry won the Nobel Prize, in part, for these experiments.)
So, in the final analysis, it's the visual projection area that gives us the sensation of seeing -- not light waves, nor the rods and cones, nor the optic nerve. And it's the auditory projection area that gives us the sensation of hearing -- not sound waves, nor the hair cells in the cochlea, nor the cochlear nerve.
On this assumption,
here's how nine sensory modalities might be defined:
- In vision, the primary visual cortex, V1: Brodmann area 17 in the occipital lobe.
- In audition, primary auditory cortex, A1: Brodmann areas 41 and 42 in the temporal lobe.
- In gustation, primary gustatory cortex, G1: in the anterior insula and frontal operculum of the frontal lobe.
- In olfaction, primary olfactory cortex in the prepyriform cortex and periamygdaloid complex of the limbic system.
- In touch, the somatosensory cortex of the parietal lobe, S1, corresponding to Brodmann areas 1-3, with the location of touch given by activity in some portion of the somatosensory homunculus.
- In temperature, S1 again, showing the location of the cold, warm, or hot stimulus.
- In pain, S1 yet again, showing the location of the pain stimulus.
- In kinesthesis, S1 again, showing which body parts are in motion.
- And in equilibrium, the cerebellum (finally, we get out of S1!).
The definition of a sensory modality is very clear for vision and audition, and pretty clear for olfaction and gustation. But other modalities of sensation are a little more muddled.
The Skin Senses
As should be clear by comparing the pathways for touch, temperature, and kinesthesis, the skin senses are hopelessly complex. At least seven different types of nerve endings are found in the skin, but there is little isomorphism (one-to-one relation) between the type of receptor and the corresponding sensory experience when that receptor is stimulated electrically. To make things worse, neural impulses from all of these receptor types are transmitted along the same afferent tracts to the spinal cord and up to the brain, and all project to the same somatosensory cortex in the parietal lobe. So the puzzle is: how are the skin senses kept separate? There is a prize for the person who comes up with the best answer (it's awarded in Stockholm).
Pain is another problematic modality, in large part because the experience of pain depends so much on the meaning of the pain stimulus. As far as sensory pain is concerned, the critical projection area is in S1, the primary somatosensory cortex. But then again, even very bright lights and very loud sounds can be painful, and these stimuli register in the visual and auditory areas, respectively. So the projection area for sensory pain will correspond to the modality in which the sensation occurs. But suffering, which transcends any particular sensory modality, seems to be processed in the anterior cingulate gyrus and other structures in the medial frontal lobe. The point here is that pain is both a sensation and a perception.
Maybe what's needed is a different
approach entirely. A different take on the skin senses
was proposed by Melzack and Wall (1962), who noted that
there is not, in fact, a one-to-one correspondence between
the physiological and experiential aspects of
somesthesis. They criticized the classical definition of
the sensory modalities (or, at least, the skin senses) in
terms of specific receptor organs sensitive to specific
physical stimuli, which generated neural impulses which
traveled over specific afferent pathways to specific
projection areas in the brain. Following Hebb (1955), and
Skinner (1938), they criticized this conceptual nervous
system, which envisioned a kind of direct neural
channel from the skin to the brain. (Another example
of a conceptual nervous system is the reflex arc
introduced in the lectures on the Biological Bases of Mind
and Behavior: obviously, things are more complicated
than circuits of three neurons!). They acknowledged
that there are many different types of receptors embedded in
the skin, but they argued that these receptors do not
correspond to any particular modality (such as the tactile
or thermal sense), or any quality within a modality (such as
warmth or cold).
Melzack and Wall made the same argument with respect to the peripheral nerve fibers that connect the receptor organs to the central nervous system. The original idea was that the different modalities (or qualities) were associated with fibers of different size:
- "A" fibers have a relatively large diameter, and are surrounded by a myelin sheath: they conduct neural impulses at a fast velocity from one point to another. Traditionally, these were associated with touch -- although a subtype of A fiber, known as an A-delta fiber, was also associated with "fast" pain -- the sharp, immediately localizable pain you feel immediately after an injury.
- But they're not associated just with "fast"
pain. As the name "A-delta" implies, there are
several different subtypes of "A" fibers.
- Several subtypes of "A-beta" fibers carry information about various aspects of touch, such as vibration, movement, indentation, stretching, and the movement of longer body hairs.
- And even "A-delta" fibers respond to touch as well
as "fast pain", such as the movement of shorter body
- "C" fibers also have a small diameter, but they are
unmyelinated, and have a low conduction velocity.
C fibers were also associated with "slow", dull,
aching pain that long outlasts the immediate injury.
- But, as with "A" fibers, "C" fibers are not
association just with "slow pain".One subtype
of C fiber conducts information related to
temperature, while another conducts pleasurable
sensations, such as those that arise from a backrub.
- In case you're wondering, there are also "B" fibers: these are also myelinated, like A fibers, but have a relatively small diameter, and a low velocity of conduction. They are mostly associated with the autonomic nervous system, not somesthesis.
But careful investigation determined that these various fibers do not "monopolize" a particular modality or quality. For example, the C fibers also carry impulses generated by pressure, light touch, and itch. And the same has to be said of the central pathways that carry somesthetic impulses up the spinal cord to the brain. So, contrary to earlier speculation, the dorsal spinal tract is not uniquely associated with touch, and the lateral spinal tract is not associated uniquely with pain.
In an attempt to resolve this
situation, Melzack and Wall proposed that the modality of
skin sensation is determined by the specific temporal and
spatial pattern of neural impulses generated by a whole host of sensory receptors embedded in the skin. Each receptor
organ is sensitive to a particular range of stimuli.
Some receptors may be sensitive to warmth, but also to light
touch. Other receptors might be sensitive to touch and
the chemical changes that result from injury to the
skin. And so on. As they put it, "Every
discriminably different somesthetic perception is produced
by a unique pattern of nerve impulses" (p. 350).
Melzack and Wall (1965) elaborated their theory in their gate control theory of pain. Again, they examined the failures of specificity theory, and concluded that "pain" isn't transduced by any specific receptor, nor carried along any particular afferent tract. As they noted:
The stimulation of a single tooth results in the eventual activation of no less than five distinct brain-stem pathways. Two of these pathways project to cortical somatosensory areas I and II, while the remainder activate the thalamic reticular formation and the limbic system, so that the input has access to neural systems involved in affective as well as sensory activities (p. 976).
Instead, they proposed that pain is produced by neural impulses generated by the free nerve endings and other receptors, which are carried over both the large-diameter, myelinated A-delta fibers and the small-diameter, unmyelinated C fibers to the dorsal horn of the spinal cord. There they connect to transmission (T) cells that carry the impulses further toward the brain, and inhibitory cells which -- as their name implies -- create a kind of gate through which the pain signals must pass.
- Both the A-delta and C fibers excite the T cells.
- The A-delta fibers inhibit the inhibitory cells, "opening" the gate and allowing the pain signals to proceed.
- The C fibers excite the inhibitory cells, "closing" the gate and preventing the pain signals from going further.
- Because the C fibers carry more than just pain signals, pressure and touch can actually relieve pain; and so can warmth -- which is why we often rub the place where we've been hit; or apply a warm compress (or even a cold one).
- And this may also be why acupuncture works: the
low-level pain of the acupuncture needles is carried
over the C fibers, which close the gate to high-level
pain carried by the A-delta fibers.
- Another class of A fibers, known as A-beta fibers, which carry non-nociceptive signals such as touch, also excite the inhibitory cells, closing the gate.
- Again, this non-nociceptive stimulation can relieve pain.
- The gate can also be closed by "top-down" signals generated in the brain, permitting psychological modulation of pain, as in placebo effects or hypnosis.
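The push-and-pull among these fiber types can be made concrete with a toy numerical sketch. This is only an illustration of the account given above -- the activity levels and the simple subtraction are invented for the example, and this is not Melzack and Wall's actual quantitative model:

```python
# Toy sketch of the gate arrangement described above. Inputs are
# arbitrary activity levels (0.0-1.0); the arithmetic is invented
# for illustration, not taken from Melzack & Wall.

def t_cell_output(a_delta, c, a_beta, top_down=0.0):
    """Net pain signal passed toward the brain by the transmission (T) cells."""
    excitation = a_delta + c  # both nociceptive fiber types drive the T cells
    # The inhibitory interneurons form the "gate": A-delta input suppresses
    # them (opening the gate), while C-fiber, A-beta (touch), and top-down
    # input drives them (closing the gate).
    gate = max(0.0, c + a_beta + top_down - a_delta)
    return max(0.0, excitation - gate)

# Sharp injury with no counter-stimulation: the gate stands open.
print(t_cell_output(a_delta=0.8, c=0.2, a_beta=0.0))  # 1.0
# Rubbing the spot adds A-beta (touch) input, partly closing the gate.
print(t_cell_output(a_delta=0.8, c=0.2, a_beta=1.0))  # about 0.6
```

Raising the `top_down` input models the "top-down" closing of the gate described above, as in placebo effects or hypnosis.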
But what about itching?
The story begins with specific itch-inducing irritants,
known as pruritogens (pruritus is the medical
term for itching). These chemicals include histamine,
chloroquine, and cowhage. In response to a bug bite,
for example, the immune system attacks foreign chemical
bodies deposited by the insect. The immune system then
releases histamine, which stimulates specific receptors,
causing a sensation of itching. There are also
specific receptors for non-histaminic irritants, which also
cause itching. These are known as Mas-related
G-protein-coupled receptors (Mrgprs). What do
you do when you feel an itch? You scratch it,
that's what you do. And scratching damages the
epidermis, which stimulates the release of more histamine
and other pruritogens, making the situation even
worse. That's why a common treatment for itching is
antihistamines. But antihistamines don't always solve the
problem, because there are other pruritogens that
antihistamines don't affect. There was once a theory
that itching was just a mild form of pain -- or maybe a
special form of touch sensation. But the discovery of
pruritogens, which don't cause pain, and receptors which are
sensitive to pruritogens but not sensitive to pain, seems to
suggest that "itching" is a separate tactile modality from
touch, temperature, and pain.
For more on itching,
see "The Maddening Sensation of Itch" by Stephani
Sutehrland, Scientific American, May 2016.
And finally, there's time. We speak of people having a "sense of time", but is time really a sensory modality? And if it is, what is the proximal stimulus, and what is the projection area? Some neuroscientists claim to have identified a projection area for time in the basal ganglia, but there is almost certainly not a single projection area for time, any more than there is one for pain. On the other hand, we know that the body contains a fairly large number of biological clocks, beginning with the one that regulates the diurnal cycle of waking and sleeping. A classic study by Hoagland found that fever patients (actually, his wife) consistently underestimated the passage of time, suggesting that increases in body temperature speed up whatever clock resides in our brain. Similarly, time seems to pass quickly when we're excited, but slowly when we're bored. As with pain, our "sense" of time is probably better thought of as a matter of perception. For an engaging early discussion of the problem of time, see Robert Ornstein's On the Experience of Time (1969).
Where It Ends Up
So, although in principle, the modality of sensation is determined by four factors (proximal stimulus, receptor organ, sensory tract, and sensory projection area), in the final analysis the projection area is the most important. If the auditory nerve were connected to the visual cortex, we would see sounds instead of hearing them (we know this because direct electrical stimulation of the visual cortex produces visual sensations). Regardless of where the sensory impulse comes from, what matters is where it goes.
On that principle, maybe Aristotle got it right after all: there is only a single skin sense, with pressure, warmth, cold, and pain being qualities of tactile sensation. But let's not go there.
Considering the role of the proximal stimulus, receptor organ, afferent tract, and cortical projection area in sensation...
If a tree falls in a forest and no one
hears it, does it make a sound?
Here are some possible answers:
Who cares? (Shunryu Suzuki Roshi, founder of the Tassajara Zen Mountain Center, Big Sur, California)
Nine Ways of Knowing the World ...and More?
So, based on what we now know, there appear to be nine different ways of knowing the world: Aristotle's five -- seeing, hearing, tasting, smelling, and touching; two additional skin senses -- temperature and (cutaneous) pain; and Sherrington's two proprioceptive senses -- kinesthesis and equilibrium.
There may well be others. Recall that
Sherrington identified a category of interoception
that processes stimulation from internal tissues and organs,
such as the viscera and the blood vessels. We'll talk more
about interoception in the lecture supplement on Motivation,
where we cover basic biological motives like hunger and thirst.
A Magnetic Sense?
Studies of migratory animals, especially birds and fish but also some mammals, suggest that some animals respond to the earth's geomagnetic field. While many migratory animals respond to celestial cues, this magnetic sense may also serve as an aid to navigation.
- In one experiment, birds were exposed at sunset to a simulated geomagnetic field that was oriented east-west instead of north-south. When the birds were released after dark, they headed west instead of continuing on their usual migratory path toward the north. A control group, which was not exposed to the artificial magnetic field, proceeded north as usual. The next night, however, the birds in the experimental group headed north again, suggesting that light and magnetism interact, allowing the birds to use the sun (which moves on an east-west axis) to calibrate their internal magnetic compass.
- Another study showed that when a flock of birds (like ducks or geese) lands on a body of water, they usually land in alignment with magnetic north or south.
In fact, there may be two magnetic senses:
- A magnetic map sense that helps the animal orient on the (magnetic) north-south axis; this would help migrating animals head north or south.
- A magnetometer that senses the strength of the geomagnetic field, which gets stronger the nearer one is to the poles; this would tell migrating animals when to stop their journey.
If there's a magnetic sense, what's
the sense organ? That's still pretty much a mystery.
- Some investigators have reported that homing pigeons and rainbow trout -- two species known to respond to geomagnetic fields -- have, in their beaks (and also in their inner ears), structures that are lined with magnetite -- very small magnetic particles. These same structures are also densely populated with nerve endings, which could transmit information about changes in the orientation of these small magnetic particles to the nervous system. However, other investigators have disputed this specific claim, suggesting that the beak structures are actually macrophages -- products of the immune system, which often contain iron. While acknowledging that these and other animals seem to have a magnetic sense, the critics argue that the sensory receptors -- much less the afferent tracts and cortical projection areas -- haven't yet been identified.
- Another possibility is that there is a chemical in the birds' eyes that is sensitive to magnetic fields, effectively allowing them to "see" magnetic fields much the way they (and we) see light. Evidence for this comes from studies showing that, when landing, birds do not move their heads around, as they would if they were trying to avoid colliding with other birds in the flock. Instead, they tend to look straight ahead, as if they were "seeing" something much like an airplane pilot's "head-up" display.
Another controversial modality raises the question of extrasensory perception (ESP) -- the acquisition of information without mediation by the sensory system(s). ESP is thought to be manifested by telepathy (thought transference), clairvoyance (perception of objects that are not influencing sensory receptors), and precognition (perception of future events). It also covers action without mediation by effector systems, as manifested by psychokinesis (PK; manipulating an object without touching it).
These phenomena are often associated with the occult, and reflect beliefs in supernatural powers. But they are also of interest to scientists: (1) research may reveal previously unrecognized sensory modalities; and (2) even if not, the experience of telepathy, clairvoyance, and the like is psychologically interesting, whether or not claims of ESP are valid.
In fact, there is very little compelling evidence for either ESP or PK. Most ostensible psychics are in fact professional magicians or illusionists, and the trickery involved in their demonstrations can be detected by their professional peers. Moreover, most laboratory studies of ESP and PK in ordinary people don't effectively prohibit cheating (by either experimenter or subject), and are poorly controlled in other respects.
Even the best ESP/PK
research poses numerous empirical problems.
- There are numerous cases of cheating and trickery, or at least the opportunity for same.
- The size of the alleged effect decreases with increases in the rigor of the experimental test.
- There are huge individual differences in ESP/PK ability: only a very few people appear to be "sensitive".
- There is a tremendous lack of consistency even in the performance of "sensitive" subjects.
- The phenomena claimed by proponents of ESP/PK violate fundamental physical laws.
The evidence is not all in, and it is
best to keep an open mind, but when one removes outright
fraud, poor methodologies, and capitalization on chance,
there is very little phenomenon left to explain.
So there are nine ways of knowing the
world, each associated with its own neural system. But
these nine sensory modalities do not operate independently
of each other. Rather, information from the separate modalities is blended, and the various modalities influence one another.
Perhaps the clearest demonstration of this is known as the McGurk effect (McGurk & MacDonald, 1976; see Rosenblum, 2008).
- Subjects watch a video clip of a person pronouncing a distinct syllable, such as ga.
- At the same time, they listen to an audio clip of a person pronouncing a different syllable, such as ba.
- What the subject hears, however, is something else entirely -- the syllable da.
Thus, visual and audio information are
blended together to produce a sensation that is quite
different from either stream of stimulation.
The opposite phenomenon, in some
sense, is known as visual capture. If you
watch two actors conversing on a television set where the
sound comes from a single speaker, you still perceive the
voices as coming from different people (Thurlow & Jack,
1973). Ventriloquists use this trick too, which is why
visual capture is also called the ventriloquism effect.
In either case, sounds are perceived as coming from a
different direction than their actual source; and this
auditory effect is created by visual stimulation.
The Doctrine of Modularity sometimes
conveys the impression that the brain is like a Swiss Army
Knife, with lots of different tools contained in a single
case. And, in some respects, that's true. There
do seem to be different modules in the brain, each
specialized for a particular function. And, in some
sense, that is true of the sensory modalities as well.
But just as with the brain, the different pieces -- in this
sense, the different sensory modalities -- work together to
create a unitary sensory experience.
For more on sensory integration, see "A Confederacy of Senses" by Lawrence D. Rosenblum, Scientific American, January 2013.
The Sensory Qualities
Each sensory modality is associated with a unique sensory experience -- e.g., vision, taste, touch, and balance. We now know something about how these experiences are related to proximal stimuli, and how these stimulus energies are transduced into neural impulses by sensory receptors and carried over various sensory tracts to the several projection areas. But transduction is only the first problem of sensation.
Within each modality,
there are different sensory qualities.
- Some of these, like intensity, are universal -- they are found in all modalities (for a discussion of intensity, see the section below on Sensory Psychophysics).
- In the visual domain, intensity is referred to as brightness.
- In the auditory domain, intensity is referred to as loudness.
- Other qualities are modality-specific:
- for vision, hues of red, blue, etc.; brightness; and saturation;
- ("Color" refers to a combination of hue, brightness, and saturation.)
- for audition, high and low pitch; and timbre;
- for gustation, tastes like sweet and sour, etc.;
- for olfaction, odors like fragrant and putrid;
- and for touch, qualities like roughness or wetness.
- If Aristotle's right, and there's only a single sense of touch, then warmth, cold, and pain might be considered qualities of tactile sensation as well.
The Psychophysical Principle
The 19th-century psychophysicists began by attributing each quality of sensation to a specific proximal stimulus. Their research was guided by the General Psychophysical Principle: that each psychological quality of sensation is associated with a particular physical property. Accordingly, psychophysics was largely dedicated to tracing out these psychophysical correlations.
Here's how the psychophysical principle might work out, with respect to the two most commonly studied sensory modalities, vision and audition.
- Normally, a short-wave visual stimulus of 465 nanometers in wavelength gives rise to the sensation of pure blue;
- 495 nm, pure green;
- 570 nm, pure yellow;
- 700 nm, pure red.
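These pairings amount to a simple lookup table. Here is a minimal sketch using the approximate "pure" wavelengths from the list above; the nearest-neighbor rule is just for illustration, not a model of color vision:

```python
# The "pure" hue points listed above, in nanometers.
PURE_HUES = {465: "blue", 495: "green", 570: "yellow", 700: "red"}

def nearest_pure_hue(nm):
    """Name the pure hue whose wavelength is closest to the stimulus."""
    return PURE_HUES[min(PURE_HUES, key=lambda w: abs(w - nm))]

print(nearest_pure_hue(470))  # close to 465 nm, so "blue"
print(nearest_pure_hue(580))  # closer to 570 than to 700, so "yellow"
```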
Saturation is varied by adding gray to a particular hue. So, adding gray to pure red produces a pinkish sensation.
- "Middle C" on the piano corresponds to about 262 cycles per second;
- Second-space "C" on the bass clef, about 131 cps;
- Third-space "C" on the treble clef, about 523 cps.
Note that an interval of an octave doubles the frequency.
Modern orchestras tune to "A-440", second-space A on the treble clef, which corresponds to a frequency of 440 cps.
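The octave rule is easy to verify with a little arithmetic. A sketch, assuming the equal-tempered value of about 261.63 cps for middle C (the text's round figures of 262, 131, and 523 fall out directly):

```python
# Each octave doubles the frequency; each octave down halves it.
middle_c = 261.63  # equal-tempered middle C, in cycles per second

def c_in_octave(n):
    """Frequency of C, n octaves above (positive) or below (negative) middle C."""
    return middle_c * 2 ** n

print(round(c_in_octave(0)))   # middle C: 262 cps
print(round(c_in_octave(-1)))  # second-space C, bass clef: 131 cps
print(round(c_in_octave(1)))   # third-space C, treble clef: 523 cps
```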
Timbre is related to the shape, or complexity, of the sound wave.
- A flute produces a fairly "pure" sound wave, with a fundamental but not too many overtones.
- A reedy instrument like an oboe has a very complex sound wave, consisting of the fundamental plus a rich series of overtones.
Based on this simple psychophysical principle, then, we could speculate that the quality of a sensory experience is determined by the physical properties of the stimulus which impinges on the sensory receptors.
But we know that that simple principle doesn't work to determine the modality of sensation. Rather, according to Muller's Doctrine of Specific Nerve Energies, the modality of sensation is determined by the particular neural structures that process the transduced sensory input. It turns out that this is also the case when it comes to the qualities of sensory experience.
Doctrine of Specific Fiber Energies
In fact, Hermann von Helmholtz, a 19th-century German physiological psychologist (and Muller's student), proposed an extension of Muller's doctrine, which we may call the Doctrine of Specific Fiber Energies. Just as each modality has its own associated neural structures, so Helmholtz proposed that each quality is differentiated in a similar manner. There may well be particular psychophysical relationships, just as vision is normally stimulated by light waves and hearing by sound waves. But Helmholtz also argued that there are separate neural structures corresponding to each different quality of sensation, and that these are decisive. As with the modality of sensation, in the final analysis what distinguishes among the various qualities of sensation are the particular neural structures which are activated -- not the proximal stimulus itself.
Helmholtz' proposal stimulated a search for these separate, quality-specific "energies", and the particular receptors that transduce them into neural impulses. But a problem presents itself almost immediately: in most modalities there are literally thousands of discriminably different qualities.
You get a sense of how the doctrine worked from Helmholtz's own early example, auditory pitch. We have already seen that vibration of the tympanic membrane
sets up sympathetic vibrations of the basilar membrane
against hair cells. Looking through a microscope, he
determined that the basilar membrane actually consists of a
large number of fibers stretched across a frame, not unlike
a harp or a piano. These fibers vary in length and
thickness. Accordingly, Helmholtz suggested (in his
monograph On the Sensations of Tone, 1863) that
each frequency causes a different fiber to vibrate against
its corresponding hair cell. Complex stimuli set a number of
different fibers into vibration, corresponding to the
fundamental frequency, overtones, etc. This is now known as
the place theory of pitch.
Helmholtz's speculations were supported by research by Georg von Bekesy (1960), who observed that high-frequency tones produced the strongest vibrations in portions of the basilar membrane nearest the eardrum, while lower-frequency tones produced the strongest vibrations near the tip of the cochlea. This appeared to confirm that pitch is coded by the place of maximal stimulation, and Bekesy won the 1961 Nobel Prize in Physiology or Medicine.
Helmholtz's theory of pitch perception has played an important role in theories of musical harmony. The English novelist George Eliot made use of it in her last novel, Daniel Deronda (1876). See J.M. Picker, Victorian Soundscapes (2003).
However, the pure place theory has problems, not the least of which is that, at low frequencies, the basilar membrane vibrates as a whole, at a frequency roughly corresponding to the frequency of the stimulus.
- At low frequencies, the frequency principle applies. This involves a direct translation of the frequency of the sound wave to the frequency of vibration of the basilar membrane. Pitch, then, is coded by the frequency of the neural impulse. But periodicity can't work for frequencies much above 1000 cps, because of ceilings on the frequency of the neural impulse imposed by the refractory periods of the neurons involved.
- Above 5000 cps, the place principle takes over, in which certain portions of the basilar membrane vibrate more strongly than others.
- Between 1000-5000 cps, both principles operate -- which is probably why pitch discrimination is so good in this range. To compensate for the fact that neural impulses can't fire at more than about 1,000 cps, yet another factor comes into play: the volley principle proposed by Wever, according to which neighboring neurons take turns firing, so that one neuron is firing while another is in its refractory period. (Think about military battles of the 18th and 19th centuries, before the invention of repeating firearms, where one rank of soldiers fired their muskets, and then knelt down to reload while the next rank rose up and fired. Or, for that matter, medieval archers.) Thus, pitch is coded not by the frequency with which the basilar membrane vibrates, or the frequency with which individual hair cells stimulated by the basilar membrane fire, but by the overall, or aggregate, frequency of neural impulses.
Wait a minute: that's three principles, whereas the term "duplex theory" implies that there should be only two. But if you think about it for a second, there are really only two, because the frequency principle and the volley principle are the same principle: the frequency of stimulation is coded by the frequency of the resulting neural impulse (at low frequencies, the neural impulse is generated directly; at higher frequencies, by means of volleys). That, and the place principle, makes up the "duplex".
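The division of labor under the volley principle can be sketched as a simple counting exercise. This is an illustration only, not a model of real neurons; the numbers are chosen to match the roughly 1,000-cps ceiling mentioned above:

```python
# Successive cycles of the stimulus are carried by neurons in rotation,
# so no single neuron has to fire on every cycle.

def volley(n_cycles, n_neurons):
    """Assign stimulus cycles to neurons in rotation; return each neuron's firing count."""
    counts = [0] * n_neurons
    for cycle in range(n_cycles):
        counts[cycle % n_neurons] += 1
    return counts

# 30 cycles of a 3000-cps tone (10 ms of sound), shared among 3 neurons:
counts = volley(n_cycles=30, n_neurons=3)
print(counts)       # each neuron fires only 10 times (i.e., at 1000 cps)
print(sum(counts))  # but the aggregate volley tracks all 30 cycles
```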
Later on, Helmholtz applied the Doctrine of Specific Fiber Energies to color vision (in his Physiological Optics, which was published serially from 1856-1866, and appeared complete in 1867).
Consider all the different color patches that may be found at a paint store: as of 2006, Pantone, the color-standards company favored by upmarket interior designers, offered a color wheel with 3,039 specific hues, including about 300 shades of blue. Or, for that matter, consider the "millions of colors" that can be produced on a high-quality computer screen. There are probably more colors than can actually be resolved by the human eye (although the human eye is pretty good at this, being able to distinguish among 7 million different shades of color produced by unique combinations of hue, brightness, and saturation). Brightness has to do with how light or dark a color is; saturation has to do with the purity of the hue.
Just to illustrate the infinite variety of colors that can be produced by combining hue, brightness, and saturation, every year Pantone releases a "Color of the Year" (a competing "Color of the Year" is advertised by the Color Marketing Group: interestingly, theirs doesn't match up with Pantone's!).
Here's an interesting exercise. Dig out your favorite photo-processing software (I use Photoshop), go to the color swatches, and try to reproduce these colors by blending red, green, and blue -- just like your eye does. Make some adjustments, as an experiment, and see what happens to the color.
For 2010, it was "Turquoise".
- In the RGB system, "Turquoise" is obtained with R=68, G=184, and B=172.
For 2011, it was "Honeysuckle".
- In the RGB system, "Honeysuckle" is obtained with R=214, G=80, and B=118.
For 2012, it was "Tangerine Tango".
- In the RGB system, "Tangerine Tango" is obtained with R=221, G=65, and B=36.
For 2013, it was "Emerald".
- In the RGB system, "Emerald" is obtained with R=0, G=152, and B=116.
For 2014, it was "Radiant Orchid".
- In the RGB system, "Radiant Orchid" is obtained with R=177, G=99, and B=163.
For 2015, it was "Marsala".
- In the RGB system, "Marsala" is obtained with R=150, G=79, and B=276.
For 2016, the Color of the Year was actually a pairing of two colors, "Rose Quartz" and "Serenity".
- In the RGB system, "Rose Quartz" is obtained with R=247, G=202, and B=201.
- In the RGB system, "Serenity" is obtained with R=146, G=168, and B=209.
For 2017, the Color of the Year was "Greenery", which some observers noted is the color of the marijuana leaf!
- In the RGB system, "Greenery" is obtained with R=136, G=176, and B=75.
A 2012 consumer survey identified opaque couché, a combination of dark brown and olive green (technically, Pantone 448C; in the RGB system, it's coded as 74/65/42), as the "ugliest color in the world" (frankly, it looks like...). In 2016, it was used on cigarette packages in Australia and the United Kingdom, in an attempt to deter people from smoking. Coming soon, maybe, to a cigarette pack near you?
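For readers who want to try these colors in photo-editing software, the RGB triples listed above convert directly to the hexadecimal codes used by web browsers and color pickers. A minimal sketch:

```python
# Convert an RGB triple (each value 0-255) to the familiar "#RRGGBB" hex code.
def rgb_to_hex(r, g, b):
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(rgb_to_hex(68, 184, 172))   # "Turquoise" (2010)
print(rgb_to_hex(0, 152, 116))    # "Emerald" (2013)
print(rgb_to_hex(247, 202, 201))  # "Rose Quartz" (2016)
```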
So where does color, or more properly hue, come from?
In the 19th century, Thomas Young discovered that visual hue was related to the wavelength of the light stimulus, as described above (Young also helped decipher the Rosetta Stone). Shorter wavelengths are associated with violet and blue, longer wavelengths with red. So, one solution to the problem of color vision would be to have a different receptor for each particular wavelength -- somewhat analogous to Helmholtz's original place theory of pitch perception. But that won't work, for the simple reason that there are too many wavelengths! Do the math: in order for every 1 nm of wavelength of light in the visible spectrum to stimulate a different receptor, there would have to be (at least) 400 different receptors; to make things worse, since we can see any color at any point in space, all 400 receptors would have to be located at each point on the retina. The retina is just not big enough.
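The back-of-the-envelope count is easy to reproduce. The 380-780 nm bounds below are one common convention for the visible spectrum (sources vary):

```python
# One receptor type per 1-nm step across the visible spectrum.
low_nm, high_nm = 380, 780        # assumed bounds of visible light
receptor_types = high_nm - low_nm  # one type per nanometer

print(receptor_types)  # 400 receptor types...
# ...and every one of them would have to be duplicated
# at every point on the retina.
```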
Rather, there must be some set of elementary colors, with specific receptors sensitive to each; and the remaining colors must represent blends of these basic elements. What might these basic colors be?
One answer: Red, Orange, Yellow, Green, Blue, Indigo, and Violet -- the acronym "ROY G BIV" you learned in elementary school. When you pass white light through a prism, it breaks up into these colors. Passing sunlight through raindrops has the same effect, yielding rainbows; in fact, the English poet John Keats accused Newton of "unweaving the rainbow".
Actually, we now recognize only six primary colors. Newton wanted there to be seven, so that the colors of the rainbow would parallel the notes of the diatonic musical scale (C-D-E-F-G-A-B) and the number of known planets (he didn't know about Uranus or Pluto), so he made a somewhat arbitrary distinction between indigo and violet. These days, color scientists tend to speak only of violet, or perhaps purple. Long before Newton, Aristotle asserted that there were just four primary colors, each related to one of the four elements of Greek physics -- earth, air, water, and fire.
If we mix all the colors together, we get white again; if there is an absence of light, we get black.
Thus, sticking with Newton's count for purposes of exposition, perhaps there are seven color receptors, each sensitive to a particular segment of the visible spectrum corresponding to pure versions of the seven primary colors. Stimulation of one of these receptors would yield one of the primary colors. Stimulation of two or more would yield various blends of color. Stimulation of all at once would produce white, while the absence of stimulation would produce black.
Actually, anatomical evidence indicates that there are two kinds of receptors in the retina:
- Rods are sensitive only to the presence of light: stimulation of rods produces white and gray; no activity produces black.
- The cones are sensitive to different wavelengths of light, and stimulation of them produces different hues.
So, maybe there are seven different cones, each corresponding to one of the seven primary colors. Maybe, but other considerations narrow down the number of cones that are needed to produce color vision.
For example, it appears that four colors are psychological primaries:
red, yellow, green, and blue.
Orange and violet are usually perceived as mixtures (i.e., people can identify some combination of red, yellow, green, and blue in them).
In fact, these four colors are also
physical primaries, meaning that any hue can be
produced as a combination of them.
Impressionism, the Bauhaus School, and Color Theory
The French school of painting known as Impressionism was influenced by scientific advances in understanding sensation and perception -- and in particular color theory as it evolved in the hands of Helmholtz, Hering, and others. There are good grounds for supposing that the Impressionists took an active interest in the emerging scientific understanding of color perception, and deliberately employed certain techniques to take advantage of these principles.
An early example is The Potato Harvest, painted by Camille Pissarro (1874; private collection). It's typical, if early, Impressionism: the brush strokes visible, not a lot of detail; a glimpse, an impression. What's interesting in the present context is the way color is handled. The peasant woman's apron, for example, is orange and blue, which are complementary colors; so are the colors of her blouse, yellow and violet. The opponent-process mechanism that is responsible for color perception makes these colors even more vivid than they would ordinarily be. Pissarro wasn't just painting what he saw, which is our usual, and conventional, take on Impressionism. Rather, he has taken his knowledge of color theory and used it to reflect how the perceiver sees color. [Source: R. Brettell, From Monet to Van Gogh: A History of Impressionism (2002).] Link to further commentary from the Museum of Modern Art.
The French impressionist painter Georges Seurat also used additive mixture in his pointillist paintings, such as "Sunday Afternoon on the Island of La Grande Jatte -- 1884" (but painted in 1886-1888; now in the Art Institute of Chicago). It's additive mixture which interests us here, because it turns out that the eye does something like this as well. We know that he was greatly influenced by physiological and psychological theories of color perception offered by Helmholtz and Hering: as he is supposed to have said, color is mixed in the eye, not on the palette.
Pissarro, Seurat, and other Post-Impressionists justified their "pointillist" technique by referring to the emerging understanding of how the eye worked: the rods and cones in the retina create little "points" of light. But, as Brettell tells the story, the relationship of vision science to "Sunday Afternoon" was a little post-hoc (and he should know, as he was the curator in charge of the painting for eight years at the Art Institute of Chicago). In fact, the painting, like every other painting up to that time, was originally executed in strokes of paint, not dots. The dots were applied over the brushstrokes, so that the colors would mix "in the eye", not on the palette or the canvas. (Later, Seurat added more dots in an experimental pigment intended to brighten the canvas. But the experiment failed, and the new pigments eventually faded, actually dulling the appearance of the painting.)
For detailed information on Seurat's technique, see the catalog for Seurat and the Making of "La Grande Jatte", an exhibition at the Art Institute of Chicago, 2004 (also "How Seurat Worked Up to Sunday" by Holland Cotter, New York Times, 08/20/04). Stephen Sondheim's musical, Sunday in the Park with George (1984), is inspired by this painting: Seurat's love interest in the play is named "Dot".
Color theory also played an important role in the work of the Bauhaus school of architecture, arts, crafts, and design that operated in Germany from 1919 to 1933. Teaching at the Bauhaus was organized around the Vorkurs, or preliminary course, that all students were required to complete before going on to more advanced work. One of the teachers of this course was Johannes Itten, who devised this spherical representation of the classic color circle to teach students the fundamentals of color theory and composition: Color Sphere in 7 Light Values and 12 Tones (lithograph, 1921).
"The Bauhausler (as the school's faculty and students were called) adored charts and diagrams of all sorts, and Itten's construct suggests both a Buddhist mandala and the Periodic Table of Elements. The upper three quarters of Color Sphere are dominated by a twelve-pointed, multicolored star inscribed within seven concentric circles, flattened like an orthographic projection of the globe; if cut out and assembled, the conjoined segments would form a sphere. The bottommost portion of this two-part composition is devoted to a grid of rectangles arranged in the color order of the light spectrum, from red at one end to violet at the other." [Martin Filler, "The Powerhouse of the New", reviewing a retrospective exhibit at the Museum of Modern Art, New York, in the New York Review of Books (06/24/2010), from which the illustration is taken.]
The contemporary German artist Gerhard
Richter has done a number of paintings on the theme of
color, including 256 Farben (256 Colors), which
you can see in the Fisher Collection housed at the San
Francisco Museum of Modern Art (he's also done a couple
depicting 1024 and even 4096 colors).
The modern American artist Ellsworth Kelly made a series
of eight collages, entitled "Spectrum Colors Arranged by
Chance I to VIII", each collage employing a subset of 18
hues. Spectrum V can be viewed at the Metropolitan Museum
of Art in New York City.
Michel Pastoureau, a French medievalist who specializes in the study of heraldry, is writing a series of books devoted to the "histories" of various colors:
- Blue: The History of a Color (2001)
- Black: The History of a Color (2008)
- Green: The History of a Color (2014)
These volumes are to be followed by subsequent histories
of red and yellow. At present, he has no plans for
treatises on the other primary colors, or white.
When subjects are asked to make similarity judgments about colors, the results can be presented in a color circle in which similar colors are adjacent to each other, and complementary colors (those which when mixed additively produce gray) are opposite to each other. Thus, beginning at 12:00 and moving clockwise, red is followed by orange, with yellow at 3:00, green at 6:00, blue at 9:00, and then indigo. Note that orange is produced by additively mixing red and yellow, while blue and red in various proportions produce indigo and violet. The color solid adds brightness and saturation to represent each of the 7 million distinct colors.
The color circle and color solid are important because they show that we only need four primary hues -- red, yellow, green, and blue -- along with brightness and saturation, to produce all 7 million visible colors. That means we only need four types of cones after all, each sensitive to red, yellow, green, or blue. This is a technique employed by artists who mix colors on the palette from a relatively small set of basic pigments. And, in fact, you only need three primary colors:
Television, and the computer you're using right now, rely on the RGB system to produce all those millions of colors. Technically, artists and theater lighting designers produce colors by subtractive mixture, while televisions and computers produce them by additive mixture.
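Additive mixture is easy to sketch in code (the function name is mine, for illustration): lights add channel by channel, capped at the display's maximum intensity of 255. Subtractive mixture, as with pigments, removes wavelengths instead and follows different rules.

```python
def mix_additive(*lights):
    # Sum each RGB channel across the lights, capping at 255.
    return tuple(min(255, sum(channel)) for channel in zip(*lights))

red   = (255, 0, 0)
green = (0, 255, 0)
blue  = (0, 0, 255)

print(mix_additive(red, green))        # (255, 255, 0) -- yellow
print(mix_additive(red, green, blue))  # (255, 255, 255) -- white
```

Note that red light plus green light yields yellow -- the additive result -- whereas red and green pigments mixed subtractively yield a muddy brown.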
All of this gets accounted
for in the trichromatic theory of color vision,
originally proposed by Thomas Young in 1802 (Young also
proposed the wave theory of light, and translated the
Rosetta stone) and by Hermann von Helmholtz in 1866 (in his
Treatise on Physiological Optics, 1856-1867). The
trichromatic theory, also known as the Young-Helmholtz
theory, gets its name from the idea that there
are three types of cones, each maximally sensitive to one of
three primary colors:
- "red" (sensitive to long wavelengths),
- "blue" (sensitive to short wavelengths), and
- "green" (sensitive to medium wavelengths).
Any light stimulus activates each of these receptors to different degrees, depending on the mix of wavelengths in it. Pure blue light would stimulate the blue short-wave cone, but not the long-wave cone (or not very much). Purple light would stimulate both the short-wave and the long-wave cone.
Technically, color sensations aren't based strictly on the wavelength of light that falls on the retina. Brightness and saturation also make a difference, as does the surrounding illumination. These facts can combine to create a number of illusions of color. But we're going to ignore them for purposes of exposition, and just focus on the relation between the wavelength of the physical stimulus and the corresponding sensation of color.
But so far, all this is logic. We've shown on physical and logical grounds that you could get color vision from just four color receptors, but we haven't shown that this is how the visual system actually does it. But would we have gone through all this if it didn't? Naaaahhhh. Still, it's important to look at the psychological evidence.
First, consider that
color blindness comes in two principal forms:
- the loss of red and green (this is most common);
- the loss of blue and yellow.
Total loss of color, known as achromatopsia, is relatively rare.
This suggests that red and green are coded by the same receptor system, which in turn is different from the one that codes yellow and blue. If red, green, yellow, and blue were each coded by separate receptor systems, then we would expect other combinations of color blindness -- loss of red but not green, for example.
Actually, there are two
forms of red-green colorblindness, but these have almost
indistinguishable effects on color vision.
- Protanopia, loss of the pigment associated with long wavelengths ("red vision");
- Deuteranopia, loss of the pigment associated with medium wavelengths ("green vision").
Blue-yellow colorblindness is known as tritanopia, and reflects loss of the pigment associated with short wavelengths ("blue vision").
The clincher comes from the phenomenon of negative afterimages. If you stare at a color patch (the more saturated the better) for a while, and then shift your gaze to a white or light-gray surface, you will see its complementary color (if you shift your eyes around the room, you will take your afterimage with you until it dissipates: it's in your eyes, not on the surface). Thus, staring at red produces a negative afterimage of green, and vice-versa; staring at yellow produces a negative afterimage of blue, and vice-versa.
The American pop and "neo-Dadaist" artist Jasper Johns uses negative afterimages to great effect in his paintings of targets and American flags.
Some of these explicitly refer to negative afterimages. Note, in "Target" and "Flags", the presence of fixation points in the left and right, or top and bottom, portions of the painting. Johns once characterized his oeuvre as an exploration of "how we see and why we see the way we do" (Jasper Johns: Seeing with the Mind's Eye, exhibition at the San Francisco Museum of Modern Art, 2012-2013). Like Seurat, Johns was greatly influenced by the psychology of visual sensation and perception. We'll see more of his work later.
Actually, and importantly, you can get achromatic afterimages as well: staring at a drawing of black ink on white paper will (if you do it right) produce a white-on-black afterimage.
But where does yellow come from? And the afterimages? Theoretically, yellow could come from a mixture of red and green, and this is what Young and Helmholtz thought, but the problem is that yellow is perceived as a pure color, not a mixture (remember the psychological primaries?). And the trichromatic system, itself, can't produce negative afterimages.
To cope with
these problems, Leo Hurvich and Dorothea Jameson, at the
University of Pennsylvania, have proposed the
opponent-process theory of color vision (this is
actually a more sophisticated version of a theory originally
proposed by the 19th-century German physiologist Ewald
Hering). The opponent-process theory holds that neural
impulses arising from the rods and cones excite six
neural processes (localized in the retina, optic
nerve, and lateral geniculate nucleus) which are themselves
arranged into three opposing pairs:
- red/green,
- blue/yellow, and
- black/white.
Each member of the pair
serves as an antagonist of the other, so
that excitation of one member inhibits the other.
Thus, pure red light stimulates the long-wave cone, which
sends neural impulses to the red/green neural system,
exciting the red element but suppressing activity in the
other. When the red light is turned off, excitation of the
red element ceases, and so does the inhibition of the
opposing green element; the disinhibition of the
suppressed green element produces a green negative
afterimage. The same thing happens with stimulation by blue
light, which when it ceases produces a yellow afterimage;
and stimulation by white light, which when it ceases produces
a black afterimage.
- The opponent-process theory of color vision, initially proposed by Hering and later confirmed experimentally by Hurvich & Jameson, begins by accepting Helmholtz's idea that there are three color receptors (cone types) in the retina, along with one type of rod.
- Output from the cones is further processed by another set of structures which are organized into antagonistic pairs (not unlike muscles and tendons, or the sympathetic and parasympathetic branches of the autonomic nervous system).
- One opponent process consists of a red-green pair.
- Stimulation by medium-wavelength light activates the green element and inhibits the red element.
- Stimulation by short- and long-wavelength light activates the red element and inhibits the green element.
- Another opponent process consists of a blue-yellow pair.
- Stimulation by short-wavelength light activates the blue element and inhibits the yellow element.
- Stimulation by medium- and long-wavelength light activates the yellow element and inhibits the blue element.
- And, finally, there is a light-dark pair.
- The presence of light stimulates the light element and inhibits the dark element.
- The absence of light stimulates the dark element and inhibits the light element.
- So, while there are only 3 kinds of cones, as Helmholtz proposed, there are actually four color elements, arranged in antagonistic pairs.
- The experience of color is produced by mixtures from these four elements.
- Which is why yellow is perceived as a pure color, not a mixture.
- When the stimulus is terminated, the previously activated element in each pair is inhibited, and its antagonist is disinhibited.
- Thus, turning off a medium-wavelength light replaces the experience of green with the experience of red.
- Turning off a long-wavelength light replaces the experience of red with the experience of green.
- Turning off a short-wavelength light replaces the experience of blue with the experience of yellow.
- Turning off a medium-to-long wavelength light replaces the experience of yellow with the experience of blue.
- Turning off light entirely replaces the experience of brightness with the experience of darkness, and vice-versa.
- The result of this disinhibition is chromatic (colored) and achromatic (black-and-white) negative afterimages.
- The fact of negative afterimages shows that sensory experience isn't determined by the proximal stimulus (wavelength) or the receptor organ (rods and cones), but by specific neural systems. We now know that these opponent processes are located in the lateral geniculate nucleus of the thalamus.
- In principle, if we could change the connections between the cone elements and the opponent processes, we'd see short-wavelength light as red and long-wavelength light as blue.
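The wiring just described can be sketched numerically. In this toy model the channel weights are my own illustrative assumptions, not measured values: three cone outputs feed the three opponent pairs, a positive value signals one member of each pair, a negative value its antagonist, and the sign flip at stimulus offset corresponds to the rebound that produces the negative afterimage.

```python
def opponent_channels(S, M, L):
    # S, M, L: activation of short-, medium-, and long-wave cones (0..1).
    # Weights are illustrative assumptions, not physiological measurements.
    red_green   = (S + L) / 2 - M    # positive = red, negative = green
    blue_yellow = S - (M + L) / 2    # positive = blue, negative = yellow
    light_dark  = (S + M + L) / 3    # positive = light
    return red_green, blue_yellow, light_dark

# Pure long-wavelength ("red") light:
rg, by, ld = opponent_channels(S=0.0, M=0.1, L=1.0)
print(rg > 0)   # the red element dominates its pair
# At offset, the rebound reverses the sign of rg, modeling the
# green negative afterimage.
```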
The antagonistic relationship between
red and green, and between blue and yellow, means that,
ordinarily, we cannot see red and green, or blue and yellow,
simultaneously. However, under special circumstances
people can see "forbidden" colors such as greenish
red, or yellowish blue, etc. This phenomenon, while
difficult to observe and replicate, has led to some
revisions in the opponent-process theory of color vision --
revisions that are too technical for an introductory
course. If you're interested, see "Seeing Forbidden
Colors" by Vincent A. Billock and Brian H. Tsou, Scientific American.
The opponent-process theory explains why colorblindness comes in the various forms that it does. The red and green processes are tied together, so that a weakness in one creates a weakness in the other as well.
The experience of color is determined by the brain, not by the stimulus or the receptor.
Thus, the 7 million colors which we can see are produced by the stimulation of rods and three kinds of cones in the retina, which in turn activate and inhibit three color systems located elsewhere in the neural system supporting vision. But where? The best evidence to date, based on research by Russell and Karen DeValois, another husband-and-wife pair of vision scientists, this time working at UC Berkeley, is that the opponent-processes themselves are located in the lateral geniculate nucleus.
More on Color Blindness
The 18th-century English chemist John Dalton, who was himself color blind, provided the first systematic analysis of color confusions in this condition. He himself could not see red, which looked to him like a shade of mud (this was also true of his brother). Dalton attributed his color blindness to a bluish tint to the vitreous humor inside his eyeball; this would, indeed, screen out red hues. Of course, Dalton had no way of knowing whether this was in fact the case. After his death, and following his instructions, his eyeballs were autopsied to determine whether the vitreous humor was in fact blue. It was not. Now we know that color vision is due to visual pigments in the cones which are differentially responsive to short, medium, and long wavelengths of light, and to the red-green and yellow-blue opponent processes farther on in the visual system. It turns out that the visual pigments are under genetic control (red-green color blindness is due to the absence of the corresponding genes, which are ordinarily found on the X chromosome). Recent DNA analysis of Dalton's tissue, which has been preserved since his death in 1844, reveals that he lacked the gene for the red-sensitive pigment (see Hunt et al., Science, 267, 984-988).
Why is red/green more vulnerable to loss
than yellow/blue? We don't know, exactly, but there are
some theories. According to a biological principle known
as Jackson's Law, phylogenetically older systems are less
vulnerable to damage. So, perhaps, the yellow/blue system
evolved earlier than the red/green system, and so is less
vulnerable to damage. Of course, that's circular
reasoning, but in fact there are reasons for thinking that
this is actually the case. Fortunately, those reasons are
beyond the scope of this course (though we'll see an
interesting variant on this later, when we talk about the
Whorfian hypothesis of the relationship between thought and language).
This is the essence of a theory of color
vision proposed by Christine Ladd-Franklin, a pioneering
woman psychologist. Like other early female
psychologists, Ladd-Franklin was denied both a PhD and a
faculty appointment on account of her sex.
Nevertheless, she undertook graduate studies at Hopkins,
worked in the laboratories of both Müller and Helmholtz,
and lectured widely. Her studies of vision, and
especially color vision, were published beginning in 1892.
- Ladd-Franklin accepted the essence of Helmholtz's trichromatic theory of color vision, which was that any visible hue could be produced by some combination of red, green (or yellow), and blue.
- But she also accepted Hering's argument that the visible spectrum consists of all four of these chromatic elements -- plus white, the "achromatic" sensation of color.
- Taking a leaf from Darwin's book, Ladd-Franklin argued that the tetrachromatic color sense was a product of evolution by natural selection.
- During the "Carboniferous" period, about 300-350 million years ago, there were no colored plants or animals, and so sensitivity to color was no aid to survival.
- The Cretaceous period, about 66-145 million years ago, saw the emergence of colored flowers -- as well as bees, which are sensitive to only two colors, yellow and blue.
- Later, in birds and mammals, sensitivity to yellow differentiated into a sensitivity to red and green.
- The end-point of evolution, then was a tetrachromatic color sensitivity, arranged -- as Hering had argued -- into two opposing pairs:
- Red-Green; and
- Blue-Yellow.
- The theory explains the different incidence of color blindness in terms of Jackson's principle, that those processes which evolved earliest are most resistant to loss through brain damage. Thus:
- red-green colorblindness is more common than yellow-blue colorblindness; and
- achromatopsia, or the total loss of color
sensitivity, is the rarest of all.
Opponent Processes in Lateral Inhibition
An interesting variant on opponent processes occurs in
lateral inhibition, which is a physiological basis for brightness contrast.
Imagine an array of nine light-sensitive cells:
| | | | | | | | |
^ ^ ^ ^ ^ ^ ^ ^ ^
X Y Z A B C D E F
A bar of bright light is shined on the retina, in such a way as to illuminate cells A-C, while cells X-Z and D-F remain in dim light. Ordinarily this would activate cells A-C strongly (say, 10 units each), but cells X-Z and D-F would be less activated (say, 1 unit each). And without lateral inhibition, cells A-C would all be activated to the same extent. The result would be that the observer would perceive a bar of light against a darker background.
But there is lateral inhibition, such that activation of a cell covering one region of space inhibits activation of cells covering adjacent regions -- that's the "lateral" in lateral inhibition. So:
- Z isn't very active, so doesn't inhibit A.
- Activation of A inhibits B, but not Z (which has
only a little activation to begin with).
- So, suppose activation of B goes down by 1 unit, to 9.
- Activation of B inhibits A and C.
- So activation of A goes down to 9.
- And activation of C goes down to 9 as well.
- Activation of C inhibits B, but not D (which has
only a little activation to begin with).
- So activation of B goes down another 1 unit, to 8.
- D isn't very active, so it doesn't inhibit C.
Therefore, summing, both A and C are less inhibited than
B is, because B gets a double whammy from A and C. And Z
gets a hit from A, while D gets a hit from C.
The result is to sharpen the border between dark and light regions of space.
- The region covered by A will appear brighter than the region X-Z to its left, and than the region covered by B to its right.
- And the region covered by C will appear brighter than the region D-F to its right, and than the region covered by B to its left.
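The nine-cell example can be simulated directly. In this sketch, each cell's output is its raw activation minus a fixed fraction of each neighbor's raw activation; the 10% inhibition strength is an arbitrary illustrative choice, so the exact numbers differ slightly from the walkthrough above, but the edge-enhancement pattern is the same.

```python
def lateral_inhibition(act, k=0.1):
    # Each cell loses k times each neighbor's raw activation;
    # negative outputs are clamped to zero.
    out = []
    for i, a in enumerate(act):
        left  = act[i - 1] if i > 0 else 0
        right = act[i + 1] if i < len(act) - 1 else 0
        out.append(max(0.0, a - k * (left + right)))
    return out

#        X  Y  Z  A   B   C   D  E  F
raw = [1, 1, 1, 10, 10, 10, 1, 1, 1]
print([round(x, 2) for x in lateral_inhibition(raw)])
# -> [0.9, 0.8, 0.0, 8.9, 8.0, 8.9, 0.0, 0.8, 0.9]
```

A and C end up brighter than B (which gets the "double whammy" from both bright neighbors), while Z and D are driven down below their dim surround: the border between light and dark is sharpened.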
Opponent Processes in Motivation
The black-white, red-green, and yellow-blue opponent processes may remind you of the antagonistic relations between muscles and tendons in the skeletal musculature, or between the sympathetic and parasympathetic divisions of the autonomic nervous system, the general-adaptation syndrome, the dual-center theory of hunger, and the relations between excitation and inhibition in classical conditioning. The opponent-process theory of color vision was the inspiration for Richard Solomon's opponent-process theory of acquired motivation, which attempts to account for the temporal dynamics of affect (Solomon was a close colleague of Hurvich and Jameson's at Pennsylvania). There are lots of other analogies: opponent processes are a very popular mechanism in the nervous system and elsewhere in the body.
More on Seeing Color
For an interesting discussion of the various ways in which nature is "colored", and differences in color vision among various species of animals, see "Seeing Red...and Yellow...and Green...and" by Philip Ball in Natural History, March 2002. See also Colour: Art and Science, edited by Trevor Lamb and Janine Bourriau (1995). In the introduction to their book, Lamb and Bourriau write:
Although the idea of 'colour' may seem a simple concept, it conjures up very different ideas for each of us. To the physicist, colour is determined by the wavelength of light. To the physiologist and psychologist, our perception of colour involves neural responses in the eye and the brain, and is subject to the limitations of our nervous system. To the naturalist, colour is not only a thing of beauty but also a determinant of survival in nature. To the social historian and linguist, our understanding and interpretation of colour are inextricably linked to our own culture. To the art historian, the development of colour in painting can be traced both in artistic and technological terms. And for the painter, colour provides a means of expressing feelings and the intangible, making possible the creation of a work of art.... In the field of colour, the arts and the sciences now travel in unison, and together they provide a rich and comprehensive understanding of the subject.
Other Specific Fiber Energies
There appear to be five basic tastes:
- sweet,
- sour,
- salty,
- bitter, and
- umami, a Japanese word that roughly translates as "savory" or "delicious taste", prominent in soy sauce and monosodium glutamate (MSG). This fifth basic taste was not added to the list until Japanese psychologists and physiologists started doing research on taste; the Europeans who did the initial research in the late 19th and early 20th centuries didn't know anything about soy sauce -- or about umami as such. Umami was identified in 1908 by Kikunae Ikeda, a chemist at Tokyo Imperial University.
- MSG and other substances high in umami stimulate
special glutamate receptors on the tongue.
- A sixth taste has been proposed: kokumi ("rich taste"), prominent in garlic and onions and perhaps stimulated by gamma-glutamyl peptides (and thus chemically related to the glutamate that figures prominently in umami).
- Research is underway to determine whether there are any receptors on the tongue specific to GGP; alternatively, GGP may stimulate the same glutamate receptors as umami.
- More prosaically, it's also been proposed that "fat"
is a sixth taste, based on evidence that there are
specific receptors for fatty acids.
All other tastes appear to be
produced by mixtures of these elements.
Link to an interview with Linda Bartoshuk.
Umami and Ketchup
Heinz ketchup, by far the most popular ketchup in America, if not the entire world (way more popular than Hunt's or Del Monte), is a carefully constructed, balanced mix of all five basic tastes. It's the umami, which comes from ripe tomatoes, combined with the sweetness provided by an extra dose of sugar, and the sourness provided by added vinegar, that really makes the difference (see "The Ketchup Conundrum" by Malcolm Gladwell, New Yorker, 09/06/04; see also "Ketchup and the Collective Unconscious" by Elizabeth Rozin, Journal of Gastronomy, 1988, as well as Rozin's book, The Primal Cheeseburger (1994)).
For a long time, it was thought that
receptors specific to the four "basic tastes" -- sweet,
sour, salty, and bitter -- were located in discrete areas of
the tongue. The sensation of sour was thought to be most
acute at the sides, least at the tip and base. The
sensations of sweet and salty were thought to be most acute at the tip,
least at the sides and base. The sensation of bitter was thought to be
most acute at the base, and on the throat and the palate.
This suggests that there may be four different receptors,
each sensitive to a different basic taste, and each uniquely
distributed on the tongue -- after the manner of the "tongue
map" proposed by E.G. Boring in 1942, based on Hanig's
research. Unfortunately, we now know that the tongue
map is based on a misinterpretation of Hanig's
findings. There actually are differences in
sensitivity of different areas of the tongue to the
different elementary tastes, but these differences are very
small -- too small to justify identifying a particular taste
with a particular area.
Moreover, histological studies have
failed so far to reveal differences in the taste buds at
various sites on the tongue, so the precise organization of
taste remains something of a mystery. While it's true, to
some extent, that different areas of the tongue are
differentially sensitive to the various basic tastes, the
situation is more complicated than this -- not least by the
discovery of umami, that fifth basic taste, which
made Boring's "taste map" more than a little quaint, much
like maps of the world printed before 1492. More critically,
it appears that clusters of cells, each sensitive to
different chemical molecules, are dispersed widely across
the tongue. Much in the manner of color vision, these
clusters may be maximally sensitive to particular molecules,
but they are also responsive to others. And much in the manner
of color vision, the output from these receptors seems to be
integrated at high levels of the nervous system to yield the
sensation of taste.
Our best guess now is that there is
one receptor for each of four of the basic tastes: sweet,
sour, salty, and umami, distributed across the tongue.
For bitter, however, there appear to be dozens of receptors;
and, to make things even more interesting, they're not just
found in the tongue. Cells that respond to "bitter"
tastes are found in lots of different regions of the body --
in the brain, in the respiratory system, in the
gastrointestinal system, even in the genitourinary
system. It appears that these bitter receptors are
part of a second kind of immune system which responds
specifically to poisonous substances. Of course, by
the time a poison triggers a bitter receptor in the colon or
the bladder, it's already been ingested. But that's
where this second immune system kicks in, triggering various
bodily responses that fight bacteria in various ways -- for
example, by flushing them out of the body in urine.
(See "Bitter Taste Bodyguards" by R.J. Lee and N.A. Cohen, Scientific
Tasting "Hot" and "Cold"
What about things like chilies, that taste "hot", or like mint, which tastes "cold"? The substances themselves aren't all that hot, necessarily: if you chomp down on a lukewarm jalapeno chili, or get some in your eye while you're cooking (Don't try this at home!), you'll feel plenty of heat; and even peppermint tea or a menthol cigarette (Don't do this either!) feels somehow cool. Researchers at UCSF discovered in 1997 that a particular protein on the surface of some nerve cells responds to both heat and capsaicin, the central ingredient that makes chilies, and other peppers, peppery. In 2002, the same group reported that another protein responds to both cold and menthol, the ingredient that makes peppermint and other mints minty.
In smell, there appear to be six distinct odor qualities: fragrant, ethereal, resinous, spicy, putrid, and burned.
With respect to specific fibers
corresponding to these qualities, the situation seems to be
rather more complicated than for taste. People can recognize
about 10,000 different odors, but there are only --
only! -- about 1,000 different cells buried in the
olfactory epithelium, each presumably responsive to a
specific odorant molecule. In 1991, Richard Axel and Linda
Buck reported that each of these cells is, in fact,
sensitive to more than one chemical, though to different
degrees -- in somewhat the same way that the rods and cones
in the retina respond to many different wavelengths of
light. The aggregate responses of these cells, then,
constitute a complex -- think of a musical chord -- that
represents each different identifiable odor. For their work,
Axel and Buck received the 2004 Nobel Prize in Physiology or Medicine.
While people are extraordinarily good at distinguishing one odor from another, they are extraordinarily bad at identifying and naming those odors -- although this ability can improve with practice. This fact led Yeshurun and Sobel (2010) to suggest that the primary quality of olfaction is not spicy or fragrant, etc., but rather simply pleasantness-unpleasantness. Remember factor analysis, from the lectures on Methods and Statistics in Psychology? When subjects were asked to describe odors, the primary factor turns out to be a dimension of pleasantness-unpleasantness. And the same primary factor emerged when investigators analyzed the molecular features of a large number of different odorants. The implication is that pleasantness is central both to the chemical structure of the odor stimulus and the resulting sensory experience. On the other hand, it may be that pleasantness is a kind of "superfactor" that subsumes other basic qualities, like spicy, fragrant, and ethereal vs. resinous, putrid, and burned as subordinate qualities. Something similar has been proposed for the structure of intelligence, as discussed later in the lectures on Thinking.
In the tactile
sense, there is some controversy about basic
qualities, but the following have been offered as candidates:
- roughness (vs. smoothness)
- softness (vs. hardness)
- wetness (vs. dryness)
There are actually two different temperature senses:
- cold and
- warm.
Each is associated with a different receptor organ: cold, the Krause end-bulbs; warm, the Ruffini end-organs. The sensation of hot comes from the simultaneous stimulation of both structures.
In pain, we can
distinguish between fast and slow pain,
corresponding to "A-delta" and "C" fibers, respectively. We can also
distinguish between sensory pain, which registers
in the somatosensory cortex of the parietal lobe, and suffering,
which registers in the anterior cingulate gyrus of the frontal lobe.
Sensory Psychophysics and Signal Detection
Work on the sensory qualities illustrates the basic psychophysical principle:
every psychological property of a sensation is related to some physical property of the corresponding stimulus.
For example, in audition:
- pitch (high or low) is related to the frequency of vibration;
- loudness (loud or soft) to the amplitude of vibration; and
- timbre (e.g., the difference in sound between a clarinet and an oboe) to the shape of the sound wave, and the number of harmonics it produces.
Each of these properties is transduced by the sensory receptors, transmitted to the cortex, and detected by the observer.
But not all stimuli are detected by the observer. Some stimulus energies don't fall on the sensory surfaces. Others do, but are too weak to be sensed by the organism. Classical psychophysics holds that detection of a stimulus is largely a function of the intensity of that stimulus. Thus, we next turn our attention to intensity -- the one quality that all sensory modalities have in common.
Just as each neuron has
a threshold for firing, so each modality has a
threshold for detection -- for conscious
awareness of a stimulus event.
- The absolute threshold is the weakest stimulus that can be detected in a modality. The human sensory apparatus is remarkably sensitive: under optimal circumstances, we can see a candle flame 30 miles away on a dark night; and smell a single drop of perfume in a 6-room house.
- The relative threshold is the smallest change in intensity that can be detected -- the "just-noticeable difference" (JND) in stimulation. Notice that the absolute threshold is a special case of the relative threshold, as it represents the JND between nothing and something.
In psychophysical terms, intensity is related to the amount of stimulus energy falling on the sensory surfaces. The more intense the stimulus, the stronger the sensation.
The Psychophysical Laws
This would seem fairly obvious. But, in fact, there is no isomorphism between physical and sensory intensity. For example, the relative threshold -- the amount of change in physical energy needed before that change is detectable -- depends on the intensity of the original stimulus. If the original intensity is low, even a very small change is noticeable. If it is high, a relatively large change is required.
This principle is represented in Weber's Law:
dI/I = C,
where I = the physical intensity of the original stimulus; dI = the change in intensity that is just noticeable (the JND); and C = a constant known as the Weber Fraction.
Assume a Weber Fraction of 1/10.
If the intensity of the original stimulus is 100 units, then a change is first noticeable at 110 units, not 101.
If the intensity of the original stimulus is 200 units, then a change is first noticeable at 220 units, not 201.
You may wish to do some other problems like this, using different Weber Fractions and different values of I, just to get a feel for the enterprise.
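If you'd rather let a computer do the arithmetic, here is a minimal Python sketch (the function name jnd_threshold is just an illustrative choice, not standard terminology):

```python
def jnd_threshold(intensity, weber_fraction):
    """Intensity at which a change from the original stimulus first
    becomes noticeable, per Weber's Law: dI/I = C, so dI = C * I."""
    return intensity + weber_fraction * intensity

# With a Weber Fraction of 1/10, as in the examples above:
print(round(jnd_threshold(100, 0.10), 2))  # 110.0 -- not 101
print(round(jnd_threshold(200, 0.10), 2))  # 220.0 -- not 201
```

Plugging in a smaller Weber Fraction (say, 1/50 for a more sensitive modality) shows how the JND shrinks while remaining proportional to the original intensity.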
Weber's Fraction differs for each sensory modality, and for each species, and provides a universal index of the sensitivity of a modality. Thus, if a modality is associated with a small Weber's Fraction, it is highly sensitive (small JNDs). In humans, vision and audition are the most sensitive modalities, taste and smell the least. But in cats, Weber's Fraction for smell is 1/300 that of humans.
A more general principle relating sensory intensity to physical intensity is Fechner's Law:
S = k log I,
where S = sensory intensity; I = physical intensity; and k is a constant.
Thus, for every value of I, a corresponding value of S is given by the formula. Ignore the constant k, and assume logarithms to the base 10 (you may wish to get out your calculator and work through some other examples). Thus,
If the physical intensity is 1 unit, then the sensory intensity is 0 units (log 1 = 0).
If the physical intensity is 10 units, then the sensory intensity is 1 unit (log 10 = 1).
If the physical intensity is 100 units, then the sensory intensity is 2 units (log 100 = 2).
If the physical intensity is 200 units, then the sensory intensity is 2.3 units (log 200 = 2.3).
Notice that according to Fechner's Law,
sensation changes more slowly than stimulation.
As stimulation grows from 1 to 200 units, sensation grows only from 0 to 2.3 units. This follows from Weber's Law: 10 additional units of stimulation makes a big difference to sensation at the bottom end of the scale, but a progressively smaller difference as we move towards the top.
If you plot these values on a logarithmic scale, you get a perfectly straight line. The early psychophysicists loved straight lines like this, because they thought they represented truly elegant mathematical laws -- showing that psychology was a real science after all.
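The table above can also be generated in a few lines of Python; this is just a sketch, with the constant k simply set to 1, as in the worked examples:

```python
import math

def fechner(intensity, k=1.0):
    """Sensory intensity under Fechner's Law: S = k * log10(I)."""
    return k * math.log10(intensity)

for i in (1, 10, 100, 200):
    print(i, "->", round(fechner(i), 1))
# 1 -> 0.0, 10 -> 1.0, 100 -> 2.0, 200 -> 2.3
```

Feeding in larger values of I makes the compression vivid: going from 1,000 to 10,000 units of stimulation adds only one more unit of sensation.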
Fechner's Law holds
across a wide range of situations, but there are some
exceptions. For example:
- within limits, perceived length changes precisely with actual length; and
- experienced pain grows more rapidly than the pain stimulus.
These exceptions led S.S. Stevens to propose a more general formulation, Stevens's Law:
S = k * I^N,
where S = sensory intensity; I = physical intensity; k is a constant; and N is an exponent that varies with the quality of sensation.
In words, Stevens' Law states that
for any quality of sensation, there is an exponent that relates changes in sensation to changes in stimulation.
Get out your calculators again, and we'll work through some examples.
Fechner's Law is a special case of Stevens's Law, where the exponent, N, is less than 1.0 (an exponent of 1/2 means the square root; an exponent of 1/3 means the cube root, etc.).
As an example, consider what happens when N = 1/2 (or, in other words, when sensory intensity is the square root of physical intensity):
If the physical intensity is 1 unit, then the sensory intensity is 1 unit (1^1/2 = 1).
If the physical intensity is 10 units, then the sensory intensity is 3.16 units (10^1/2 = 3.16).
If the physical intensity is 100 units, then the sensory intensity is 10 units (100^1/2 = 10).
If the physical intensity is 200 units, then the sensory intensity is 14.14 units (200^1/2 = 14.14).
Again, although the precise values differ from those given in the illustration of Fechner's Law, notice that big changes in I yield small changes in S, just as predicted by Fechner.
Now consider what happens when N = 3/2, an exponent greater than 1.0, as in the case of pain.
If the physical intensity is 1 unit, then the sensory intensity is 1 unit (1^3/2 = 1).
If the physical intensity is 10 units, then the sensory intensity is 31.6 units (10^3/2 = 31.6).
If the physical intensity is 100 units, then the sensory intensity is 1000 units (100^3/2 = 1000).
If the physical intensity is 200 units, then the sensory intensity is 2828 units (200^3/2 = 2828).
Notice that physical intensity grew from 1 to 200 units, but sensory intensity grew from 1 to 2828 units. Thus, even a small change in I produces a big change in S.
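Both the compression table (N = 1/2) and the expansion table (N = 3/2) can be generated with one short Python function; again this is just a sketch, with k set to 1 as in the examples:

```python
def stevens(intensity, exponent, k=1.0):
    """Sensory intensity under Stevens's Law: S = k * I**N."""
    return k * intensity ** exponent

# Compression, N = 1/2: sensation grows more slowly than stimulation.
print([round(stevens(i, 1/2), 2) for i in (1, 10, 100, 200)])
# [1.0, 3.16, 10.0, 14.14]

# Expansion, N = 3/2: sensation grows faster than stimulation, as with pain.
print([round(stevens(i, 3/2), 1) for i in (1, 10, 100, 200)])
# [1.0, 31.6, 1000.0, 2828.4]
```

Trying other exponents (1/3, 1.0, 2.0) gives a feel for how a single parameter moves a modality between compression and expansion.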
For a long time, Stevens' Law was held to be a general psychophysical law that applied to all modalities, and all qualities within a modality, under any circumstances. It was assumed to describe the operating characteristic (a term appropriated from engineering) of the sensory transducers:
- most sensory receptor systems compress stimulation (thus following Fechner's Law), while
- some sensory systems expand stimulation (as in the "pain" exception).
But this is not exactly the case: there are "psychophysical" relations that follow Stevens's Law, yet have nothing to do with physical sensation, or transduction.
- Stevens found an exponent that related the magnitude of the crime (number of dollars stolen or people murdered) to the length of the sentence imposed on convicted criminals;
- Stevens also found another exponent that related the size of the audience to the number of speech dysfluencies (stutterings, etc.) displayed by a public speaker.
- Lewis Fry Richardson (1947) found that a power function (with an exponent less than 1) relates the severity of wars (in terms of casualties) to their frequency: there are many more low-casualty wars than there are high-casualty ones.
- And in an application of "Richardson's power law", Neil Johnson and his colleagues (2011) found that the incidence of terrorist attacks, and of insurgent attacks in the Iraq and Afghanistan wars, followed the same pattern: small incidents are very frequent, while large incidents (like the 9/11 attacks) are very rare.
These kinds of relations go far
beyond the transduction of stimulus energies. Still,
Fechner's Law (Stevens's Law with the exponent < 1) is a
general principle that applies to judgments of all
kinds. You'll encounter it everywhere, if you just
look for it -- frequently enough that it's easy to think of
it as some kind of law of nature.
Signal Detection Theory
More important, there gradually evolved an appreciation that the psychophysical laws themselves were misleading.
One troublesome aspect
of the psychophysical laws was that they were derived from
sterile laboratory conditions, in which there were no
special rewards for detecting a stimulus, and no special
costs for missing one, or "detecting" one that is not
actually there. This is not characteristic of sensory
detection in the real world, where -- at times, at least --
detection accuracy may literally make the difference between
life and death.
- Consider a hungry eagle flying over the desert: if it spots a mouse, it might eat and live another day; but if it misses one, or if it expends energy attacking a figment of its imagination, its probability of survival is reduced.
- Consider, again, a soldier on guard: if there are reports of enemy in the area, she will be especially alert and responsive to any unusual noise; if not, she may slack off somewhat.
So analyses of detection need to take both motivation and expectancy into account.
These considerations gave rise to signal detection theory, a form of psychophysics which takes into account both
- the sensitivity of the nervous system, often represented by the statistic d' ("dee-prime");
- the decision criterion set by the observer, reflecting his or her goals and expectations, and often represented by the statistic "beta".
In the basic signal detection experiment, a stimulus, or signal, is presented against a background of noise. Some trials present both signal and noise, others present just noise (these are known as catch trials); and the task of the subject is to say, on each trial, whether the signal is present. The task is made fairly difficult by the fact that the signal is just slightly more intense than the noise. This situation injects considerable uncertainty into each trial.
Thus, the general signal-detection experiment is structured as follows:
- signal present, observer responds "Yes": a hit;
- signal present, observer responds "No": a miss;
- signal absent (a catch trial), observer responds "Yes": a false alarm;
- signal absent, observer responds "No": a correct rejection.
By comparing the rate of
hits with the rate of false alarms, signal-detection theory
generates two statistics:
- d' (pronounced "dee-prime"), a measure of sensitivity;
- β (pronounced "beta"), a measure of the observer's bias or decision criterion.
Some experiments employ other measures of sensitivity and
criterion, but d' and β are the ones
you'll encounter in most elementary discussions.
- As a rule, d' increases with the difference between the hit rate and the false alarm rate.
- As a rule, β is affected by the observer's overall tendency to say "Yes" (in which case β < 1, indicating a liberal bias, or a low criterion for detecting signals) or "No" (in which case β > 1, indicating a conservative bias, or a high criterion for detecting signals).
The beauty of signal detection theory is that it permits investigators to measure the observer's sensitivity independent of (i.e., uncontaminated by) his or her criterion.
Here is specimen signal-detection
performance by an observer with high sensitivity and no bias:
his hit rate is 100% (50% hits is 100% of trials when the
signal is on), and his false alarm rate is 0. For technical
reasons, it is not possible to calculate d' and
β under these circumstances, but both values would be extreme.
Here is performance by an observer
with less sensitivity to the stimulus, but still no bias:
the false alarm rate is still 0, but the hit rate is now
only 80% (40% hits is 80% of the trials when the signal is
present). In this case, d' = .84.
Here is performance by an
observer who has no sensitivity at all (perhaps he's
blindfolded), but a strong "liberal" bias toward saying
"yes", the stimulus is present, even when it's not --
generating an 80% hit rate, but also an 80% false-alarm
rate. In this case, d' = 0. For technical reasons,
it's not possible to calculate β, but I use the
example anyway to give you an idea of what a low criterion
would do to performance.
And here is
performance by another observer who has no sensitivity at
all, but a strong "conservative" bias toward saying "no",
the stimulus is not present, even when it is -- generating a
hit rate of only 30%, but, more critically, an equivalent
false-alarm rate of 30%. In this case, again, d' =
0. For technical reasons, it's not possible to calculate
β, but I use the example anyway to give you an idea
of what a high criterion would do to performance.
Here's a more realistic
example of an observer who actually has some sensitivity
(the blindfold is removed, or maybe he's seeing through only
one or two layers of cloth), but also a liberal response
bias -- he says "yes" on 60% of the trials, but his hit rate
is 80%, much greater than his false alarm rate of 40%. In
this case, d' = 1.09, and β = .72.
And here is another more realistic example of an observer with some
degree of sensitivity, but also a conservative response
bias: he says "yes" only 30% of the time, but his hit rate
of 50% is still greater than his false-alarm rate of 10%. In
this case, d' = 1.28, and β = 2.27.
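For readers who want to check these numbers, here is a short Python sketch using the standard equal-variance Gaussian model, in which d' is the difference between the z-transformed hit and false-alarm rates, and β is the ratio of the normal densities at those two points (the function names d_prime and beta are illustrative):

```python
from statistics import NormalDist

STD_NORMAL = NormalDist()  # standard normal distribution

def d_prime(hit_rate, fa_rate):
    """Sensitivity: z(hit rate) - z(false-alarm rate)."""
    return STD_NORMAL.inv_cdf(hit_rate) - STD_NORMAL.inv_cdf(fa_rate)

def beta(hit_rate, fa_rate):
    """Bias: likelihood ratio at the observer's decision criterion."""
    return (STD_NORMAL.pdf(STD_NORMAL.inv_cdf(hit_rate)) /
            STD_NORMAL.pdf(STD_NORMAL.inv_cdf(fa_rate)))

# Liberal observer: hits 80%, false alarms 40%
print(round(d_prime(0.80, 0.40), 2), round(beta(0.80, 0.40), 2))  # 1.09 0.72
# Conservative observer: hits 50%, false alarms 10%
print(round(d_prime(0.50, 0.10), 2), round(beta(0.50, 0.10), 2))  # 1.28 2.27
```

Note that the formulas break down when the hit or false-alarm rate is exactly 0 or 1 (the z-transform goes to infinity), which is the "technical reason" the extreme examples above cannot be computed.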
Within this framework, the observer's expectations are manipulated by varying the proportion of catch trials, in which the signal is actually off. If there are relatively few such trials, subjects will be biased toward saying "yes" even when they are uncertain -- after all, the signal is usually present. If there are relatively many such trials, they will be biased toward saying "no".
Thus, when the stimulus is present on 70% of the trials, the observer will
tend to adopt a liberal response bias, lowering his
criterion for saying "yes". Still, his hit rate of 74% is
greater than his false-alarm rate of 60%. In this case, d' = 0.39, and β = .84.
Conversely, when the stimulus is present on only 30% of the trials, the
observer will likely adopt a conservative response bias,
raising his criterion for saying "yes". Even so, his hit
rate (60%) is greater than his false-alarm rate (17%). In this case, d' = 1.21, and β = 1.53.
In addition, the observer's motivations may be manipulated by means of a payoff matrix. If there is a high payoff for hits, and no penalty for false alarms, the subject will be very liberal in saying "yes": he or she will make lots of false alarms, but that's all right. If there is a high penalty for false alarms, the same subject will become very conservative in saying "yes", because each error costs dearly.
Here is a payoff matrix
intended to induce a liberal response bias: the observer
gets 25 cents for every hit, but loses only 10 cents for
every false-alarm. Under these circumstances, assuming a
balanced design with 50% catch trials, the observer will win
more than he loses.
And here is a
payoff matrix intended to induce a conservative response
bias: the observer gets only a dime for every hit, and loses
a quarter for every false alarm. Under these circumstances,
the observer will be very careful, adopting a relatively
high criterion for saying "yes".
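A quick expected-value calculation shows why these payoff matrices move the criterion. The sketch below makes some simplifying assumptions not stated in the text: a balanced design (50% catch trials), an indiscriminate observer who says "yes" on every trial, and no payoff for misses or correct rejections:

```python
def expected_winnings(p_signal, hit_pay, fa_cost, p_yes=1.0):
    """Expected winnings per trial for an observer who says "yes" with
    probability p_yes regardless of the stimulus (misses and correct
    rejections are assumed to pay nothing)."""
    hits = p_signal * p_yes * hit_pay
    false_alarms = (1 - p_signal) * p_yes * fa_cost
    return hits - false_alarms

# Liberal matrix: 25 cents per hit, 10 cents lost per false alarm
print(round(expected_winnings(0.5, 0.25, 0.10), 3))   # 0.075 -- yes-saying pays
# Conservative matrix: 10 cents per hit, 25 cents lost per false alarm
print(round(expected_winnings(0.5, 0.10, 0.25), 3))   # -0.075 -- yes-saying costs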
Signal-detection theory has a number of important practical applications. For example, imagine a radiologist examining an X-ray, or CT or MRI scan, for evidence of a tumor (here's a mammogram). The tumor, if it's present, is a "signal" that is visible against a background of "noise" generated by other tissues in the area. So, this is a classic problem in signal detection. If the patient has a family history of breast cancer, that increases the risk that she too has a tumor, so the radiologist may adopt a relatively low criterion for detecting the tumor -- meaning, of course, that there is an increased risk of a false alarm (it might just be a benign fibroid mass). But then again, he also has to consider the costs of a false alarm (e.g., an unnecessary biopsy or exploratory surgery) against the costs of a miss (e.g., that a tumor goes undetected, and thus untreated, and metastasizes to other parts of the body). So, again, the performance of a radiologist on a task like detecting tumors is a matter, not just of the physician's sensitivity, but also of the criterion he or she has set, based on expectations and goals.
SDT also has a number of theoretical implications. In the first place, it denies that there is such a thing as absolute sensitivity. Sensitivity varies with expectations and motives. This undercuts the older view of the passive observer, as discussed in the lectures on learning: the stimulus is transduced by the receptor organ, transmitted through the nervous system, and -- provided it is intense enough -- detected by the observer. And it substitutes a view of the active observer: the stimulus is transduced and transmitted as before, but the observer must make a judgment about the stimulus: whether it is present, or whether it has changed. These decisions are influenced by the person's goals and expectations.
If, indeed, there is no such thing as
absolute sensitivity, maybe the whole idea of sensory
thresholds is misleading as well. The concept of the sensory
threshold comes to us from Johann Herbart (1811), who
proposed that every sensation must cross a limen
(German for "border" or "threshold", derived from the Latin
limes, meaning "frontier") in order to be represented
in consciousness. But Herbart suggested that sensations that
do not cross the threshold are still present in the nervous
system -- even though they don't influence behavior.
The very first experiment on subliminal perception was, in fact,
the very first psychological experiment published in the
United States -- in 1884, by Charles S. Peirce (pronounced
"Perss"), a philosopher at Johns Hopkins, and Joseph
Jastrow, his graduate student (and the first person to
receive a PhD in psychology from an American university).
Peirce and Jastrow were interested in Herbart's concept of
the limen, and they wanted to determine whether
there was any such thing -- that is, a level of stimulation
so weak that it has no effect on the perceiver.
In their experiments, an observer was required to distinguish between the heaviness of two weights, or the brightness of two lights -- thus, their experiments were actually concerned with difference thresholds. In addition to making the judgment of relative weight (or brightness), the observers were also asked to rate their confidence in their judgment. As the difference between the two weights diminished, so did the observers' confidence in their judgments. But even when the observers had zero confidence in their judgments, meaning that they had no conscious appreciation of the difference between the two stimuli, still their forced-choice judgments were more correct than would be expected by chance.
Here's an instance of the value of statistical analysis. We could debate all day whether a 60% hit rate was larger than the 50% hit rate expected by chance. But Peirce and Jastrow actually devised new statistical techniques to demonstrate that this difference was, indeed, statistically significant.
Later, the concept of
subliminal perception became quite controversial, with some
commentators (like Vance Packard) warning about the threat
of "hidden persuaders", and others asserting that subliminal
perception didn't really occur, and that investigators like
Peirce and Jastrow just didn't establish their thresholds
rigorously enough. Over the past 100 years, however, a large
number of experiments like Peirce and Jastrow's have been
performed, using a variety of different methodologies.
- Sometimes, as in Peirce & Jastrow, the stimulus is too weak to be consciously perceived.
- Or the stimulus is presented too briefly to be consciously perceived.
- The stimulus may be accompanied by a "masking" stimulus that renders it consciously imperceptible.
- The stimulus may be presented in such a manner that it is not consciously attended.
In all of these paradigms, the stimulus has some effect on the perceiver's experience, thought, or action, even though the perceiver does not consciously perceive the stimulus. This is not a matter of simple guessing, because in these cases the hit rate is greater than the false-alarm rate -- that is, in the terms of signal-detection theory, d' is above zero.
Because in many of these
cases the stimulus in question is not truly subliminal, in
the sense of being weak, but rather is brief, or masked, or
simply unattended, the concept of subliminal
perception has been replaced by the concept of implicit
perception (Kihlstrom et al., 1992).
- Explicit perception entails conscious perception of the stimulus.
- Implicit perception refers to any effect of a stimulus on the perceiver's experience, thought, or action.
It turns out that explicit and implicit perception can be dissociated, so that the stimulus can affect the perceiver's behavior even though it is not consciously perceived. That is what we mean, these days, when we refer to "subliminal" perception -- even though, technically, the stimulus may not be truly subliminal, it is still processed outside conscious awareness.
Implicit perception is real, and it's interesting, but that doesn't mean that it's all that powerful. Peirce and Jastrow's observers were right only about 60% of the time, after all. In fact, truly subliminal effects on perception are relatively weak, and analytically limited. There's only so much you can do with a subliminal stimulus; anything more requires conscious perception. There's no reason to fear "subliminal" advertising, or any other form of hidden persuasion. Subliminal perception can't make you go out and buy a Coke at the movies -- but it might influence you to buy a Coke, rather than a Pepsi, if you're already thirsty.