Note: Some material in this supplement is taken from my essay on "The Automaticity Juggernaut" (2008).
For details on automaticity, see the lecture supplement on "Attention and Automaticity".
As William James (1890) famously put it: "Everyone knows what attention is. It is the taking possession by the mind, in clear and vivid form, of one out of what seem several simultaneously possible objects or trains of thought. Focalization, concentration, of consciousness are of its essence. It implies withdrawal from some things in order to deal effectively with others, and is a condition which has a real opposite in the confused, dazed, scatterbrained state which in French is called distraction, and Zerstreutheit in German."
In a very real sense, modern scientific research on consciousness began with studies of attention and short-term memory inspired by James' introspective analyses. Consciousness has a natural interpretation in terms of attention, because attention is the means by which we bring objects into conscious awareness. Similarly, maintaining an object in primary (short-term, working) memory is the means by which we maintain an object in conscious awareness after it has disappeared from the stimulus field. Attention is the pathway to primary memory, and primary memory is the product of attention.
Attention faded into the background during the heyday of behaviorism, but was revived in the context of applied research conducted around World War II, well prior to the outset of the cognitive revolution. Research on such problems as air traffic control and the efficient processing of telephone numbers made two facts clear: people cannot process everything in the stimulus environment at once, and what they do process is selected by attention.
The earliest psychological theories of attention were based on the idea that attention represents a "bottleneck" in human information processing: by virtue of the bottleneck, some sensory information gets into short-term memory, and the rest is essentially cast aside.
The first formal theory of attention was proposed by Broadbent (1958): aside from its substantive importance, Broadbent's theory is historically significant because it was the first cognitive theory to be presented in the form of a flowchart, with boxes representing information-storage structures, and arrows representing information-manipulation processes.
Broadbent's theory, in turn, was based on Cherry's (1953) pioneering experiments on the cocktail party phenomenon. At a cocktail party, there are lots of conversations going on, and individual guests attend to one and ignore the others. Cherry argued that attentional selection in such a situation was based on physical attributes of the stimulus, such as location, voice quality, etc. He then simulated the cocktail-party situation with the dichotic listening procedure, in which two different auditory messages are presented to different ears over earphones; subjects are instructed to repeat one message as it is played, but to ignore the other. The general finding of these experiments was that people had poor memory for the message presented over the unattended channel: they did not notice certain features, such as a switch in language, or a switch from forwards to backwards speech. However, they did notice other features, such as whether the unattended channel switched between a male and a female voice.
From experiments like this Broadbent concluded that attention serves as a bottleneck, or perhaps more accurately as a filter. The stimulus environment is exceptionally rich, with lots of events, occurring in lots of different modalities, each with lots of different features and qualities. All of this sensory information is held in a short-term store, but people can attend to only one "communication channel" at a time: Broadbent's is a model of serial information processing. Channels are selected for attention on the basis of their physical features, and semantic analysis occurs only after information has passed through the filter. Attentional processing is serial, but people can shift attention flexibly from one channel to another.
Broadbent's model has two important features: attentional selection is based on the physical characteristics of the stimulus, and semantic analysis occurs only post-attentively, after information has passed through the filter.
Broadbent's filter model of attention laid the foundation for the earliest multi-store models of memory, such as those proposed by Waugh & Norman (1965) and Atkinson & Shiffrin (1968). In these models, stimulus information is briefly held in modality-specific sensory registers. A limited amount of sensory information is transferred to short-term memory by means of attention, and is maintained in short-term memory by means of rehearsal. Under certain circumstances, information in short-term memory can be copied into long-term memory, and information in long-term memory can, in turn, be copied into short-term memory.
Broadbent's filter model of attention was a good start, but it was not quite right. For one thing, at cocktail parties our attention may be diverted by the sound of our own name -- a phenomenon confirmed experimentally by Moray (1959) in the dichotic listening situation. In addition, further dichotic listening experiments by Treisman (1960) showed that subjects could shift their shadowing from ear to ear to follow the meaning of the message; when they caught on to the fact that they were now shadowing the "wrong" ear, they shifted their attention back to the original ear. But the fact that they shifted their attention at all suggested that they had processed the meaning of the unattended message to at least some extent. These findings from Moray's and Treisman's experiments meant that there had to be some possibility for preattentive semantic analysis, permitting people to shift their attention in response to the meaning and implications of a stimulus, and not just its physical structure.
For these reasons, Treisman (1964) modified Broadbent's theory. In her view, attention is not an absolute filter on information, but rather something more like an attenuator or volume control. Thus, attention attenuates, rather than prohibits, processing of the unattended channel; but this attenuator can also be tuned to contextual demands.
At a cocktail party, you pay most attention to the person you're talking to -- attentional selection that is determined by physical attributes such as the person's spatial location. At the same time, you also remain alert for people talking about you -- thus, attentional selection is open to semantic information.
The bottom line is that attention is not determined solely by physical attributes, but rather can be deployed depending on the perceiver's goals. This situation is analogous to signal detection theory, where detection is not merely a function of the intensity of the stimulus, and the physical acuity of the sensory receptors, but is also a function of the observer's expectations and motives.
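The signal-detection analogy can be made concrete. In signal detection theory, sensitivity (d') indexes the stimulus-driven contribution to detection, while the response criterion (c) indexes the observer's expectations and motives. A minimal sketch in Python (the function name and the hit and false-alarm rates are my own illustrative choices, not data from any actual experiment):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Compute signal-detection sensitivity (d') and criterion (c)
    from hit and false-alarm rates.

    d' reflects how physically discriminable the signal is;
    c reflects the observer's response bias (expectations, motives).
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Hypothetical observer: 80% hits, 20% false alarms.
d_prime, c = sdt_measures(0.80, 0.20)
print(round(d_prime, 2))  # -> 1.68
# With symmetric hit and false-alarm rates, c is (approximately) zero:
# the observer shows no bias toward saying "yes" or "no".
```

Two observers with identical d' can behave very differently if their criteria differ -- which is exactly the point of the analogy: what gets detected (or attended) depends on the observer, not just the stimulus.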
Treisman's model, like Broadbent's, is historically important: it was the first truly cognitive theory of attention, in that it departs from the image of bottom-up, stimulus-driven information processing and offers a clear role for top-down influences based on the meaning of the stimulus. Broadbent (1971) subsequently adopted Treisman's modification.
Treisman's revised filter model has important implications for consciousness. In her model, preattentive processing is not limited to information about physical structure and other perceptual attributes. Semantic processing can also occur preattentively, at least to some extent. The question is:
How much semantic processing can take place preattentively?
Treisman's theory altered the properties of the filter/attenuator, but retained its location early in the sequence of human information processing. The next stage in the evolution of theories of attention was to move the filter to a later stage, and then to abandon the notion of a filter entirely.
Late-selection theories of attention (Deutsch & Deutsch, 1963; Norman, 1968) held that all sensory input channels are fully analyzed; only after sensory information has been analyzed is attention deployed, based on the pertinence of the analyzed information to ongoing tasks.
The debate between early- and late-selection theories formed the background for the controversy, which we will discuss later, concerning "subliminal" processing -- i.e., processing of stimuli presented under conditions where they are not detected. The conventional view, consistent with early-selection theories, is that subliminal processing either is not possible, or else is limited to "low level" perceptual analyses. The radical view, consistent with late-selection theories, is that subliminal processing can extend to semantic analyses as well -- because semantic processing, as well as perceptual processing, occurs preattentively.
The debate between early-selection and late-selection theories was very vigorous, and you can still see vestiges of it today. But, like so many such debates, it seemed to get nowhere (similar endless, fruitless debates killed structuralism; in modern cognitive psychology, similar debates have been conducted over "dual-code" theories of propositional and analog/imagistic representations).
Capacity theorists cut through the seemingly endless debate between early- and late-selection theories by altering the definition of attention from some kind of filter (with the debate over where the filter is placed) to some kind of mental capacity. In particular, Kahneman (1973) defined attention as mental effort.
In Kahneman's view, the person's attentional resources are limited, and vary according to his or her level of arousal. Resources are allocated according to a "policy", which in turn is determined by the person's enduring dispositions and momentary intentions.
Other theorists likened attention to a spotlight (Posner et al., 1980; Broadbent, 1982). In this metaphor, attention illuminates a portion of the visual field. The illumination can be spread out broadly, or it can be narrowly focused. If spread out broadly, attentional "light" can be thrown on a large number of objects, but not too much on any of them. If focused narrowly, the "light" can provide a detailed image of a single object, at the expense of all the others. As the attentional load increases, the scope of the attentional spotlight narrows.
Similarly, attention can be likened to the zoom lens on a camera (Jonides, 1983; Eriksen & St. James, 1986). As load increases, the lens narrows, so that only a small portion of the field falls on the "film". But at low loads, a great deal of information can be processed, though there will be some loss of detail.
The spotlight metaphor also raised the question of whether the attentional beam can be split to illuminate two (or more) noncontiguous portions of space.
The spotlight metaphor has proved valuable in illuminating (sorry) various aspects of attention. For example, Posner has distinguished among different components of attention -- alerting, orienting, and executive control -- each associated with its own brain network.
The traditional view, associated with early-selection "filter" theories, was that elementary information-processing functions are preattentive, performed unconsciously (or, perhaps better put, preconsciously), and require no attention. By the same token, complex information-processing functions, including (most) semantic analyses, must be performed post-attentively, or consciously.
The revisionist view, associated with late-selection theories and capacity theories, agreed that elementary information-processing functions were preattentive, performed unconsciously or preconsciously. But it asserted that complex processes, including semantic analyses, could be performed unconsciously too, so long as they were performed automatically.
Early in the history of cognitive psychology, there was a tacit identification of cognition with consciousness. Elementary processes might be unconscious, in the sense of preattentive, but complex processes must be conscious, in the sense of post-attentive. But the evolution of attention theories implied that a lot of cognitive processing was, or at least might be, unconscious.
LaBerge & Samuels (1974; LaBerge, 1975) argued that complex cognitive and motoric skills cannot be executed consciously, because their constituent steps exceed the capacity of attention. Thus, at least some components of skilled performance must be performed automatically and unconsciously. LaBerge & Samuels defined automatic processes as those which permit an event to be immediately processed into long-term memory, even if attention is deployed elsewhere.
LaBerge & Samuels illustrated their concept of automaticity with a model of the hierarchical coding of stimulus input in reading. Automaticity is classically illustrated by the Stroop effect. In the Stroop experiment, subjects are presented with an array of letter strings printed in different colors, and their task is to name the color of the ink in which each string is printed.
In the Stroop effect, the presence of a word interferes with color-naming -- especially if the word is itself a contradictory color name (but, interestingly, even if the word and the color are congruent). The "Stroop interference effect" occurs regardless of intention: Clear instructions to ignore the word, and focus attention exclusively on the ink color, do not eliminate the interference effect. The explanation is that we can't help but read the words -- reading occurs automatically, despite our intentions to the contrary.
The Stroop effect comes in many forms, not all of which involve colors and color words. For example, Stroop interference can be observed when subjects are asked to report the number of elements in a string, and the elements themselves consist of digits (for example, the string "444", which contains three elements) rather than symbols that have nothing to do with numerosity.
Posner & Snyder (1975a, 1975b) distinguished between automatic and strategic processes; Hasher & Zacks (1979, 1984), between automatic and effortful processes. Like Posner & Snyder and Hasher & Zacks, Schneider & Shiffrin (1977) distinguished between automatic and controlled processes.
Schneider & Shiffrin argued that while processing of specific pieces of information could be automatized, generalized skills could not. In contrast, Spelke, Hirst, & Neisser (1976) demonstrated that a generalized skill (taking dictation while reading at a high level of comprehension) could be automatized, given sufficient practice. In their experiment, subjects (actually paid as work-study students!) performed a divided-attention task in which they read prose passages and simultaneously took dictation of individual words. The subjects practiced this task for 17 weeks, 5 sessions per week -- with every session employing different stories and lists. The result was that reading speed progressively improved, with no decline in comprehension. Apparently, the subjects automatized the dictation process, so that it no longer interfered with reading for comprehension.
But what about the dictated lists? Initially, the subjects had very poor memory for the words presented on the dictation task. On later trials, they showed better memory for the dictation list, and were able to report when the list items contained rhymes or sentences. They also showed integration errors, such that they remembered lists like "The rope broke. Spot got free. Father chased him." as "Spot's rope broke." Integration errors indicate that the subjects processed the meaning of the individual sentences, and the relations among the sentences, automatically -- ordinarily this would be considered to be a very complex task.
Although Spelke et al. wished to cast doubt on the whole notion of attention as a fixed capacity, they also expanded the boundaries of automaticity by showing that even highly complex skilled performance could be automatized, given enough practice.
The notion of automaticity has attracted a great deal of interest, based on the widely shared belief that there are two broad categories of cognitive processes, automatic and controlled, which generally share a common set of defining features.
Most work on automaticity assumes that there are two categories of tasks (and, correspondingly, two categories of underlying cognitive processes), automatic and controlled. An alternative view, proposed by Larry Jacoby, is that every task has both automatic and controlled components to it, in varying degree. Jacoby has developed a process-dissociation procedure, based in turn on the method of opposition, to determine the extent to which performance reflects the operation of controlled and automatic processes.
In a typical example of the method of opposition, subjects might first study a list of words, such as density. Then they might receive a stem-completion test, in which they are presented with three-letter stems, and asked to complete these stems with the first word that comes to mind. Some of the stems are "targets" drawn from items of the "old" study list, such as den____. Other items are new, unstudied "lures" such as nec____. A typical finding is that subjects are more likely to complete old stems with items from the study list, such as density (as opposed to dentist). This is known as a priming effect. Priming is often held to be an automatic consequence of list presentation.
In order to study the nature of this priming effect, Jacoby compared performance under two conditions, Inclusion and Exclusion. On the Inclusion task, subjects are asked to complete each stem with an item from the studied list or, failing that, with the first word that comes to mind; targets can therefore appear either through conscious recollection or, failing that, through automatic priming:

I = C + A(1 - C).

Similarly, Jacoby assumes that targets can appear on the Exclusion task -- where subjects are asked to complete each stem with a word that was not on the studied list -- only when automatic priming escapes conscious control. Thus,

E = A(1 - C).

By simple algebra, then, the controlled and automatic components can be estimated as C = I - E and A = E / (1 - C).
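To make the algebra concrete, here is a minimal Python sketch of Jacoby's process-dissociation computation (the function name and the completion rates are hypothetical, chosen only to illustrate the arithmetic, not drawn from any actual study):

```python
def process_dissociation(inclusion, exclusion):
    """Estimate the controlled (C) and automatic (A) components of
    performance from target-completion rates on Inclusion and
    Exclusion tests, per Jacoby's equations:

        Inclusion: I = C + A * (1 - C)
        Exclusion: E = A * (1 - C)
    """
    C = inclusion - exclusion      # C = I - E
    A = exclusion / (1 - C)        # A = E / (1 - C)
    return C, A

# Hypothetical rates: targets complete 60% of old stems under
# Inclusion instructions, but still 20% under Exclusion.
C, A = process_dissociation(0.60, 0.20)
print(round(C, 3), round(A, 3))  # -> 0.4 0.333
```

Note that the subtraction isolates conscious recollection: anything that leaks through under Exclusion instructions, despite the subject's intention to withhold studied words, is attributed to the automatic component.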
To see how the process-dissociation procedure works in practice, consider the matter of age differences in memory. It is widely known that old people have poorer memories than the young. Moreover, it turns out that age differences in memory are greatest on tests of free recall; recognition testing often abolishes the age difference entirely.
The question is why this is so, and there are lots of possible answers. One explanation, based on Mandler's dual-process theory of recognition, is that recall requires active retrieval of trace information from memory, whereas recognition can be mediated by two different processes: deliberate retrieval, and an automatic feeling of familiarity.
One explanation of the age difference in memory, then, is that aging impairs retrieval but spares familiarity. This will produce a decrement in recall among the elderly, but not in recognition -- not so long as the elderly rely on the automatic familiarity process, anyway.
To determine whether age differences in memory reflect differences in the controlled or automatic components of memory processing, Jacoby and his colleagues put young and old subjects in the stem-completion paradigm described above, under both Inclusion and Exclusion conditions. In the Inclusion condition, the subjects were instructed to complete each stem with an item from the studied wordlist -- or, failing that, with the first word that came to mind. In the Exclusion condition, they were instructed to complete each stem with the first word that came to mind, provided that it was not an item from the studied list. Applying the process-dissociation equations to performance in these two conditions showed that the age difference in memory performance is due entirely to the controlled component (where the old have lower values than the young). There are no age differences in the automatic component (as Hasher & Zacks would predict).
The process-dissociation procedure has been challenged in terms of its underlying assumptions -- for example, if automatic processes are components of controlled processes, then automatic and controlled processes are not strictly independent of each other, as Jacoby's formulas assume. However, the process-dissociation procedure has been widely embraced as a means for isolating, and measuring, the automatic and controlled components of performance on any task, whether nonsocial or social in nature.
Traditionally, automaticity has been studied in the relatively sterile confines of the cognitive laboratory, but automaticity can also be observed in the "real world". Most such observations fall into the province of social psychology, which from a cognitive point of view is the study of mind in action: how percepts, memories, and beliefs translate into actual interpersonal behavior.
In its early days, cognitive social psychology tacitly focused on conscious, deliberate thought, as represented by such topics as causal attribution and impression formation. Partly as a reaction to what seemed (to some) to be a cold, ultra-rational view of social interaction, some social psychologists have begun to offer accounts that emphasize automatic, unconscious processes instead.
An early illustration of automaticity concerned behavior at the photocopy machine (Langer et al., 1978). Subjects making photocopies tended to allow an experimental confederate to interrupt their work, even if the confederate failed to give a good reason. They just automatically complied with a social request.
The current emphasis on automaticity in social behavior is, actually, a revival of a traditional, precognitive position in social psychology which held that social thought and action are constrained by situational influences -- a situationism exemplified by Stanley Milgram's studies of obedience to authority, and Stanley Schachter's work on emotion. (I think that it is not a coincidence that Langer was a Milgram student.)
Within cognitive psychology, it has now become commonplace to assume that many aspects of human performance are mediated by some combination of automatic and controlled processes. The same assumption took hold in social psychology as well, giving rise to a number of "dual-process" theories of various aspects of personality and social interaction.
An excellent example of a dual-process theory is the "two systems" view of judgment and decision-making proposed by Daniel Kahneman, recipient of the Nobel Prize in economics (and former UCB psychology professor). "System 1" is automatic, fast, and unconscious, and is involved in "heuristic" reasoning and aspects of "hot" cognition involving emotion, stereotypes, prejudice, and the like. "System 2" is controlled, slow, and conscious; it handles the algorithmic reasoning of "cold" cognition, logical and rational. Many tasks set the two systems on a race to task-completion. But because System 1 is faster than System 2, it tends to win out.
Similarly, some social psychologists have begun to assert that social cognition and behavior is dominated by automatic, unconscious processes, such that controlled, conscious processes play little role in behavior -- as in the following quotes:
The upshot of all this has been the emergence of what I have called the automaticity juggernaut (Kihlstrom, 2008) -- the wholesale embrace, by a large number of social psychologists, of the following propositions.
Carl Sagan's "Great Demotions"
The concept of automaticity was an important advance in cognitive theory, as it offered a resolution of the dispute between early- and late-selection theories of attention (Pashler, 1998). According to the early-selection view, pre-attentive, preconscious processing was limited to analyses of the physical features of a stimulus; in theory, analysis of meaning required the conscious deployment of attention. According to the late-selection view, even meaning analyses were conducted preattentively. Automaticity theory permitted complex, semantic analyses to be carried out preattentively, and thus preconsciously, so long as they were automatized -- for example, through extensive practice. In later developments, automaticity became detached from attention theory, and was re-interpreted in terms of memory (J. R. Anderson, 1992; G. D. Logan, 1988). In addition, cognitive psychologists began to develop experimental paradigms, such as the process-dissociation procedure (L. L. Jacoby, 1991), by which they could estimate the contributions of automatic and controlled processes to task performance.
Following its embrace by cognitive psychology, the concept of automaticity quickly spread to other domains, particularly personality and social psychology. For example, Nisbett and Wilson (1977) clearly had automaticity in mind when they argued that we are consciously aware of the contents of our minds, such as beliefs and attitudes, but unaware of the processes that generated those contents: "We have no direct access to higher-order mental processes such as those involved in evaluation, judgment, problem solving, and the initiation of behavior."
Similarly, Langer asserted that most social interactions are unreflective and mindless, following highly learned, habitual scripts that require very little conscious attention and deliberation:
"[M]indlessness may indeed be the most common mode of social interaction" (E. Langer, Blank, & Chanowitz, 1978).
"Unless forced to engage in conscious thought, one prefers the mode of interacting with one's environment in a state of relative mindlessness.... This may be the case, because thinking is effortful and often just not necessary" (E. J. Langer, 1978).
Along these lines, Taylor and Fiske (1978) argued that people are "cognitive misers" laboring under limited cognitive capacity, and preferring "top of the head" judgments to reasoned, thoughtful appraisals. Smith and Miller (1978) were perhaps the first to explicitly invoke the concept of automaticity, as it was then emerging in cognitive psychology, in a commentary on the Nisbett/Wilson paper. From their point of view, limitations on introspective access occurred because salient social stimuli are processed, and responded to, automatically.
Thereafter, a number of social psychologists explicitly referred to the concept of automaticity in designing and interpreting experiments on attitudes and social judgments. For example, Higgins (1981) distinguished between two sources of automatic priming effects on social judgments, chronic and temporary. Bargh (1982) showed that presentation of self-relevant adjectives over the unattended channel in a dichotic listening task could disrupt shadowing performance, after the manner of the "cocktail-party phenomenon"; and that parafoveal presentation of hostile trait adjectives could bias interpretation of the "Donald story" used in studies of impression formation and person memory (Bargh & Pietromonaco, 1982). By the end of the 1980s, the concept of automaticity had been applied across a large number of domains in personality and social psychology, including prejudice, the self-concept, emotion, trait ascriptions, and ruminative thought. A landmark volume edited by Uleman and Bargh (1989) contained chapters detailing the role of automatic, unintended thoughts in a variety of domains, including the activation of self-beliefs and ruminations in anxiety and depression; the influence of feelings on thought and behavior; the ascription of personality traits and the formation of characterological impressions; heuristic information processing in persuasion; and ironic rebound effects.
For example, John Bargh (1997a) famously argued that:

"As Skinner argued so pointedly, the more we know about the situational causes of psychological phenomena, the less need we have for postulating internal conscious mediating processes to explain these phenomena."
To appreciate Bargh's position, consider an experiment by Bargh et al. (1996). In the experiment, subjects performed a "scrambled sentences" task in which they were given a jumble of words and asked to arrange them into complete sentences. For some subjects the words were related to rudeness, for others to politeness, and for still others the words were neutral. The purpose of this cover task was to prime the concepts of rudeness and politeness.
At the end of the experiment, the subjects emerged from the lab room to find the experimenter engaged in a conversation with another person, who was actually an experimental confederate. The conversation continued for up to 10 minutes, during which the experimenter assiduously ignored the waiting subject. The main result was that subjects in the Rude condition were more likely to interrupt the experimenter than were subjects in the Polite condition, with subjects in the Neutral condition falling somewhere in between.
Bargh's interpretation of this experiment is that reading "rude" words automatically primed the subjects to interpret the experimenter's behavior as rude, and thus made them more likely to behave rudely in turn, interrupting his conversation.
In Bargh's view, most social behavior is preattentive and automatic in nature. It occurs in response to an environmental trigger, in a manner analogous to priming, independent of the person's conscious intentions, beliefs, attitudes, and choices, and also independent of the person's deployment of attention.
For Bargh, automaticity in social behavior begins with a preconscious analysis of the situation. In his analysis of social behavior, Bargh advocates social ignition over social cognition. In his view, automaticity pervades everyday life, and conscious awareness is largely an after-the-fact rationalization of what we have thought and done. On this view, the earlier (if tacit) emphasis on consciousness in social behavior is a holdover from an earlier embrace of serial processing in models of attention and cognition. But now, he asserts, cognitive psychology emphasizes parallel processing -- as in McClelland and Rumelhart's connectionist "parallel distributed processing" models of cognition. Bargh is a cognitive social psychologist, but he no longer equates cognition with consciousness.
After 1989, the concept of automaticity proliferated rapidly through personality and social psychology (Bargh, 1994). A PsycInfo search reveals that prior to 1975, the terms automatic or automaticity had appeared in the abstracts of only 29 articles published in personality and social psychology journals -- and most of these had to do with automatic writing and other aspects of spiritualism. Another six were added by 1980; in the 1980s, there were 40 such articles; and in the 1990s, 115 (for comprehensive coverage of these studies, see D.M. Wegner & Bargh, 1998). By 2006, the new millennium had added more than 181 new papers -- a geometric increase in interest in automaticity, as opposed to the almost perfectly linear increase in the total number of articles published over the same span of time.
Of course, the concept of automaticity gained popularity in its home territory of cognitive psychology, as well -- but with a difference. Cognitive psychologists have maintained a distinction between automatic and controlled processes, and have spent a great deal of effort in assessing their differential contributions to task performance -- as in the process dissociation paradigm (e.g., L. L. Jacoby, 1991). At first, social psychologists followed suit, resulting in a number of "dual-process" theories of attitudes, persuasion, and the like, which described the interplay between automatic and controlled processes (e.g., Chaiken & Trope, 1999). Fairly quickly, however, this balanced perspective began to be replaced by a more single-minded focus on automaticity. For example, Gilbert (1989) argued for the benefits of "thinking lightly about others". And Bargh (2000, p. 938) argued that even intentionally controlled behavior was ultimately automatic in nature, "controlled and determined" by "automatically operating processes". Thus, rather than taking a balanced view of the differential roles of automatic and controlled processing in social interaction, some social psychologists seem to have embraced a view of social thought and action as almost exclusively automatic in nature.
This evolutionary development can be clearly seen in the work of John Bargh, who has been one of the foremost proponents of the concept of automaticity within social psychology. In 1984, writing on "The Limits of Automaticity", Bargh was critical of Langer's position that social interaction proceeded mindlessly:
"A better summary of the mindlessness studies would be that... when people exert little conscious effort in examining their environment they are at the mercy of automatically-produced (sic) interpretations.... Automatic effects are... typically limited to the perceptual stage of processing. There is no evidence... that social behavior is often, or even sometimes, automatically determined" (Bargh, 1984, pp. 35-36).
But only five years later, his position had shifted considerably, as in the editorial introduction to Unintended Thought:
"As most social psychological models implicitly assumed the role of deliberate, calculated, conscious, and intentional thought, the degree to which unintended [automatic] thought did occur in naturalistic social settings became of critical importance.... Langer (1978) emphatically rejected the assumption of deliberate, conscious, thought as typically underlying social behavior.... Our own research programs have followed in this tradition..." (Bargh & Uleman, 1989, pp. xiv-xv).
And in his own contribution to that volume:
"Is this to say that one is usually not in control of one's own judgments and behavior? If by "control" over responses is meant the ability to override preconsciously suggested choices, then the answer is that one can exert such control in most cases.... But if by "control" is meant the actual exercise of that ability, then the question remains open.... My own hunch is that control over automatic processes is not usually exercised.... [I]t would appear that only the illusion of full control is possible, as the actual formation of a judgment or decision.... A fitting metaphor for the influence of automatic input on judgment, decisions, and behavior is that of the ambitious royal advisor upon whom a relatively weak king relies heavily for wisdom and guidance" (pp. 39-40).
Only one year later, Bargh took a further step, asserting that automaticity pervades the information-processing system, such that automatically evoked mental representations automatically generate corresponding motives, which in turn automatically generate corresponding behaviors (Bargh, 1990; Bargh & Gollwitzer, 1994). Thus, merely reading words related to rudeness or politeness can affect whether a subject will interrupt the experimenter's conversation, while reading words related to the elderly stereotype will lead subjects to walk more slowly down the hall (Bargh, Chen, & Burrows, 1996; see also Ferguson & Bargh, 2004).
In a chapter describing "The Automaticity of Everyday Life" (1997), Bargh continued to expand the role of automatic processes:
"[T]he more we know about the situational causes of psychological phenomena, the less need we have for postulating internal conscious mediating processes to explain these phenomena.... [I]t is hard to escape the forecast that as knowledge progresses regarding psychological phenomena, there will be less of a role played by free will or conscious choice in accounting for them.... That trend has already begun..., and it can do nothing but continue (Bargh, 1997a, p. 1).
Later in the same chapter, Bargh asked, "Is Consciousness Riding into the Sunset?":
"Automaticity pervades everyday life, playing an important role in creating the psychological situation from which subjective experience and subsequent conscious and intentional processes originate... (p. 50).
Actually, in the typical Western, the hero rides into the sunset only after rescuing the sheriff, vanquishing the villain, and kissing the girl -- a pretty good situation. The image Bargh really seems to have in mind is of the sun setting on consciousness -- or, perhaps, of consciousness set adrift on an ice floe, like an elderly Eskimo, floating out to sea. But just in case the reader missed the message, Bargh quickly repeats it:
"I emphatically push the point that automatic, nonconscious processes pervade all aspects of mental and social life, in order to overcome what I consider dominant, even implicit, assumptions to the contrary (p. 52).
In response to criticism that he might have overestimated the role of automatic processes in social interaction, Bargh (1997b) initially conceded that his "insinuation" that "conscious involvement is... entirely absent" from social interaction might have been "more tactical than sincere" (p. 231). Nevertheless, at the end of that same paper, he reasserted the overwhelming dominance of unconscious automaticity over conscious control:
Bloodied but unbowed, I gamely concede that the commentators did push me back from a position of 100% automaticity -- but only to an Ivory soap bar degree of purity in my beliefs about the degree of automaticity in our psychological reactions from moment to moment (p. 246).
For those who are too young to get the reference, the implication is that social cognition and behavior are 99.44% automatic.
Thus, it is no surprise that Bargh has continued to assert "The Unbearable Automaticity of Being":
[M]ost of a person's everyday life is determined not by their conscious intentions and deliberate choices but by mental processes that are put into motion by features of the environment and that operate outside of conscious awareness and guidance (Bargh & Chartrand, 1999, p. 462).
At the same time, Bargh seemed to admit to a softening of his views, allowing that social-psychological processes involve "a complex interplay between both controlled (conscious) and automatic processes" (p. 601).
But a 2013 paper by Huang and Bargh reasserted the automaticity principle, beginning with doubts about conscious control, and asserting the power of situational influences and the limits of introspective access. While acknowledging that "dual-process" models allowed for conscious control as well as automaticity, they argued that unconscious processes were the predominant influence on how a person perceives the world and how that person behaves.
And a 2014 paper seemed to contain a full-throated reassertion of the power of automatic processes. Bargh is not alone in believing that automatic processes dominate experience, thought, and action, relegating deliberate, conscious activity to the sidelines.
For example, although Wegner and Schneider (1989) described a "war of the ghosts in the mind's machine" between automatic and controlled processes, they also suggested that the former tended to win out over the latter:
"When we want to brush our teeth or hop on one foot, we can usually do so; when we want to control our minds, we may find that nothing works as it should.... Even William James, that champion of all things mental, warned that consciousness has the potential to make psychology no more than a tumbling-ground for whimsies" (p. 288).
So great was their enthusiasm for unconscious, automatic processes that these authors actually misquoted James. Here he is in full, criticizing von Hartmann (1868/1931) precisely for taking the position advocated by Wegner and Schneider -- that unconscious processes rule the universe:
"[T]he distinction between the unconscious and the conscious being of the mental state is the sovereign means for believing what one likes in psychology, and of turning what might become a science into a tumbling-ground for whimsies (James, 1890/1980, p. 163, emphasis original).
Given that this passage occurs in the context of James' 10-point critique of the notion of unconscious thought, it is clear that James considered unconscious processes, not conscious ones, to be the "tumbling-ground for whimsies".
Nevertheless, Wegner published a book entitled The Illusion of Conscious Will, whose argument he summarized as follows:
"[T]he real causal mechanisms underlying behavior are never present in consciousness. Rather, the engines of causation operate without revealing themselves to us and so may be unconscious mechanisms of mind. Much of the recent research suggesting a fundamental role for automatic processes in everyday behavior (Bargh, 1997) can be understood in this light. The real causes of human action are unconscious, so it is not surprising that behavior could often arise -- as in automaticity experiments -- without the person's having conscious insight into its causation" (D.M. Wegner, 2002, p. 97) (see also D.M. Wegner & Wheatley, 1999).
Wegner's book included a diagram depicting an "actual causal path" between the "unconscious cause of thought" and "thought", and another between the "unconscious cause of action" and "action", but only an "apparent causal path" between thought and action.
Similarly, Wilson has suggested that conscious processing may be maladaptive because it interferes with unconscious processes that are more closely tuned to the actual state of affairs in the outside world:
"...Freud's view of the unconscious was far too limited. When he said... that consciousness is the tip of the mental iceberg, he was short of the mark by quite a bit -- it may be more the size of a snowball on top of that iceberg. The mind operates most efficiently by relegating a good deal of high-level, sophisticated thinking to the unconscious.... The adaptive unconscious does an excellent job of sizing up the world, warning people of danger, setting goals, and initiating action in a sophisticated and efficient manner. It is a necessary and extensive part of a highly efficient mind (2002, pp. 6-7) (for a critique, see Kihlstrom, 2004b).
The automaticity juggernaut has ranged well beyond the confines of academic psychology. Summarizing much of this research and theory, Sandra Blakeslee, a science correspondent for the New York Times, informed her readers that "in navigating the world and deciding what is rewarding, humans are closer to zombies than sentient beings much of the time" (February 19, 2002). More recently, and drawing largely on Gilbert's and Wilson's work, Malcolm Gladwell, a staff writer for the New Yorker, has written a trade book, Blink, touting the virtues of "thinking without thinking" (Gladwell, 2005).
"The part of our brain that leaps to conclusions... is called the adaptive unconscious, and the study of this kind of decision making is one of the most important new fields in psychology. The adaptive unconscious is not to be confused with the unconscious described by Sigmund Freud, which was a dark and murky place filled with desires and memories and fantasies that were too disturbing for us to think about consciously. This new notion of the adaptive unconscious is thought of, instead, as a kind of giant computer that quickly and quietly processes a lot of the data we need in order to keep functioning as human beings (p. 11).
As this chapter was being finished, Gladwell's book had been on the New York Times non-fiction best-seller list for almost 18 months, attesting to the popularity of the concept of automaticity. It has also drawn a stern retort from Michael LeGault, entitled Think!: Why Crucial Decisions Can't Be Made in the Blink of an Eye:
"Predictably, as if filling a growing market niche, a new-age, feel-good pop psychology/philosophy has sprung up to bolster the view that understanding gleaned from logic and critical analysis is not all that it's cracked up to be.... In Blink, Mr. Gladwell argues that our minds possess a subconscious power to take in large amounts of information and sensory data and correctly size up a situation, solve a problem, and so on, without the heavy, imposing hand of formal thought (p. 8).
Gladwell's book has also inspired a parody from the pseudonymous Noah Tall, entitled Blank: The Power of Not Actually Thinking At All:
The part of our brain that leaps to conclusions that are reached without any thinking involved is called the leapative concluder or, in some circles, the concussive unconscious, because the unexpected hunches that suddenly slam into the brain of those who are receptive to unexpected hunches often feel exactly like being hit on the head by a heavy iron frying pan with a nonstick cooking surface.... The only reason humans have survived as long as we have despite our forgetfulness, laziness, and downright stupidity is because that tiny frying pan in our head hits us upside the unconscious when our conscious is goofing off (Tall, 2006, pp. 7-8).
Experimental evidence indicates that automatic processes play some role, under some conditions, in social cognition and behavior. On this much we can agree. But what might be called the Doctrine of Automaticity goes way beyond such restricted conclusions to assert that automatic processes pervade human experience, thought, and action; conscious awareness is largely an afterthought; and conscious control is an illusion. Humans are, in this view, a special class of zombies, virtual automatons who are conscious, as La Mettrie had argued, but for whom consciousness plays little or no functional role in thought and action. The purpose of consciousness is to erect personal theories about why things happen as they do, and why we do what we do. But, on this view, consciousness is largely irrelevant to what actually goes on. Bargh puts the point concisely:
"As Skinner argued so pointedly, the more we know about the situational causes of psychological phenomena, the less need we have for postulating internal conscious mediating processes to explain these phenomena (Bargh, 1997a p. 1).
Of course, the progress of science will by its very nature correct popular misunderstandings of how the world works, and occasionally reveal surprising, even unpleasant, truths about ourselves. Sigmund Freud famously situated himself in line with Copernicus, who taught us that Earth is not at the center of the universe, and Darwin, who taught us that humans are creatures of nature just like any other. For Freud, the third blow against "human megalomania" was his discovery (as he claimed it was) that conscious experience, thought, and action were determined by unconscious, primitive drives:
[H]uman megalomania will have suffered its third and most wounding blow from the psychological research of the present time which seeks to prove to the ego that it is not even master in its own house, but must content itself with scanty information of what is going on unconsciously in the mind (Freud, 1915-1917/1961-1963, p. 285) (see also Bruner, 1958).
Bargh has explicitly situated himself in this line of scientific progress, substituting for Freud's irrational "monsters from the Id" a view of humans as operating not necessarily irrationally, but whether rational or not, operating mostly on automatic pilot, uninfluenced by conscious deliberation: "[W]e are not as conscious, or as free, as we thought we were" (Bargh, 1997a, p. 52). Henceforth, we must live with "the unbearable automaticity of being" (Bargh & Chartrand, 1999).
Like Bargh, Wegner and Smart (1997) also replaced Freud's third discontinuity, substituting automaticity for irrationality. For the record, there also seems to be a fourth discontinuity, between humans and machines, which some visionaries, like Mazlish (1993) and Kurzweil (1999) see as being erased by advances in artificial intelligence. Of course, the idea that humans are simply machines -- if machines made of meat -- is entirely consonant with the idea that human experience, thought, and action are the product of unconscious processes operating automatically.
It would be one thing if the Doctrine of Automaticity were backed by sound scientific evidence. Then, we would have no choice but to shrug our shoulders, cast off our sentimental beliefs in conscious control and free will, and find some way to bear "the unbearable automaticity of being", just as we have learned to live with the knowledge that the Earth is not the center of the universe, and that humans are not the products of Special Creation. But in fact, the Doctrine of Automaticity is not true -- or, at least, it is not backed by sound scientific evidence. There are at least three reasons for thinking that the Third Discontinuity -- at least the one erased by Bargh and Wegner (never mind Freud) -- is not quite ready to be expunged.
The first reason, paradoxically, is that the theoretical underpinnings of the concept of automaticity have begun to unravel (G.D. Logan, 1997; Moors & De Houwer, 2006; Pashler, 1998). In particular, the resource theories of attention on which the concept was originally based have come into question. For example, there does not seem to be a single pool of attentional resources. Nor does even extensive practice with a task render its performance effortless. There are even some data suggesting that attentional capacity is not limited -- or, at least, that its limits are very wide indeed. As noted earlier, alternative theories of automaticity have been proposed, particularly theories based on memory rather than attention. These revisionist theories preserve the legitimacy of the concept of automaticity, but tend to undercut the various features by which automatic processes are recognized. So, for example, in Anderson's (1992) proceduralization view, automatic processes are engaged only when an appropriate cue is presented in the context of a particular goal state; and in Logan's (2002) instance-based theory, automatic processes are only evoked if the subject has the appropriate mental set. Nor, once evoked, do processes proceed to conclusion in a ballistic fashion.
One response to this state of affairs is to abandon the assumption that the distinction between automatic and controlled processes is a qualitative, all-or-none matter; rather, it is argued, automaticity varies by degrees (Bargh, 1989, 1994). This response is fine, and almost certainly correct, but it has the unfortunate consequence of making it difficult to know precisely when a process is automatic, and when it is not. What happens, for example, if a process seems to run off unintentionally, but nevertheless consumes attentional capacity? And, of course, the concession that some tasks are performed more or less automatically undercuts the fundamental message of "the automaticity of social life" (Bargh & Williams, 2006).
Moreover, it should be noted that the shift to a continuous view of automaticity has been accompanied by a certain slippage in the operationalization of the concept in psychological experiments. For example, in his earliest research Bargh employed a dichotic listening task (Bargh, 1982) or parafoveal presentation (Bargh & Pietromonaco, 1982) in an effort to conform to a relatively strict operational definition of automaticity. Similarly, Fazio et al. (1986) and Devine (Devine, 1989) employed extremely short prime-target intervals, in an attempt to prevent their subjects from employing controlled processes. But in more recent work, such strictures are often abandoned. For example, Bargh and his colleagues have presented words in subjects' clear view, and asked them to pronounce them (Bargh, Chaiken, Raymond, & Hymes, 1996), or to assemble them into sentences (J. A. Bargh et al., 1996) -- tasks that would seem to involve conscious processing. Granted, in these cases the subjects were not specifically instructed to process the relevance of the words to certain attitudes and stereotypes, thus approximating the unintentional nature of automatic processing. But this reliance on only a single feature is a considerable departure from the concept of automaticity as it was originally set out in cognitive psychology.
In fact, within social psychology the concept of automaticity seems to be invoked whenever subjects engage in processing that is incidental to the manifest task set for them by the experimenter -- whether this is shadowing text, detecting visual stimuli, pronouncing words, or assembling sentences. But just because something is done incidentally does not necessarily mean that it has been performed unintentionally, much less automatically. In many situations, subjects may have plenty of processing capacity left over, after the manifest task has been performed; and they may use some of it, quite deliberately, to perform other tasks that interest them -- such as critically analyzing the experiment's cover story, or speculating about the experimenter's true hypotheses (Orne, 1962, 1973).
Most critically, the social-psychological literature on automaticity rarely contains any actual comparison of the strength of automatic and controlled processes. These were features of some of the earliest experiments on automaticity: in studies already described, for example, Fazio et al. (1986) and Devine (1989) also employed relatively long prime-target intervals in their experiments, in an attempt to compare the effects of automatic and controlled processing. Within cognitive psychology, there has been considerable interest in developing techniques such as the process-dissociation procedure (PDP) (L. L. Jacoby, 1991) to directly compare the contributions of automatic and controlled processes to task performance. For example, Jacoby and his colleagues (1997) showed convincingly that successful recognition was mediated mostly by controlled retrieval in young subjects, but mostly by automatic familiarity in the elderly. The PDP has its critics (e.g., Curran & Hintzman, 1995), but the point is that cognitive psychologists tend to assume that both automatic and controlled processes contribute to task performance, and try to disentangle them. By contrast, an increasingly popular view within social psychology is that automatic processes dominate, and controlled processes are largely irrelevant.
Bargh is a leader of the automaticity movement within social psychology, but he has a large number of compatriots and followers.
Chief among these was Daniel Wegner, who argued that conscious will is an illusion, and plays no causal role in either thought or behavior. Just so he won't be misunderstood, Wegner presents a diagram of the relations between unconscious thought, conscious thought, and action.
Here's what I had to say about this idea in Behavioral and Brain Sciences (2004):
In his Meditations of 1641, Descartes asserted that consciousness, including free will, sharply distinguished man from beast, and thus initiated the modern philosophical and scientific study of the mind. As time passed, however, philosophers of a more materialist bent began denying this distinction, most visibly Julien Offray de la Mettrie, whose Man a Machine (1748) claimed that humans were conscious automata, and Shadworth Holloway Hodgson, whose The Theory of Practice (1870) introduced the term epiphenomenalism. Although materialist monism was highly attractive to those who would make a science of psychology, William James, in his Principles of Psychology (James, 1890/1980, p. 141), dismissed "the automaton-theory" as "an unwarrantable impertinence in the present state of psychology" (italics original).
James was clearly committed to a causal role for consciousness, and thus for free will, but his statement implied a willingness to alter his view, as warranted, as psychology advanced. And, indeed, the behaviorist revolution carried with it a resurgence in the automaton theory, reflected in Watson's emphasis on conditioned reflexes and Skinner's emphasis on stimulus control (Tolman's purposivist interpretation of learning was an exception). On the other hand, the cognitive revolution implied an acceptance of James' functionalist view: the primary reason to be interested in beliefs, expectations, and mental representations is that they have some causal impact on what we do. In fact, modern cognitive psychology accepts a distinction between automatic and controlled mental processes (e.g., Logan, 1997; Shiffrin & Schneider, 1984): automatic processes are inevitably evoked following the presentation of some cue, incorrigibly executed, consume little or no cognitive capacity, and are strictly unconscious; controlled processes lack these properties, and are -- although many scientific psychologists do not like to use the term -- reflections of "conscious will".
To many of us, this seems to be a perfectly reasonable compromise, but Wegner's book appears to be a reassertion of the automaton-theory in pure form. Its very first chapter argues that "It usually seems that we consciously will our voluntary actions, but this is an illusion" (Wegner, 2002, p. 1). Just to make his point clear, Wegner offers (Figure 3.1, p. 68) a diagram showing an "actual causal path" between an unconscious cause of action and conscious action, and another "actual causal path" between an unconscious cause of thought and conscious thought, but only an "apparent causal path" (italics original) -- the experience of conscious will -- between conscious thought and conscious action. And he concludes with Albert Einstein's image of a self-conscious but deluded moon, blithely convinced that it is moving of its own accord. In Wegner's view, apparently, we are conscious automata after all.
Wegner musters a great deal of evidence to support his claim that our experiences of voluntary and involuntary action are illusory, including an entire chapter devoted to hypnosis. In fact, Wegner goes so far as to note that "hypnosis has been implicated in many of the curiosities of will we have discussed" (p. 272). Certainly it is true that hypnotic subjects often feel that they have lost control over their percepts, memories, and behaviors. This quasi-automatic character of hypnotic experiences, bordering on compulsion, even has a special name: the classic suggestion effect (Weitzenhoffer, 1974). However, I think that Wegner's interpretation of this effect is off the mark. In my experience, hypnotized subjects do not experience a "transfer of control to someone else" (p. 271) -- namely, the hypnotist. Rather, they typically experience the phenomena of hypnosis as happening by themselves. This experience of involuntariness is what distinguishes a hypnotic hallucination from a simple mental image, and posthypnotic amnesia from simple thought suppression. The experience of involuntariness is not the same as the transfer of control. Hypnotized subjects claim their involuntary behavior as their own, even as they experience it as involuntary -- which is why it can persist when the suggestion is canceled, in contrast to behavior under the control of an experimenter's verbal reinforcement (Bowers, 1966; Bowers, 1975; see also Nace & Orne, 1970).
Of course, this nonconscious involvement (Shor, 1959, 1962) is illusory. As Shor (Shor, 1979) noted, "A hypnotized subject is not a will-less automaton. The hypnotist does not crawl inside a subject's body and take control of his brain and muscles". Even posthypnotic suggestion, the classical exemplar of hypnotic automaticity, lacks the qualities associated with the technical definition of automaticity....
Although there are a few dissenters (Kirsch & Lynn, 1997, 1998a, 1998b; Woody & Bowers, 1994; Woody & Sadler, 1998), most theorists of hypnosis, whatever their other disagreements, agree that the experience of involuntariness in response to hypnotic suggestions is in some sense illusory....
In fact, most of the other phenomena described at length by Wegner, such as the Chevreul pendulum, automatic writing, the Ouija board, and even facilitated communication, have this quality: behavior that is experienced by the individual as involuntary is actually voluntary in nature. Documenting this illusion would make for an interesting book, as indeed it has (Spitz, 1997). But Wegner puts this evidence to a different rhetorical use -- he tries to convince us, by citing examples of illusory involuntary behavior, that our experience of voluntary behavior is illusory as well. Logically, of course, this does not follow. Of course, there exist illusions of control as well (Alloy, Albright, Abramson, & Dykman, 1989), but even these do not justify the strong conclusion that all experiences of voluntariness are illusory.
Given that the evidence for an illusion of voluntariness is weak, the rationale for Wegner's claim must be found elsewhere -- in theory, or perhaps in ideology. In this respect, Wegner's book can be viewed in the context of a trend in contemporary social psychology that I have come to call the automaticity juggernaut: the widespread embrace of the view that, even with respect to complex social cognition and behavior, we are conscious automatons whose experiences, thoughts, and actions are controlled by environmental stimuli -- just like Skinner said they were (Bargh, 1997; Bargh & Chartrand, 1999; Bargh & Ferguson, 2000; Wegner & Bargh, 1998). The idea that the experience of conscious will is illusory follows naturally from this emphasis on automaticity, which has its roots in the situationism that has infected social psychology almost from its beginnings as an experimental science (Kihlstrom, 2003). But based on the evidence mustered by Wegner, the "illusion of conscious will" seems now, as it did to James more than a century ago, to be an "unwarrantable impertinence".
Automaticity is all the rage in contemporary social psychology, as increasing numbers of social psychologists adopt some combination of these views. With apologies to Alexander Dubcek (1968) and Susan Sontag (1982), I call this behaviorism with a cognitive face.
One has to wonder: we had a cognitive revolution for this -- to be told that Skinner had it right after all? Reading the social-psychological literature on automaticity, one might almost wonder why we bothered.
To some extent, the automaticity juggernaut reflects the ambivalence with which the idea of consciousness is held in cognitive psychology and cognitive science. In addition, it seems to reflect a sincere belief, on the part of Bargh himself, that a scientific explanation of behavior must be deterministic, leaving no room for anything like conscious choice. The implication is that conscious will is epiphenomenal -- that is, it does not play any causal role in the world of physical objects and fields of force. This view is exemplified by the conscious-automaton theory offered by T.H. Huxley, "bulldog" defender of the theory of evolution by natural selection.
But there are also other sources of the automaticity juggernaut.
All in all, one shakes one's head at the zeal with which some social psychologists have jumped on the automaticity bandwagon. There is a kind of schadenfreude, I think, when someone like Bargh takes it as his "sad duty" to report that a fondly held view of human nature is, in fact, a myth. And that is precisely what he does: in one of his papers, Bargh takes up Freud's account of his own contribution to the progress of science, discussed above.
Applying the Process-Dissociation Procedure
Which is where Jacoby's process-dissociation procedure (PDP) comes in. Although still somewhat controversial, the PDP remains the most widely used means of estimating the relative contributions of automatic and controlled processes to task performance. It has been widely used in cognitive psychology, but only rarely in social psychology. When it has been used, however, the results have been revealing -- and reassuring.
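The arithmetic behind the PDP is simple. On an Inclusion test, a subject can respond correctly either through conscious recollection (C) or, when recollection fails, through automatic familiarity (A): I = C + A(1 - C). On an Exclusion test, where subjects are instructed to oppose the automatic influence, a response slips through only when familiarity operates in the absence of recollection: E = A(1 - C). Subtracting gives C = I - E, and then A = E / (1 - C). A minimal sketch of these standard estimation equations (Jacoby, 1991), using hypothetical proportions purely for illustration:

```python
def pdp_estimates(inclusion, exclusion):
    """Estimate the contributions of controlled (C) and automatic (A)
    processing from Inclusion and Exclusion test performance,
    following Jacoby's (1991) process-dissociation equations:

        Inclusion:  I = C + A * (1 - C)
        Exclusion:  E = A * (1 - C)
    """
    c = inclusion - exclusion   # controlled contribution (recollection)
    a = exclusion / (1 - c)     # automatic contribution (familiarity)
    return c, a

# Hypothetical proportions, for illustration only
c, a = pdp_estimates(inclusion=0.60, exclusion=0.20)
print(round(c, 2), round(a, 2))  # prints: 0.4 0.33
```

Note that when C is high relative to A, task performance is dominated by controlled processing -- which is exactly the pattern the studies described below tend to find.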
One of the earliest applications of the PDP concerned the false fame effect. Jacoby and his colleagues (1989) asked subjects to study a list of non-famous names, such as Sebastian Weisdorf, followed by a memory test. One day later, they were given a long list of names, some clearly famous and others not, and asked simply to judge which of them were the names of famous people. Among the non-famous names on the list were names from the list that had been memorized 24 hours earlier. The chief finding of the experiment was that the previously studied non-famous names were now more likely to be judged famous. Jacoby argued, reasonably, that the initial study session primed the non-famous names, so that when they appeared on the later fame-judgment task they "rang a bell", and that this increased familiarity was interpreted as evidence of fame. And, of course, the priming was interpreted as automatic in nature.
But was it? In a later study, Jennings and Jacoby (1993) applied the PDP to the false fame effect. The experiment was run as before, except that this time there was both an Inclusion and an Exclusion task (we won't get bogged down in the details of how the Method of Opposition was actually implemented). Moreover, the entire study was run under three experimental conditions: first, they compared full attention (no distraction) to divided attention (where the subjects performed a distracting task while they made fame judgments, in order to restrict cognitive resources); second, they repeated the full-attention condition with a group of elderly subjects, to compare with the college students.
The results were revealing. In the full-attention condition, which was comparable to the original demonstration of the false fame effect, controlled processing was more influential than unconscious, automatic processing. The role of conscious, controlled processing was diminished in the divided-attention condition (naturally, because of the more limited cognitive resources available), and among the elderly (because of age-related declines in cognitive capacity). But even under these conditions, automatic processing did not dominate conscious processing. It would be more accurate to say simply that the false fame effect was produced by a mix of conscious and unconscious processing.
Similar findings were obtained in an experiment on spontaneous trait inferences. Uleman and his colleagues have performed a number of experiments in which subjects studied the photographs of strangers, each of which was paired with a simple behavioral description (such as "Jane gave a dollar to the beggar"). Two days later, they were presented with a larger set of photos, including some of the photos studied previously, and asked to make judgments about the personalities of the people depicted. The finding was that targets depicted in the old photos tended to receive trait attributions in line with the behavioral descriptions that had accompanied the photos studied on the first day. For example, subjects tended to describe "Jane" as kind. The interpretation was that, in line with what is called the fundamental attribution error, subjects automatically attribute behavior to a person's personality traits and other internal characteristics, rather than to the situation or some other external factor. So, presentation of the behaviors automatically primed traits, which were then attached to the people depicted in the photos. When the same photo appeared two days later, the subject automatically retrieved the corresponding trait information from memory.
But did it? To their credit, Uleman and his colleagues (2005) applied the PDP to their experimental paradigm, running subjects under both Inclusion and Exclusion instructions. In one condition, the subjects made trait judgments immediately after studying the photographs; in other conditions, the trait judgments were delayed by 20 minutes or 2 days (as in the initial experiment). In the "immediate" condition, in fact, conscious processing proved to be more influential than automatic processing. The role of conscious processing was reduced after a 20-minute or 2-day delay, reflecting the time-related decline of conscious recollection. But even in these conditions, task performance was mediated by a pretty even mix of conscious and unconscious processing.
Perhaps the most revealing of these PDP studies is a pair of experiments by Payne and his colleagues concerning the weapon bias. In these experiments, subjects are shown a brief video clip, and they have to judge whether the person in the clip is holding a weapon (a knife or a gun, for example) or a tool (like a wrench or a TV remote). Before performing this task, however, the (white) subjects are shown the face of a black or white person. The finding is that the white subjects are more likely to misidentify tools as guns when they have been primed by a black face. Moreover, they are faster to correctly identify the object as a gun after they have been primed with a black face, and faster to correctly identify the object as a tool after they have been primed with a white face. The interpretation is that the presentation of the faces automatically primed racial stereotypes in these white college students, including the stereotypical association of black people with crime and violence, and that this stereotype automatically biased their weapon judgments. But did it? It depends.
In his initial experiment, Payne (2001) applied the Method of Opposition in his standard experimental paradigm, in which there was no deadline, and subjects could take their time making their judgments. In this case, controlled, conscious processing proved to be more important than automatic processing.
In a later study, Payne and his colleagues imposed a deadline on the subjects, forcing them to make their decision, weapon or tool, within 500 msec of the video. This was done to more closely simulate the situation of a police officer who must make a split-second decision about whether an object in a suspect's hand is a weapon. Under these circumstances, where subjects had to make very rapid decisions, automatic processes dominated; but there was still a healthy amount of controlled processing in the mix.
Applying the QUAD Model to the IAT
A technique similar to the process-dissociation procedure has been used to separate automatic and controlled components of performance on the Implicit Association Test (IAT), devised by Greenwald, Banaji, and their colleagues. As described in the lectures on Social Categorization, the IAT makes use of stimulus-response (in)compatibility to assess people's unconscious, or at least unacknowledged, attitudes toward things both mundane (like insects vs. flowers) and monumental (like Blacks vs. Whites or Koreans vs. Japanese). The general idea is that subjects' performance on the IAT is influenced by automatic associations between concepts, such as flowers-pleasant or insects-unpleasant. Seeing an insect automatically activates the associated attitude bad, while seeing a flower automatically activates the association good, and these automatically activated associations then lead automatically to prejudicial behavior toward the attitude objects.
At least, that's the theory. In fact, Greenwald and Banaji offer little direct evidence that automatic processes underlie performance on the IAT. Mostly, they show only that scores on the IAT are (in their view) relatively poorly correlated with conscious attitudes, as measured by techniques such as an attitude thermometer. Recently, Jeff Sherman and his colleagues at UC Davis have proposed the QUAD model for the analysis of automatic and controlled components of task performance.
[Figure: an example of how the QUAD model can be applied to analyze performance on the Black-White version of the IAT. It's complicated -- a lot more complicated than ...]
And when Conrey et al. performed the QUAD analysis, employing White students at UC Davis, they found that controlled processing dominated performance under standard conditions, with no time limitations on response. The D and OB parameters were both much larger than the AC and G parameters.
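The QUAD model's four parameters form a multinomial processing tree. As I read Conrey et al. (2005), AC is the probability that an association is automatically activated, D the probability that the correct response can be determined, OB the probability that an activated bias is overcome, and G a guessing parameter. A hedged sketch of the tree's predicted response probabilities (the parameter values below are hypothetical, for illustration only):

```python
def quad_p_correct(ac, d, ob, g, compatible):
    """Predicted probability of a correct IAT response under the QUAD
    processing tree (after Conrey et al., 2005).
      ac: P(association automatically activated)
      d : P(correct response can be determined)
      ob: P(activated bias is overcome)
      g : P(guessing the correct key when nothing else applies)
    """
    if compatible:
        # On compatible trials the activated association already points
        # to the correct key, so OB never comes into play.
        return ac + (1 - ac) * d + (1 - ac) * (1 - d) * g
    # On incompatible trials the activated association pulls toward the
    # wrong key and must be detected and overcome.
    return ac * d * ob + (1 - ac) * d + (1 - ac) * (1 - d) * g

# Hypothetical parameter values, not fitted estimates:
print(quad_p_correct(0.2, 0.6, 0.5, 0.5, compatible=True))   # ~0.84
print(quad_p_correct(0.2, 0.6, 0.5, 0.5, compatible=False))  # ~0.70
```

In practice, the four parameters are estimated by fitting these predicted probabilities to the observed error rates across the IAT's trial types (maximum-likelihood fitting of the multinomial tree). Saying that "controlled processing dominated" amounts to saying that the fitted D and OB parameters exceed AC and G.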
[Sidebar fragment: "... et al. performed a similar study, with White students at the University of Texas. Here the ... reduced, but it was still greater than ... is supposed to be revealed by errors in IAT ... by and large ... That is, most ... they may harbor."]
So, the bottom line is that automaticity plays some role in social cognition and behavior, as we would expect -- because automaticity plays some role in everything we do. The relative impact of automaticity is increased under conditions that militate against conscious processing, such as distraction (which consumes attentional resources), long retention intervals (which induce forgetting) or short response windows (which preclude conscious processes from coming into play). And even under ordinary circumstances, there is enough "mindlessness" in the ordinary course of everyday living to convince us that there's something to this automaticity business.
But the assertion that "humans are closer to zombies than sentient beings much of the time" (Blakeslee, 2002) is wide of the mark. There's just no evidence to support any such belief.
Critique of the Critique of Conscious Will
The social-psychological emphasis on automaticity underlies yet another threat to reason in moral psychology -- namely, a critique of the concept of conscious will itself. You don't have to think about things too hard to understand that the very concept of moral judgment depends on the freedom of the will. Neither concept applies in the natural world of planets and continents and lions and gazelles, where events are completely determined by events that went before. Moral judgment applies only when the target of the judgment has a real choice: the freedom to choose among alternatives, where those choices make a difference to behavior. The problem of free will, of course, is that we understand that we are physical entities: specifically, the brain is the physical basis of mind; and the brain, as a physical system, is not exempt from the physical laws that determine everything else that goes on in the universe; and so neither are our thoughts and actions. So the problem of free will is simply this: how do we reconcile our conscious experience of freedom of the will with the sheer and simple fact that we are physical entities existing in a universe that consists of particles acting in fields of force?
Philosophers have debated this problem for a long time -- at least since materialism began to challenge Cartesian dualism. Compatibilists argue that the experience of free will is compatible with physical determinism, while incompatibilists argue that it is not, and that we must reconcile ourselves to the fact that we are not, in fact, free to choose what to do and what to think. Those incompatibilists who have read a little physics may make a further distinction between the clockwork determinism of classical Newtonian physics and the pinball determinism of quantum theory, perhaps invoking Heisenberg's observer effect and uncertainty principle (they're apparently not the same thing) as well; but injecting randomness and uncertainty into a physical system is not the same as giving it free will, so the problem remains where it was.
Psychologists, too, have entered the fray: those of a certain age, like me, will remember the debate between Carl Rogers and B.F. Skinner over the control of human behavior (Rogers & Skinner, 1956; Wann, 1964). These days, many psychologists appear to come down on the side of incompatibilism, arguing essentially that free will is an illusion -- a necessary illusion, if we are to live in a society governed by laws, but an illusion nonetheless.
[Sidebar fragment: "... case in point ..., in which Dan Wegner asserts that "the ... behavior are never ..." (p. 97). Just to make his meaning clear, he presents the reader with a ... thought and action ... with the "actual ..." ... to an "unconscious cause of action"."]
More recently, Mike Gazzaniga (2011, pp. 105-106) has picked up on the theme, writing that the "illusion" of free will is so powerful that "we all believe we are agents... acting willfully and with purpose", when in fact "we are evolved entities that work like a Swiss clock" (no pinball determinism for him!). To illustrate his point, he recounted an instance in which, while walking in the desert, he jumped in fright at a rattlesnake: he "did not make a conscious decision to jump and then consciously execute it" -- that was a confabulation, "a fictitious account of a past event"; rather, "the real reason I jumped was an automatic nonconscious reaction to the fear response set into play by the amygdala" (pp. 76ff).
Similarly, Sam Harris (2012), a neuroscientist who burst on the scene with a vigorous critique of religion, has weighed in with a critique of free will, arguing, like Wegner, that free will is simply an illusion. "Our wills are simply not of our own making. Thoughts and intentions emerge from background causes of which we are unaware and over which we exert no conscious control."
This argument isn't just inside baseball. In its March 23, 2012 issue, the Chronicle of Higher Education published a forum entitled "Free Will is an Illusion", with a contribution by Mike Gazzaniga; the May 13, 2012 issue of the New York Times carried an Op-Ed piece by James Atlas entitled "The Amygdala Made Me Do It"; and the May-June 2012 issue of Scientific American Mind featured a cover story by Christof Koch detailing "How Physics and Biology Dictate Your 'Free' Will". These aren't the only examples, so something's happening here. What we might call psychological incompatibilism is beginning to creep into popular culture. Which, like moral intuitionism, is OK if it's true. The question is: Is it true?
Wegner, Gazzaniga, and Harris are inspired, in large part, by a famous experiment performed by the late Benjamin Libet, a neurophysiologist, involving a signal known as the readiness potential (Libet, Gleason, Wright, & Pearl, 1983). When someone makes a voluntary movement, an event-related potential appears in the EEG about 600 milliseconds beforehand.
Libet added to this experimental setup what he called the "clock-time" method. Subjects viewed a light which revolved around a circle at a rate of approximately one revolution every 2.5 seconds; they were instructed to move their fingers anytime they wanted, but to use the clock to note the time at which they first became aware of the wish to act.
Libet discovered that the awareness of wish preceded the act by about 200 msec -- not much of a surprise there. But he also discovered that the readiness potential preceded the awareness of the wish by about 350 msec (200 + 350 = c. 600 msec). So there is a second type of readiness potential, which Libet characterized as a predecisional negative shift. Libet concluded that the brain decides to move before the person is aware of the decision, which manifests itself as a conscious wish to move. Put another way, behavior is instigated unconsciously (Wegner's "unconscious cause of action"), conscious awareness occurs later, as a sort of afterthought, and conscious control serves only as a veto over something that is already happening. In other words, conscious will really is an illusion, and we are nothing more than particles acting in fields of force after all.
Libet's observation of a predecisional negative shift has been replicated in other laboratories, but that does not mean that his experiment is immune to criticism or that his conclusions are correct (for extended discussions of Libet's work, including replies and rejoinders, see Banks & Pockett, 2007; Libet, 1985, 2002, 2006). In the first place, there's a lot of variability around those means, and the time intervals are such that the gap between the predecisional negative shift and the conscious wish could be closer to zero. And there are a lot of sources of error, including error in determining the onset of the readiness potential, and error in determining the onset of the conscious wish (as for the latter, think about keeping track of a light that is rotating around a clockface once every 2.5 seconds). Still, that difference is unlikely to be exactly zero, and so the problem doesn't go away.
At a different level, Libet's experiment has been criticized on the grounds of ecological validity. The action involved, moving one's finger, is completely inconsequential, and shouldn't be glibly equated with choosing where to go to college, or whom to marry, or even whether to buy Cheerios or Product 19 -- much less whether to throw a fat man off a bridge to stop a runaway trolley. The way the experiment is set up, the important decision has already been made -- that is, to participate in an experiment in which one is to raise one's finger while watching a clock. And it's made out of view of the EEG apparatus. I find this argument fairly persuasive. But still, there's that nagging possibility that, if we recorded the EEG all the time, in vivo, we'd observe the same predecisional negative shift before that decision was made, too.
More recently, though, Jeff Miller and his colleagues found a way to address this critique. They noted that the subjects' movements are not truly spontaneous, for the simple reason that they must also watch the clock while making them. They compared the readiness potential under two conditions. In one, the standard Libet paradigm, subjects were instructed to watch the clock while moving their fingers, and report their decision time. In the other, they were instructed to ignore the clock, and not asked for any reports. Subjects in both conditions still made the "spontaneous" decision whether, and when, to move their fingers. But Miller et al. observed the predecisional negative shift only when subjects also had to watch the clock and report their decision time. They concluded that Libet's predecisional negative shift was wholly an artifact of the attention paid to the clock. It does not indicate the unconscious initiation of ostensibly "voluntary" behavior, nor does it show that "conscious will" is illusory. Maybe it is, but the Libet experiment doesn't show it.
The Miller experiment is important enough that we'd like to see it replicated in another laboratory, though I want to stress that there's no reason to think that there's anything wrong with it. When they did what Libet did, they got what Libet got. When they altered the instructions, but retained voluntary movements, Libet's effect disappeared completely -- not just a little, but completely. The ramifications are pretty clear. This doesn't mean that the problem of free will has been resolved in favor of compatibilism, though it does suggest that compatibilism deserves serious consideration. Personally, I like the implication of a paper by John Searle, titled "Free Will as a Problem in Neurobiology" (Searle, 2001). We all experience free will, and there's no reason, in the Libet experiment or any other study, to think that this is an illusion. It may well be a problem for neurobiology, but it's a problem for the neurobiologists to solve. I don't lose any sleep over it. But if free will is not an illusion, and we really do have a meaningful degree of voluntary control over our experience, thought, and action, then moral judgment is secure from this threat as well. We should be willing to make moral judgments, using all the information -- rational and intuitive -- that we have available to us.
Free Will, Within Limits
And that's where I came down in my response to the Templeton Foundation's Big Question: "Does Moral Action Depend on Reasoning?". We are currently in the midst of a retreat from, or perhaps even a revolt against or an assault on, reason. Some of this is politically motivated, but some is aided and abetted by psychologists who, for whatever motive, seek to emphasize emotion over cognition, the unconscious over the conscious, the automatic over the controlled, brain modules over general intelligence, and the situation over the person (not to mention the person-situation interaction).
Moral intuitionism represents a fusion of automaticity and emotion, and like the literature that comprises the "automaticity juggernaut" (Kihlstrom, 2008) it relies mostly on demonstration experiments that reveal that gut feelings can play a role in moral judgments. There is no reason to generalize their findings to what people do in the ordinary course of everyday living. As I wrote in the Templeton essay (pp. 37-38):
Freedom of the will is real, but that does not mean that we are totally free. Human experience, thought, and action are constrained by a variety of factors, including our evolutionary heritage, law and custom, overt social influences, and a range of more subtle social cues [like demand characteristics]. But within those limits we are free to do what we want, and especially to think what we want, and we are able to reason our way to moral judgments and action.
It is easy to contrive thought experiments in which moral reasoning seems to fail us.... When, in (thankfully) rare circumstances, moral reasoning fails us, we must rely on our intuitions, emotional responses, or some other basis for action. But that does not mean that we do not reason about the moral dilemmas that we face in the ordinary course of everyday living -- or that we reason poorly, or that we rely excessively on heuristic shortcuts, or that reasoning is infected by a host of biases and errors. It only means that moral reasoning is more complex and nuanced than a simple calculation of comparative utilities. Moral reasoning typically occurs under conditions of uncertainty... where there are no easy algorithms to follow. If a judgment takes place under conditions of certainty, where the application of a straightforward algorithm will do the job, it is probably not a moral judgment to begin with.
If you believe in God, then human rationality is a gift from God, and it would be a sin not to use it as the basis for moral judgment and behavior. If you do not believe in God, then human rationality is a gift of evolution, and not to use it would be a crime against nature.
Classic social psychology... makes people appear to be automatons. The situational influences on behavior investigated in these [classic] studies were (a) unintended on the part of the individual, (b) not something of which the person was aware, (c) a response to the situation occurring before the individual had a chance to reflect on what to do (i.e., efficient) or (d) difficult to control or inhibit even when the person is cognizant of the influence. As it happens, these are characteristics of automatic psychological processes, not of conscious control, and comprise a handy working definition of automaticity (p. 447).
"If a social psychologist was going to be marooned on a deserted island and could only take one principle of social psychology with him it would undoubtedly be 'the power of the situation'. All of the most classic studies in the early days of social psychology demonstrated that situations can exert a powerful force over the actions of individuals....
"If the power of the situation is the first principle of social psychology, a second principle is that people are largely unaware of the influence of situations on behavior, whether it is their own or someone else's behavior" (Lieberman, 2005, p. 746).
"Now, as the purview of social psychology is precisely to discover those situational causes of thinking, feeling, and acting in the real or implied presence of other people..., it is hard to escape the forecast that as knowledge progresses regarding psychological phenomena, there will be less of a role played by free will or conscious choice in accounting for them. In other words, because of social psychology's natural focus on the situational determinants of thinking, feeling, and doing, it is inevitable that social psychological phenomena will be found to be automatic in nature."
"This is the way it needs to be for progress in the explanation of human psychology. The agent self cannot be a real entity that causes actions, but only a virtual entity, an apparent mental causer" (D.M. Wegner, 2005, p. 23).
[T]he same higher mental processes that have traditionally served as quintessential examples of choice and free will -- such as goal pursuit, judgment, and interpersonal behavior -- have been shown recently to occur in the absence of conscious choice or guidance. It would seem, therefore, that the mid-century failure of behaviorism to demonstrate the determinism of complex higher order human behavior and mental processes occurred not because those processes were not determined but rather because behaviorists denied the existence of the necessary intraindividual, psychological explanatory mechanisms... mediating between the environment and those higher processes....
[T]he failure of behaviorism in no way constituted the failure of determinism. We... present the case for the determinism of higher mental processes by reviewing the evidence showing that these processes, as well as complex forms of social behavior over time, can occur automatically, triggered by environmental events and without an intervening act of conscious will or subsequent conscious guidance (p. 926).
This page last revised 10/23/2015.