

University of California, Berkeley
Department of Psychology

Psychology 164
Spring 2000
Midterm Examination

This examination consists of six (6) pairs of questions.
Choose four (4) pairs, and then
answer one (1) question from each of these pairs.
You should only write 1-3 paragraphs for each question.
Write legibly, in complete sentences, in the space provided, and please use ink.
Each question you answer is worth 10 points.
You will also receive ten "free" points, to make up a 50-point exam.

    1a.  Describe Asch's notion of a "central" trait. What makes one trait central, and another trait peripheral? Consider the following three pairs of traits: good-bad, strong-weak, and active-passive. Which is most likely to be a central trait and why?

According to Asch, a central trait is one which greatly affects the overall impression of someone's personality. Thus, it matters a great deal to the overall impression whether one is described as "warm" or "cold", but not so much whether one is described as "polite" or "blunt". Thus, "warm-cold" are central traits, whereas "polite-blunt" are not. Asch viewed central traits from the point of view of Gestalt psychology, believing that individual stimulus elements (like individual items in a trait ensemble) were organized into a unified whole that was "greater than the sum of its parts".

Asch discovered central traits, but he had no idea what made some traits central, and other traits peripheral. Wishner was the first to make headway on this problem: he discovered that the traits which proved to be central in Asch's research were those which had many significant correlations with other traits. By virtue of these correlations, traits like "warm-cold" carry a lot of information about the person who possesses them -- we can infer many other attributes from knowing whether someone is warm or cold. But Wishner only solved part of the problem, because whether a trait is central or peripheral is still a purely empirical matter -- whether it has lots of intercorrelations, or only just a few.

Rosenberg essentially solved this problem with his analyses of implicit personality theory. By examining the pattern of correlations among traits, he discovered that a strong dimension of "good-bad" ran through them (this overall dimension can be broken down further into "intellectual" and "social" good or bad). Traits which correlate highly with this big dimension of evaluation carry a lot of implications about other traits: they are central. Thus, a dimension like "good-bad", which obviously loads strongly on the evaluation dimension, is more likely to encompass central traits than dimensions such as "strong-weak" (potency) or "active-passive" (activity), which are not so closely related to social desirability.

    1b. Following the model of Asch's impression-formation experiments, a number of social cognition researchers have chosen to represent the stimulus in terms of a set of traits. What justifies this choice? What nonverbal social stimulus information is available for perception? Give an example of how we "translate" nonverbal stimulus features into verbal trait labels.

When Asch provided stimulus information for his studies of person perception in the form of lists of traits describing the target person, he did so largely for the sake of convenience: it was simply easier to generate stimulus materials this way. However, the choice can be justified on other than pragmatic grounds: as Fiske and Cox demonstrated, our descriptions of each other are dominated by trait and type terms such as "extraverted" and "extravert". Moreover, our tendency to make correspondent inferences, and our related proclivity toward the fundamental attribution error, also justify this choice -- at least in retrospect, given that Asch didn't know about these phenomena. Thinking about traits comes "naturally", at least to people in Western European culture, and so they were a defensible choice for Asch.

On other grounds, however, a focus on traits may misrepresent the process of person perception, because perception has to do with forming mental representations of the stimulus world, and people don't walk around with lists of their traits pasted to their foreheads. Rather, traits are a kind of intermediate step in impression formation, standing between physical stimulus information and global impressions (such as "good" or "bad"). When we encounter people in the flesh, we "read" their psychosocial characteristics from their physical features, such as facial expressions, posture, gait, gestures, body build, clothing, and the like. So, for example, Ekman has shown that we can "read" (or, perhaps more correctly, make inferences about) people's emotional states of happiness, sadness, anger, fear, disgust, and surprise from their facial expressions. As important as it is to know how we make use of trait information, it is also important to know how we gather trait information in the first place -- this is the real problem of "person perception".

    2a. In what sense are social stereotypes also social concepts? What is the relationship between stereotype representations and individual representations from the prototype view? From the exemplar view?

Stereotypes are impressions of social groups that serve as filters on our perception of individual group members. Thus, men's stereotypes of women consist of men's beliefs about what women are like in general; and men tend to view individual women in terms of these stereotypes (women do the same thing to men, of course). Stereotyping does not allow individuals to stand alone -- they are always perceived in terms of their group membership. In principle, stereotypes can be structured along classical or proper-set lines, but most likely they, like other concepts, are represented as prototypes or as lists of exemplars. In some instances of stereotyping, the subject may never have met any individual members of the stereotyped group -- therefore a prototype model of stereotyping seems most appropriate.

It's pretty easy to see what the relation is between knowledge about groups and knowledge about individual group members from the exemplar view. According to the exemplar view, our concept of a group is represented by a list of the individual members of that group. Thus, information about individuals is already in the concept.

But the problem is harder to address from the prototype view, because the prototype is a summary abstracted from category members, and it may not represent any particular individual in the category. So, individuals either have to be represented separately in memory -- for example, as nodes in an associative network linked to nodes representing group membership -- or they have to be represented as subgroups within the category. Thus, a man might have one concept of a "typical" woman, and another concept (or several concepts) for "atypical" women.

    2b. In what sense are psychiatric diagnoses social categories? To what extent is psychiatric diagnosis patterned after the classical model? The prototype model? The exemplar model?

Psychiatric diagnoses are social categories for the simple reason that they are categories that we apply exclusively to people. We don't call rocks and cows schizophrenics (except perhaps metaphorically in the case of cows). But we do call at least some people schizophrenics, and in doing so we categorize them as similar to others who are so labeled, and different from others who don't carry that label. The psychiatric nosology can be viewed as a hierarchical arrangement of social categories, beginning with a superordinate distinction between the normal and the crazy (or perhaps a somewhat more comprehensive set of distinctions among the normal, the criminal, the mentally retarded, and the mentally ill). At the next level down are broad categories of mental illness, such as "psychotic" and "neurotic". Then there are types of psychosis and neurosis, such as schizophrenia vs. affective disorder and phobic vs. obsessive; and further differentiation among these categories, such as paranoid vs. nonparanoid schizophrenic, bipolar vs. unipolar affective disorder, and the like. Each level provides more fine-grained information about the individuals in the category. Thus, to label someone as "psychotic" doesn't tell you much about them; to label them as "paranoid schizophrenic" tells you a lot.

Historically, it appears that the diagnostic categories were initially construed as classical, proper sets, with diagnostic symptoms serving as singly necessary and jointly sufficient defining features. However, it turns out that the kinds of problems that vex the classical view in nonsocial domains also crop up in social domains: for example, categorization does not appear to be governed by defining features; there is a lot of overlap between categories; and some category members are better instances than others. In addition, empirical studies of diagnostic behavior showed that psychiatrists and other mental health professionals actually treated the diagnostic categories as fuzzy sets, represented by category "prototypes" who in some sense represent the central tendencies among category members. At about the same time, the Diagnostic and Statistical Manual of Mental Disorders was revised, essentially incorporating the fuzzy-set, prototype view of category structure.

Interestingly, further research showed that while diagnosis is never based on the classical view, it isn't always based on the prototype view either. In fact, expert mental-health professionals appear to categorize according to an exemplar model, while novices categorize according to a prototype model. This makes sense: newcomers to an area need the structure provided by rules (in the form of summary prototypes), while experts in an area have built up, through experience, considerable knowledge concerning specific instances (or particular patients).

    3a. Describe the effects of expectancies (represented as "schemata") on person memory. How do theories of stability and change affect person memory?

Bartlett proposed that memory is best for information about a person or event that is congruent with activated schemata (expectations and beliefs) concerning that person or event, but more recent research shows that the actual situation is more complicated. In fact, under most circumstances memory is best for schema-incongruent information -- apparently, events that violate expectancies are surprising, they draw more attention (including the cognitive effort involved in explaining them), and this extra attention produces a "deeper", more lasting encoding. However, schema-congruent events are remembered better than schema-irrelevant or schema-neutral events. Apparently, this is because the schema provides extra cues to guide the process of memory retrieval.

When people are asked about their past, their answers appear to be guided by their intuitive theories about stability and change. Mostly, we view personality as stable over both long and short periods of time (this tendency is related to the fundamental attribution error). Thus, when asked what we were like some time ago, we tend to exaggerate the link between what we were like then and what we are like now. However, there are exceptions to this principle, and certain events (such as marriage or the birth of a child) may be believed to produce important changes in personality. In such cases, people may exaggerate differences between the past and present. The point is that our memories are guided by our theories of stability and change. When these theories are inaccurate, our memories may also be inaccurate.

    3b. Describe the effects of mood on memory. How do we distinguish between mood congruence and mood dependence experimentally?

The two most prominent effects of mood on memory are mood congruence and mood dependence (there are others, but Kunda doesn't discuss them in any detail).

In mood congruence, memory is best for information whose emotional valence matches the person's mood at the time the memory is encoded or retrieved. Thus, happy people tend to remember happy events better than sad events, and sad people tend to remember sad events better than happy events. This can result in a kind of vicious cycle -- for example, sad people may remember sad events, and these memories make them even sadder.

In mood dependence, memory is best when the person's mood at the time of retrieval matches the person's mood at the time of encoding. Thus, events that happened while a person was happy are remembered best on subsequent occasions when the person was happy. This too can result in a vicious cycle, as sad people remember events associated with previous periods of sadness.

Mood congruence and mood dependence may be mediated by similar processes. In the real world outside the laboratory, there is probably a correlation between the emotional valence of the events which people experience and the emotional valence of their internal mood states. That is to say, people are relatively unlikely to be happy during sad events, or sad during happy events. If so, then mood dependence might be viewed as a special case of mood-congruent encoding.

    4a. According to Kelley's "covariation" model, what information is needed to make causal attributions concerning events in the social world? What pattern of information leads observers to attribute behavior to the actor? To the target? To the context?

Application of the covariation calculus requires three kinds of information about an event: the consistency of the actor's behavior toward the target, observed across a number of occasions; the distinctiveness of the actor's behavior, compared to other targets; and the consensus among actors, with respect to the target. A pattern of high consistency, low distinctiveness, and low consensus leads to actor attributions; a pattern of high consistency, high distinctiveness, and high consensus leads to target attributions; a pattern of low consistency, high distinctiveness, and high consensus leads to context attributions.
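The three canonical patterns amount to a small decision rule, which can be sketched as a lookup table. This is an illustrative sketch only -- the table and the function name are invented here, and Kelley's model is not stated as code in the original literature:

```python
# Kelley's covariation patterns as a lookup table (illustrative sketch).
# Keys are (consistency, distinctiveness, consensus) levels.
KELLEY_PATTERNS = {
    ("high", "low", "low"): "actor",     # actor behaves this way toward all targets; others don't
    ("high", "high", "high"): "target",  # everyone behaves this way, toward this target only
    ("low", "high", "high"): "context",  # the behavior depends on the particular occasion
}

def attribute(consistency, distinctiveness, consensus):
    """Return the attribution the covariation model predicts for this
    pattern, or None for patterns the model leaves ambiguous."""
    return KELLEY_PATTERNS.get((consistency, distinctiveness, consensus))

# John always laughs at this comedian (high consistency), laughs at every
# comedian (low distinctiveness), and nobody else laughs (low consensus):
print(attribute("high", "low", "low"))  # -> actor
```

Patterns outside the three canonical cases are left ambiguous by the model -- and, as noted below, that gap is where perceivers tend to default to actor attributions.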

Whatever the pattern of information, distinctiveness and consensus are the most important cues for distinguishing between actors and targets; consistency is most important in determining attributions to the context. But regardless of the pattern of available information, we tend to attribute behavior to the actor -- this is the fundamental attribution error.

    4b. Distinguish between algorithms and heuristics in social judgment. Define each of the following heuristics: representativeness, availability, simulation, and anchoring and adjustment. Show how each heuristic can play a role in causal attribution.

Algorithms are logical, systematic rules for judgment, inference, and problem solving. An algorithm specifies all the information which is necessary to perform a task, and shows how that information is to be combined in achieving some result. If a problem is soluble, application of the appropriate algorithm will inevitably produce the correct answer. But algorithms can't be applied under conditions of uncertainty, where no algorithm is applicable or all the necessary information isn't available. Heuristics are shortcuts or "rules of thumb" which bypass the logical rules of inference. They permit judgment under conditions of uncertainty, but they also incur some likelihood of making an error in reasoning or judgment.

Representativeness: judgment based on resemblance or similarity of appearance. Application to causal attribution: causes should resemble effects.

Availability: judgment based on the ease with which examples come to mind. Application to causal attribution: attribution is made to the most salient causal element.

Simulation: judgment based on the ease with which one can construct a plausible scenario. Application to causal attribution: attribution is made according to the first possible explanation which comes to mind.

Anchoring and adjustment: final judgments are tied closely to initial estimates. Application to causal attribution: first impressions (such as correspondent inferences) dominate causal attribution, regardless of additional information which arrives later.

    5a. What is the fundamental attribution error? What is the evidence that we make it? How does this misjudgment relate to correspondent inferences? To our beliefs about trait consistency?

The fundamental attribution error is the tendency to attribute causal responsibility for some event to the actor, rather than to the target or the context or some more complex combination of causes, even when the pattern of available information indicates that the actor is not responsible. The evidence that we make it comes from studies of the attitude attribution paradigm, as well as from observed departures from the covariation calculus for causal attribution.

Correspondent inferences are good examples of the fundamental attribution error. By assuming that actors intend the outcomes of their actions, and that these intentions correspond to the actor's internal dispositions, we attribute responsibility for those outcomes to the actors themselves, rather than to the context, target, etc.

Despite evidence for considerable cross-situational variability in behavior, we tend to believe that people's behavior is quite consistent across situations, just as we tend to believe that their behavior is quite stable across short and long intervals of time. We attribute this (somewhat illusory) stability and consistency to people's traits -- behavioral dispositions that render people's behavior coherent, stable, consistent, and predictable. Beliefs about traits and their power to determine behavior are so pervasive, at least in Western culture, that it is no surprise that we make the fundamental attribution error.

    5b. What is the fundamental attribution error? How does the "actor-target" distinction in attribution theory map onto the "subject-object" distinction in language? How does our understanding of the structure of language help us to understand why people are biased to make causal attributions to one or the other party in the situation?

The fundamental attribution error is the tendency to attribute causal responsibility for some event to the actor, rather than to the target or the context or some more complex combination of causes, even when the pattern of available information indicates that the actor is not responsible. The evidence that we make it comes from studies of the attitude attribution paradigm, as well as from observed departures from the covariation calculus for causal attribution.

But it turns out that the fundamental attribution error is not quite so simple as that. In most studies of causal attribution, the actor in the situation is represented by the grammatical subject in the sentence that describes that situation. Thus, "John laughed at the comedian." But there are many situations in which there really isn't an actor. Consider "John liked the comedian." In both cases John is the subject of the sentence, and the comedian the object. But in the first case, John is the actor and the comedian the target, while in the second case, there isn't any actor at all. Instead the comedian is the stimulus for John's state of liking. Linguistic analysis thus shows that the fundamental attribution error takes two forms. In the classic case, involving subjects, objects, and action verbs, there is a tendency to attribute causality to the actor rather than to the target. But in a second class of cases, involving stimuli, experiencers, and states, there is a tendency to attribute causality to the stimulus rather than the experiencer.

    6a. One of the tasks of the "naive" scientist is to test hypotheses about what is going on in the social world. How do we test hypotheses? In what ways do our hypothesis-testing strategies depart from normative principles?

According to some views, impression formation and causal attribution are hypotheses that people test against evidence that they gather in the course of ordinary social interactions. For example, you can start out with the hypothesis that a person is likable, and test that hypothesis by determining whether he has likable characteristics. Or you can start out with the hypothesis that the actor is responsible for some outcome, and then test the hypothesis by gathering consensus, consistency, and distinctiveness information. Logically, one should test hypotheses by seeking evidence that would disconfirm them. This strategy maximizes the likelihood that you will discover that your hypothesis is wrong, if in fact it is wrong. An alternative strategy is to conduct a balanced search for evidence, being equally open to confirmatory and disconfirmatory evidence.

But we don't do this. Instead, there is some evidence (described by Kunda) that we tend to employ a confirmatory strategy in hypothesis testing. That is, we tend to seek, attend to, and remember evidence that would confirm or strengthen our hypotheses, rather than evidence that would disconfirm or weaken them. Klayman and Ha speak of the "positive test strategy", by which we test hypotheses by seeking evidence that matches them. A good answer will give at least one concrete example of this tendency (perhaps derived from Kunda's treatment of this material). Biased evidence seeking is a particularly difficult bias, because the "seeking" is rarely neutral. That is, in the course of seeking evidence, we may create conditions that favor the production of the evidence itself -- a version of the self-fulfilling prophecy. So, if we are testing the hypothesis that someone is extraverted, and we give them opportunities to reveal how extraverted they are, we may inadvertently elicit extraverted behavior that we wouldn't otherwise see.

At the same time, this apparent departure from normative rationality shouldn't be exaggerated. Trope and Bassok (and others) have found that people, when given a choice, favor highly diagnostic information, even when it is contrary to the hypothesis being tested. For example, in testing whether someone is extraverted, they generally focus on evidence that actually distinguishes extraversion from introversion; the bias toward evidence of extraversion is imposed on top of this normatively rational strategy for hypothesis testing.

    6b. The covariation calculus for causal attribution requires that people be able to compute correlations between events. How good are we at detecting and estimating correlations? What biases and errors affect our ability to detect covariation?

In the covariation calculus, causal attributions are generated from observation of the correlation (another word for covariation) between certain features of the situation (the behavior of the actor, the behavior of other people, the presence of the target, and the situation in which actor and target meet) and the outcome. Thus, in order to explain why John laughed at the comedian, we have to know whether other people also laugh, or only John; and whether John laughs at other comedians, or only this one, etc.

The problem is that we appear not to be all that good at picking up correlations between events in our environment. Thus, in assessing correlations, we appear to be biased by something like the positive-test strategy. In a 2x2 table, for example, if the "Yes/Yes" cell is relatively large, we tend to perceive the two variables represented in the table as correlated; logically, however, this can only be determined by examining the other cells in the table as well. Similarly, we tend to perceive unusual events to be correlated with each other. Finally, our theories of the world can get in the way of covariation detection, leading us to "see" illusory correlations that are just not present in the data available to us.
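The 2x2-table point can be made concrete with the phi coefficient, the standard correlation measure for a 2x2 contingency table. A minimal sketch (the particular cell counts are invented for illustration):

```python
import math

def phi(a, b, c, d):
    """Phi coefficient for a 2x2 contingency table:
                 Yes    No
        Yes   |   a  |   b
        No    |   c  |   d
    """
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom

# A large Yes/Yes cell does not by itself imply a correlation:
print(phi(80, 20, 20, 5))   # 0.0 -- rows and columns are independent here
# A smaller Yes/Yes cell can nonetheless reflect a genuine one:
print(phi(40, 10, 10, 40))  # 0.6
```

Judging from the size of the Yes/Yes cell alone, while ignoring the other three cells, is precisely the positive-test strategy the paragraph describes.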

Stereotypes are particularly vicious examples of this process. So, for example, we tend to perceive a relationship between being a member of an unusual group (e.g., an ethnic minority) and engaging in unusual behavior (e.g., nonnormative behavior that only a few people display). Our perceptions of such correlations then reinforce the stereotype (e.g., that minority group members are criminals), which in turn focuses our attention on future instances in which minority group members commit crime (the positive test strategy). By ignoring the other cells in the table, we may fail to realize that members of the majority group commit just as much crime as members of minority groups, if not more (if for no other reason than that there are more people in the majority group, by definition). Thus, illusory correlations are generated by stereotypes, and stereotypes are reinforced by illusory correlations, in a vicious cycle.