(UPDATED: July 2, 2015)

 

I am a social psychologist who studies a variety of questions in the fields of judgment and decision making, social psychology, and consumer behavior. I use experimental methods to test how people come to understand themselves, form impressions of others, and navigate as economic, political, and moral beings in a challenging and complicated world.  My research is guided by the philosophy that there need not be a tradeoff between basic and applied research. Instead, I attempt to make major contributions to basic theoretical development by studying questions of clear applied import to both business and public policy.

 

Below, I organize my research into four topic areas. In the first section, I discuss my research on the self. The next two sections describe my research in different domains of judgment and decision making (JDM). The second section focuses on research in moral psychology; the third section focuses on non-social judgment and decision making. Lastly, the fourth section features research on cognition and emotion. It should be stressed that these categories are not mutually exclusive, and indeed most of my lines of research could be placed into more than one of these buckets. Also, I focus on work that is published, under review, or for which data collection is complete (or nearly complete). Titles of current projects for which data collection is ongoing can be found on my CV.

 

I. The Self

 

William James, the father of modern psychology, recognized the peculiar position the self occupies as both a knower (the “I”) and an object to be known (the “me”). My research examines both aspects of the self.  I study how the self comes to know itself (i.e., the “me”), as well as how our self-understanding colors, differs from, and affects our understanding of others. I also study the demands faced by the “I,” such as its basic desire to view itself favorably and its need to effectively exert self-control.

 

Self-insight. It would seem natural that we come to understand ourselves through experience. But several lines of my research emphasize how our experience can be misleading. For example, in assessing how well we are performing at a task (e.g., an abstract reasoning test), we presumably should be able to look to our bottom-up experience with the task as cues to how we are doing (e.g., how long it takes us to solve each problem). But people’s own preconceived notions of how good they are in a certain performance domain color their direct experience, leading, for example, confident test-takers to believe they are moving through test items more quickly than they actually are (Critcher & Dunning, 2009b, JPSP). As a result, leaning on their subjective experience leads their self-assessments astray.

 

In more recent work with Emily Rosenzweig, I have looked not at people's errors in understanding their past performance, but at pitfalls in predicting the future. In particular, we have examined how people predict their own (or others') prospects for improvement (Critcher & Rosenzweig, 2014, JEP:G). I show that people rely on a performance heuristic: People lean on their recently demonstrated performance as a positive cue in assessing whether further improvement is likely. For example, those who performed relatively well at a darts game bet more money that they would perform better in the next round, even though they were actually less likely to perform better. This heuristic is not relied upon merely in assessing the self, but also in making predictions about others. For example, in one study reported in the paper, participants were shown the June 2012 rate of return for 12 randomly selected High Yield Bond mutual funds. They were asked to predict whether the rate of return would increase in July. Participants leaned on the performance heuristic: They used the June 2012 rate of return as a positive cue that the July 2012 rate of return would be even higher. Again, reliance on the performance heuristic led people astray. In short, people's experience (of success or failure) actually pushed them toward more inaccurate forecasts. The article "Why You Decide the Way You Do" in MIT Sloan Management Review (Winter 2015) featured my research along with this headline offering advice to executives: "Past Success Doesn't Predict Future Improvement."
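One statistical reason that strong recent performance need not predict further improvement is regression toward the mean: if each round's score reflects stable skill plus random noise, unusually good rounds tend to be followed by somewhat worse ones. The sketch below is my own illustration of that logic, not an analysis from the paper, and all of its numbers (skill distribution, noise, cutoffs) are invented.

```python
# Minimal illustration (not from the paper) of why a strong round 1 can coincide
# with a lower chance of improving in round 2, assuming score = stable skill + noise.
import random

random.seed(1)
n_players = 100_000
improved_after_good, good = 0, 0
improved_after_poor, poor = 0, 0

for _ in range(n_players):
    skill = random.gauss(50, 10)            # stable ability (invented scale)
    round1 = skill + random.gauss(0, 10)    # observed score, round 1
    round2 = skill + random.gauss(0, 10)    # observed score, round 2
    if round1 >= 60:                        # "performed relatively well"
        good += 1
        improved_after_good += round2 > round1
    elif round1 <= 40:                      # "performed relatively poorly"
        poor += 1
        improved_after_poor += round2 > round1

print(f"P(improve | good round 1): {improved_after_good / good:.2f}")  # well below .5
print(f"P(improve | poor round 1): {improved_after_poor / poor:.2f}")  # well above .5
```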

 

Of course, in some domains, experience might seem to be more synonymous with self-insight. For example, assessing how much one enjoys an activity might seem very straightforward: One merely reports whether one is having an enjoyable time or not. But in my work with Tom Gilovich, I have found that assessing whether one is bored or having fun is not so straightforward a task. This is because, during tasks, our minds inevitably wander, but the meaning of this mind wandering is ambiguous. People look to where their mind is going in order to infer whether they are bored or having fun (Critcher & Gilovich, 2010, PSPB). And because where one's mind wanders is not a straightforward reflection of how much one is enjoying an activity (but perhaps an indication of what memory triggers are present in one's environment), this cue can lead people astray. Although perhaps not the ultimate arbiter of academic contribution, an entire paragraph on the Wikipedia page for "self-perception theory" (a major theory in psychology) is devoted to describing this paper. (Neither my co-author nor I added it.)

 

In more recent work with my marketing PhD student Scott Roeder, I am examining how people communicate the enjoyment of their experiences to others. We find that for certain experiences—those that are treasured, meaning they have special significance for the self—people do not communicate their full enthusiasm for the memories unless they are talking to someone who has had the experience. We find this is because people think that treasured experiences cannot be captured with words, but instead are of the "you just have to be there" variety (Roeder & Critcher, in prep). But this sense that the significance of treasured memories resides in emotional narratives instead of verbal ones produces something of a self-fulfilling prophecy: A sense that others will not fully "get it" means that people fail to convey their passion for treasured experiences to those who are least knowledgeable about the value of such experiences.

 

Self vs. social knowledge. More than a half century ago, the psychologist Leon Festinger noted that we understand ourselves by comparing ourselves to others. In my own research, I have made advances by studying the opposite pathway: how people's understanding of themselves colors their views of others. For eighty years, this pathway (self-views influencing social views) has been explored largely through the study of projection: People project their own attitudes, beliefs, and behavioral intentions onto others. Instead of merely showing another way in which people use their standing on one dimension to infer its prevalence in others, I, in collaboration with David Dunning, demonstrated a qualitatively different type of projection. I showed that people rely on the way that two characteristics covary in the self in order to make inferences about how those traits are likely to covary in others (Critcher & Dunning, 2009a, JPSP). For example, creative introverts tend to assume that extroverts they meet are not very creative—recapitulating patterns observed in the self onto others. I called this extension pattern projection. The self, in particular, is leaned upon as a special source of information in this process; that is, people do not lean on trait information about others they know well (e.g., their roommate) to inform social judgments to the same extent.

 

In follow-up work with David Dunning and Sarah Rom, who came to Berkeley from Maastricht University in The Netherlands to complete her Master's Thesis with me on this project, I set out to understand why pattern projection emerges. I found that for the self, but less so for others, people generate causal theories that explain why one self-aspect influences or gives rise to another. For example, creative extroverts do not merely self-identify on those two traits, but instead tend to have a narrative or theory that explains why, for them personally, those two traits go together. Creative extroverts might hold the theory that their frequent interactions with others help to fuel their creative passions, whereas creative introverts may believe that they pursue their creative efforts more efficiently without the distraction of chronic socializing. These causal trait theories, initially generated to explain the self, are then applied to understand people more generally. In short, causal trait theories about the self explain pattern projection (Critcher, Dunning, & Rom, 2015, JPSP). This paper not only identifies a new form of person knowledge (causal trait theories), but also demonstrates a novel epistemic process: An attempt to understand a single exemplar can influence one's understanding of the attributes that characterize the category's many exemplars. I have been pleased that other researchers have begun to apply the idea of pattern projection more broadly. For example, marketing researchers have applied the idea in looking at how people make inferences about others' preferences or even brands' personalities.

 

Although projection is sometimes identified as a source of bias, it is also a potential source of accuracy. That is, if a party host is trying to determine whether most of her guests will select vanilla ice cream or tiramisu for dessert, the host will get it right most of the time (all else equal) by consulting her own preferences. That is, at least when considering two options, the majority of people hold the majority preference. But in research with lab manager Em Reit, I find that people's forecasts of others' choices are led astray because they lean on a prevalence heuristic. More specifically, in considering whether someone else is likely to choose X or Y, people are influenced by the relative prevalence of X vs. Y (the commonness of X vs. Y in the world), even controlling for how much it is assumed the other person would like to receive X vs. Y. It is as though people are blurring the question of what has been chosen (e.g., people eat much more vanilla ice cream than tiramisu) with the question of what one is likely to choose when given the explicit opportunity to have either option. As follows from this logic, biased predictions emerge most strongly for choices between commonplace but relatively unexciting options and relatively rare but exciting options. That is, relative to people's expectations, others (given the choice) are more likely to choose a Japanese imported beer over a Budweiser, curry over sandwiches for lunch, or a night at an improv comedy show over one at the movie theater (Reit & Critcher, in prep).

 

Instead of considering how self and social perceptions influence each other, I have also studied comparative assessments. Twenty years ago, psychologist Mark Alicke noted something unusual about how people compare themselves to others. Alicke and colleagues found that people tend to compare themselves quite positively to people in general, but have more tempered self-assessments when comparing themselves to specific, randomly chosen individuals (even those about whom people had no information). Although Alicke et al.'s 1995 paper remains highly influential (Google Scholar count: 735), no explanation had been offered for this social comparison asymmetry. In recent work with David Dunning, I believe I have in part solved this mystery. I show that the social comparison asymmetry is larger for morally relevant traits, and the asymmetry's presence is predicted by people's stated desire to give others the benefit of the doubt until proven wrong. In other words, people assume the best about the moral character of specific individuals (and thus compare the self to such individuals more humbly) but not necessarily people in general (Critcher & Dunning, in prep). Critcher and Dunning (2014, Compass) draws on this and other work to offer a three-part framework of why people judge individuals and populations differently, even when there is no logical basis for doing so.

 

Of course, the self does not merely have perceptions of itself and perceptions of others. The self also has beliefs about how others perceive the self. Such beliefs are called meta-perceptions. For example, after a C-level executive delivers an insightful recommendation in the boardroom, she not only has a perception of how she performed, but also a sense of how others now see her. In work with former Berkeley psychology PhD students Alice Moon (who is currently a post-doc at Disney Research) and Muping Gan, I have examined how people's meta-perceptions (their beliefs about how others view them) depart from how others actually do view them. In one study, some participants were contestants in a mock game of Jeopardy, while others watched as audience members. After completing the main portion of the game, audience members rated how intelligent the contestants were, and contestants estimated how audience members would rate them. Then, contestants went through Final Jeopardy, an all-or-nothing bonus round that could net them a cash prize or lead them to lose everything. When contestants were told they had won (or lost) Final Jeopardy, they estimated that audience members would now judge them more positively (or negatively) than audience members actually did (Moon, Gan, & Critcher, in prep). More recent studies have uncovered the mechanism behind this effect, and in so doing have identified an intervention that can de-bias actors' meta-perceptions.

 

Managing self-regulation and self-threat. The self is not merely a knower who must come to understand itself and the world. It is also an entity that must manage its own needs while navigating the social world. Frequently, the self must override natural impulses or temptations and replace them with alternate responses. For example, dieters must resist cake and instead exercise; consumers must avoid impulse buys and instead put their money into savings. For over 15 years, psychologists have understood that the self is limited in how much self-control it can exert within a short window of time. Turning down a tempting piece of chocolate cake is hard, but it's even harder if one just had to exert tremendous willpower to finish a project for work. Although researchers in social psychology and consumer behavior have repeatedly shown that exerting self-control at time 1 interferes with attempts to exert self-control at time 2, it has remained unclear what it is about exerting self-control that depletes one's resources. Although this question was the basic theoretical interest that drove research I conducted with Melissa Ferguson, I wanted to test the question in a domain of clear real-world import. I tested whether having to conceal something about oneself (e.g., one's sexual orientation) during an interview would interfere with one's subsequent ability to effectively exert self-regulation. I found that it does—hurting one's intellectual acuity, physical stamina, and ability to exercise interpersonal restraint (Critcher & Ferguson, 2014, JEP:G). This result in itself is of clear practical import, given that in most states in the U.S. (and most countries in the world), people can be legally fired from their jobs simply for being gay or lesbian. The research suggests that people who feel pressure to conceal their sexual orientation may not perform optimally. The paper is also the first to tackle what aspect of self-regulation is responsible for subsequent deficits. In this way, my work answers Michael Inzlicht and Brandon Schmeichel's recent call that research on regulatory depletion move beyond the first-generation question of whether self-regulatory depletion effects occur. (They do.) I instead address the second-generation question of why such depletion occurs. I find that it is monitoring one's speech for content to conceal, not the actual alteration of one's speech, that produces these deficits. When participants were asked to conceal their sexual orientation in a context in which they would not have spontaneously revealed it (i.e., because the questions never touched on their personal lives), they suffered the same deficits. But when participants merely had to alter their speech without monitoring to conceal any content (e.g., by adding in a lie or a specific word to their answer), they suffered no decrement. This research was featured in various outlets, including the print issue of The Atlantic (March 2014).

 

The “I” not only has to push the self to behave in ways it would not otherwise behave, but the “I” must navigate the world while satisfying its own desire to maintain feelings of self-worth. For example, rare is the day we do not receive some form of negative feedback from our colleagues, our partners, our students, or our children. The self has an eclectic toolkit of tactics to help shield itself from negative feedback. Although these ego-saving techniques can help maintain our self-esteem, they can also keep us from accepting and learning from constructive criticism. Three decades ago, eminent psychologist Claude Steele showed that self-affirmations—simple exercises that prompt people to reflect on important values—can forestall defensiveness in unrelated domains. Although many demonstrations of affirmations' effectiveness followed, there was little focus on when or why affirmations work. In research with David Dunning, I have tried to address these questions. In one line of work, I showed that self-affirmations are effective in preemptively blocking defensive reactions, but are not effective in retroactively undoing them (Critcher, Dunning, & Armor, 2010, PSPB). Previous researchers believed that self-affirmations' timing did not influence their effectiveness. By identifying problems with previous paradigms and developing a new paradigm to test questions about affirmations' timing, I was able to show why timing does matter (and explain why past researchers had missed this).

 

In more recent work with David Dunning, I have developed and empirically tested a model that offers a unifying account of thirty years of self-affirmation research. This Affirmation as Perspective Model argues that when people experience a threat, that injury looms large in their minds, exerting a disproportionate impact on their feelings of self-esteem. Self-affirmations then serve to expand people’s sense of self, reminding them that the threat is not all that defines them, thereby helping to reduce the influence of the threat on their more global sense of self-integrity (Critcher & Dunning, 2015, PSPB). The paper was featured in Psychology Today.

 

II. Moral Psychology

 

In a 2009 chapter summarizing the state of the psychological literature on morality, Jonathan Haidt, a leading moral psychology scholar at NYU Stern, wrote that “moral thinking is for social doing.” In short, morality is an adaptation that facilitates well-functioning social and economic systems. In my own work, I look at the ways in which people’s moral judgments of others are sometimes fickle, sometimes sophisticated, and sometimes cynical.

 

Moral judgment as fickle. Despite the importance of morality for social functioning, people's moral judgments are known to be influenced by trivial factors, often outside of their awareness (e.g., Critcher & Pizarro, 2008, PSPB). In recent work, I have examined a heretofore unidentified factor that has implications not merely for moral judgment, but for behavioral forecasting more generally. Consider these questions: How likely is a randomly selected American to make a charitable donation next year? What percentage of Berkeley faculty would agree to a pay cut if it meant more financial aid for poor students? Although these questions differ most obviously in the content they focus on, they also differ in another subtle way. The former asks one to predict the moral behavior of an individual; the latter, of a population. In a recent paper with David Dunning, I show that people estimate that individuals are more likely to perform moral behaviors than are populations. For example, although people thought there was a 62% chance that a randomly selected college student would be entirely academically honest for the next 30 days, people thought that only 48% of all college students would meet this standard (Critcher & Dunning, 2013, JPSP). Mechanistic evidence showed that the asymmetry emerges because people focus on different behavioral influences when making forecasts of individuals versus populations. A final study in the paper extends the forecasting asymmetry to a non-moral domain. If we can better understand how and in what contexts people spontaneously forecast the future by considering individuals ("How likely is it that a person would respond to this charity appeal?") versus populations ("What percentage of people would respond to this charity appeal?"), we may be able to predict when people err toward optimistic vs. pessimistic forecasts of others' behavior.

 

Moral judgment as sophisticated. Much research on moral judgment has examined what features of others' behaviors lead them to be judged as praiseworthy or blameworthy. I have been interested instead in how people rely on features of the moral decision-maker's decision process in understanding whether someone is a morally good or bad person. In work with Yoel Inbar and David Pizarro, I found that people lean on the speed with which others make morally relevant decisions as a sign of the strength of their moral convictions (Critcher, Inbar, & Pizarro, 2013, SPPS). In so doing, I showed that it is necessary to back away from the conclusions of a classic 2000 paper by Wharton professor Phil Tetlock and colleagues on "taboo tradeoffs," which my data consistently contradicted. Inbar and I wrote an invited article for The Jury Expert, a trial advocacy journal, explaining how our findings inform how juries are likely to respond to information about a defendant's impulsivity. The journal published two expert responses to our piece, as well as our own reply (Critcher & Inbar, 2013).

 

In other research with Erik Helzer, David Tannenbaum, and David Pizarro, I examine how it is that people determine whether other people are of good moral character. We argue that moral character is like a processor—one that responds to relevant inputs with potentially appropriate outputs. From this perspective, the task of character evaluation is one of determining whether people respond appropriately to situational "tests." Consider the following example. In one study, I had participants consider an American military commander faced with a difficult decision. He had to decide whether to bomb an inn in which top al Qaeda terrorists were staying, even though doing so would also kill an innocent prisoner the terrorists were holding there. In one condition, participants praised the commander more for ordering the bomb strike. In the other condition, participants praised the commander more for not ordering the strike. What changed participants' minds? It mattered whether, when the commander looked down at the inn from his mountaintop lookout, the person he happened to see through an open window was a terrorist or the innocent prisoner (Critcher, Helzer, Tannenbaum, & Pizarro, in prep). This is because participants thought that seeing one person or the other would prompt different thoughts in the commander's mind—i.e., different inputs into his moral processor. That is, bombing the inn is a more morally reasonable response to staring straight into the eyes of a terrorist than into those of the terrorists' innocent prisoner. We develop these ideas further in a book chapter that offers a new perspective on moral character—both how it is judged and what constitutes it (Helzer & Critcher, under editorial review).

 

Some moral judgments are sophisticated not because they reflect a complex process of reasoning about others' inner motives, but because they reflect adherence to a complex moral norm. Consider a recent experience my partner and I had when returning from Las Vegas to SFO. Our flight was six hours delayed. Upon arrival at SFO, a gate agent boarded the plane and passed out "apology vouchers" to every passenger. My voucher was for $75. My partner, who flies more than one hundred thousand miles annually on this airline, received $350. I felt wronged, but why? After all, I frequently acquiesce to businesses treating customers differently. In work (inspired by this incident) with Emily Rosenzweig, I have examined what happens when businesses violate the apology script, a moral norm that governs how social wrongs should be remedied. By this script, if one party is responsible for injuring another, then that party has a responsibility to take an action in the service of remedying the wrong, a remedy that is proportional to the size of the injury. Consider how this script applies to my travel case. Given that the airline was responsible for the wrong, that it took an action it labeled restitution for that wrong, and that both my partner and I were wronged equally, I had the strong expectation that we should be compensated equally. In a series of studies, I show that when features necessary for the apology script to apply are removed (e.g., when the apologizer is not culpable, or when the apology voucher is reframed as a business appreciation voucher), consumers do not experience the same moral outrage at being treated unequally (Rosenzweig & Critcher, in progress).

 

Moral judgment as cynical. I have also documented several ways in which people's moral judgments are cynical, and unjustifiably so. By unjustifiably cynical, I mean the beliefs violate logical standards of internal consistency, or they make predictions that systematically depart from reality, in a cynical direction. In one line of work with David Dunning, I was inspired by a long-standing debate in psychology about whether altruism is, at its core, fundamentally self-interested. A well-known exchange between psychology and consumer behavior researchers Daniel Batson and Robert Cialdini during the 1980s and 1990s ultimately offered no clear answer. In the end, it seemed that the debate was not empirical, but philosophical, settled by how one chose to define terms like "altruistic" and "self-interested." My insight was that although one could always argue that a seemingly altruistic act actually conforms to a definition of self-interest, I could instead address a related question: Are people unjustifiably cynical in how much self-interest they see in the world? I reasoned that if part of the difficulty of making claims about self-interest is the ambiguity about its definition, I could still study whether people see too much self-interest in the world even while remaining agnostic about self-interest's definition. I could grant people whatever definition of self-interest they prefer, and then see whether they judged others too harshly or too leniently in light of that definition. In designing an empirical approach, I recognized that I could operationalize attributions to self-interest as conditional probability judgments: given that a person performed behavior X, what is the probability that he or she acted out of self-interest (Y)? This conditional probability, p(Y | X), can be predicted through Bayes' Rule from three other beliefs: p(X), p(Y), and p(X | Y). I showed that people see more self-interest in seemingly moral behaviors than their prior beliefs permit. But they show no tendency to see more selflessness than warranted in seemingly selfish acts; when making attributions about seemingly immoral behaviors, people's beliefs showed internal consistency (Critcher & Dunning, 2011, JESP). This asymmetry means that, even accepting people's own preferred definition of self-interest, they are too cynical in their view of the world. Additional research extended this work to how people view great acts of philanthropy. I found that when people first observe an act of philanthropy, they are quite charitable in their explanation for it. But the more time they spend contemplating the philanthropic act, the more suspicious they become of the philanthropist's true motivations. Given its implications for public relations, this research was featured in the Wall Street Journal.
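To make the benchmark explicit, using the notation above (with X the observed behavior and Y the possibility that the actor was motivated by self-interest), Bayes' Rule gives

p(Y | X) = p(X | Y) × p(Y) / p(X).

Participants' stated values of p(X), p(Y), and p(X | Y) thus imply how much self-interest their other beliefs "permit" them to see in behavior X, a Bayesian benchmark against which their directly reported attributions of self-interest can be compared.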

 

One reason cynicism can emerge is that there is often inherent ambiguity about the motives behind others' behavior. But there can also be ambiguity about who is seen to be performing a behavior, especially when behavior emerges in a group context. In work with Vivian Zayas, I have looked at an ambiguous social dynamic that I believe plays out frequently in social and organizational contexts, one in which people cynically see more ill will than is warranted. Psychologists have long been interested in the question of social exclusion, but in general it has been studied in circumstances in which the question of who is excluding whom is clear. In one classic experimental paradigm, two people start tossing a ball to each other but intentionally do not throw it to a third person. In this case, two people are clearly excluding a third. Previous work has focused on the emotional, motivational, and cognitive consequences of such exclusion. In my work, I have looked at exclusion that happens in a more ambiguous context, situations in which one person (the excluder) excludes someone else (the rejected) by including only a third person (the included). In these cases, the excluder is excluding the rejected, but how should the included be viewed: as a co-conspirator, or as a person awkwardly caught in the middle of an act of social aggression? Note that such situations, in simple form, play out frequently within organizational and social life: Don may invite Leif to get coffee, but not invite Clayton. By playing out these situations in laboratory contexts, I find that the rejected perceive that they have been excluded by both the excluder and the included, whom they now view as an exclusive alliance. The rejected also expect to be directly excluded by the included in the future. But in actuality, the included tend to feel highly uncomfortable about being caught in the middle of this act of social aggression (Critcher & Zayas, 2014, JPSP).

 

III. Judgment and Decision Making

 

Inspired by Kahneman and Tversky’s heuristics and biases tradition, I study a variety of ways in which normatively-irrelevant features bias judgments and decisions. I study errors in judgment and decision making not with the goal of labeling people as irrational or biased, but in order to understand how people make judgments and decisions (see Rosenzweig & Critcher, 2014, CDPS, for a theoretical framework for considering accuracy and error in judgment and forecasting). Although some of my research focuses on various incidental influences on everyday judgments and decisions, I have lines of work in the specific topics of anchoring and risky decision making as well.

 

Incidental influences on judgments and decisions. I was initially drawn to JDM research given the breadth of research questions that can be addressed. In everyday life, people are called upon to make judgments and decisions to explain the past, to interpret what they presently observe (e.g., products they may buy), and to forecast the future. In research with Jane Risen, I considered how it is that people assess the likelihood of future events. The work was premised on two ideas. First, if people can clearly imagine what a future state of the world will be like, that considered state will seem more likely to occur. Second, visceral states like warmth, thirst, and hunger are very difficult to merely simulate; their immediate experience is more raw and compelling than attempts at imagining them. From this, I predicted that if people are experiencing a visceral state (e.g., warmth), they should have an easier time simulating possible future states of the world that would inspire that visceral state (e.g., global warming), and thus think them more likely to occur. I showed that leading people to experience visceral states like warmth and thirst causes them to predict that global warming and desertification, respectively, are more likely to occur in the future (Risen & Critcher, 2011, JPSP). As part of the research, I developed a new methodological tool that can be used to assess the clarity with which people mentally represent images. This research was featured in numerous publications, including the New York Times and Al Gore's book The Future. Furthermore, it was selected as an Editors' Choice in Science, which highlights notable recent articles from across the sciences.

 

Having established that simulation plays a major role in forecasting, I set out to consider how simulation may be influential in another domain: volume estimation. Previous researchers—in consumer behavior and cognitive psychology—have identified how both features of the targets to be judged and psychological states of the estimators can distort size assessment. But such papers present "as if" models—i.e., those that predict errors in estimation but are agnostic about the psychological process underlying such judgments. In research with my PhD student Hannah Perfecto, I present a simulated judgment account of volume estimation. We argue that people estimate a receptacle's volume by simulating how much they can pour into it. A number of empirically confirmed implications follow from this premise. As one example, cups look bigger right side up than upside down. Mechanistic evidence shows that this is because people have an easier time imagining filling up a cup that is right side up as opposed to upside down (Perfecto & Critcher, in prep). Beyond contributing to the basic cognitive psychology literature, our findings offer practical advice to retailers and packaging designers on how to maximize the perceived value of their products.

 

In other research, I have looked at how a seemingly small regulatory requirement ironically incentivizes negative advertising, the exact advertising the regulation was designed to discourage. The Bipartisan Campaign Reform Act, better known as McCain-Feingold, requires candidates for federal office to add an SBYA ("stand by your ad") tagline to their ads—i.e., to state their name and that they approve the campaign ads they fund. John McCain predicted the provision would help to eliminate much-loathed negative advertising, for candidates would be embarrassed to stand by such ads. In research with recent marketing PhD student Minah Jung, I have found that the law actually incentivizes attack ads. As supported by both laboratory and real election data, viewers of attack ads found such messages more persuasive when the tagline was included than when it was not (Jung & Critcher, invited revision, JMR). I show with process evidence why the provision has this effect. I was fortunate to receive a $50,000 grant from the Hellman Family Foundation to continue this research. Minah and I are currently testing interventions that may keep the SBYA tagline from having this ironic effect.

 

In ongoing work, I have examined a subtle influence on consumers' apparent preference for variety, what other researchers have called the offer framing effect. The offer framing effect posits that consumers prefer more variety when forming a bundle through individual choices (e.g., when selecting beers from a store cooler for a make-your-own six-pack) than when selecting from pre-made bundles (e.g., when selecting between single-variety six-packs or variety packs). This phenomenon was recently identified in an article published in the Journal of Consumer Research. We read this paper in our weekly journal club organized by Leif Nelson (and attended by faculty and students from marketing, MORS, and psychology). I identified what seemed to be a series of artifactual explanations and alternative mechanisms that might combine to account for the effects. Along with Leif Nelson and marketing PhD student Mike O'Donnell, I find that two of the four studies can be accounted for by a measurement artifact, one fails to replicate, and one emerges mostly because it is simply easier to achieve variety when one makes selections one at a time than when selecting among pre-made bundles. That is, if one is making a three-flower bouquet by drawing from a set of red (R) and orange (O) roses, then 6 of 8 (75%) possible permutations involve variety: RRO, ROR, ORR, OOR, ORO, ROO. But if instead one selects from the four possible already-constructed bundles of flowers (a bundle of all R, a bundle of all O, a bundle of 2 R and 1 O, a bundle of 2 O and 1 R), only 2 of 4 (50%) possible combinations include variety. When we account for this difference (by showing all 8 possible permutations in the bundle condition), nearly all of the offer framing effect is eliminated (O'Donnell, Nelson, & Critcher, in prep). O'Donnell will be presenting this paper at the Association for Consumer Research, and we plan to submit it soon to the Journal of Consumer Research.
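The baseline-rate arithmetic in the bouquet example above can be reproduced by brute-force enumeration. The short Python sketch below is my own illustration of that arithmetic, not code from our studies; it simply counts how many sequential-selection outcomes versus pre-made bundles contain both colors.

```python
# Enumerate outcomes for a 3-flower bouquet of red (R) and orange (O) roses,
# comparing item-by-item selection against choosing among pre-made bundles.
from itertools import product

flowers = ("R", "O")

# Item-by-item selection: 2^3 = 8 ordered sequences of draws
sequences = list(product(flowers, repeat=3))
varied_sequences = [s for s in sequences if len(set(s)) > 1]

# Pre-made bundles: the 4 possible unordered compositions
bundles = [("R", "R", "R"), ("O", "O", "O"), ("R", "R", "O"), ("O", "O", "R")]
varied_bundles = [b for b in bundles if len(set(b)) > 1]

print(f"Sequential choice: {len(varied_sequences)}/{len(sequences)} "
      f"({len(varied_sequences)/len(sequences):.0%}) contain variety")   # 6/8 (75%)
print(f"Bundle choice:     {len(varied_bundles)}/{len(bundles)} "
      f"({len(varied_bundles)/len(bundles):.0%}) contain variety")       # 2/4 (50%)
```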

 

Anchoring and adjustment. Since Kahneman and Tversky first introduced the anchoring-and-adjustment heuristic some 40 years ago, there have emerged several accounts of how arbitrary numbers can impact numeric judgments. I have done work that shows how incidental numbers that are salient in the environment at the time of judgment can influence numeric judgments. In one study with Tom Gilovich, participants were shown an advertisement for one of two restaurants, Studio 17 or Studio 97. Except for the restaurant name, the advertisements were identical. Willingness to pay judgments assimilated toward the incidental number in the restaurant name: Participants reported being willing to pay more to eat at Studio 97 than Studio 17. I called this phenomenon incidental environmental anchoring (Critcher & Gilovich, 2008, JBDM).

 

In more recent research with Emily Rosenzweig, I am examining the neglected (and latter) half of anchoring and adjustment. For all of the focus on how numbers may anchor judgments, there has been almost no previous research on how incidental values may influence adjustment toward them. I study these effects in the context of a frequently occurring anchoring and adjustment problem: forecasting trends. In such contexts, people typically know an attribute's current value (the anchor) and then must estimate how much that value will change (i.e., adjust) in the next period. One domain where consumers face this task is in deciding whether to purchase airfare. Sites like Kayak.com plot time-series graphs of airfare fluctuations. It is up to consumers to extrapolate from such trajectories and infer how much the price is likely to change next. But such sites also make salient fairly arbitrary values in the direction of adjustment: the values depicted on a graph's y-axis that provide numerical context so that the graph is interpretable. I find that the positioning of such arbitrary y-axis labels changes people's subjective sense of what would constitute a large or small change in price, which then changes people's estimates of what price changes are likely (Critcher & Rosenzweig, in prep). My studies not only inform basic research on errors in forecasting trends, but offer practical advice for how airfare and investment websites can present information in a way that encourages or discourages consumer beliefs that big pricing changes are imminent.

 

Risky decision making. Cognitive psychologists and judgment and decision making researchers have had long and sometimes tedious debates about how best to model the risky decision making process. Such debates typically involve disagreements over how a risky prospect is evaluated. Classic questions include how the value function should be modeled, how subjective probability differs from objective probability, and whether reference points (e.g., an expected outcome, an aspiration level, absolute zero) color these assessments. But in research with Yoel Inbar, I consider an influence that is quite different from these typical foci. More specifically, I examine how one's construal of a decision—as either an investment decision or a moral decision—changes one's risky decision making on behalf of others. In a paradigmatic study, participants are asked to imagine they are making a moral decision (or an investment decision) on behalf of their grandmother. Participants are asked how much of her $400,000 saved for retirement should be moved to another account that has a higher expected return, but that involves more uncertainty. Those led to construe the decision as an investment decision advocate for moving 25% more of her money to this risky fund than do those led to construe it as a moral decision (Critcher & Inbar, in prep). Interestingly, though, moral framings do not always reduce tolerance for risk. Instead, a moral framing sensitizes people to others' losses. If the grandmother's default account was slowly bleeding money (because the maintenance fees were higher than the returns), a moral framing led participants to want to move more of her money in order to avoid this loss. We are continuing to investigate how decision framings influence risky decision making. We are also attempting to understand which more subtle cues lead people to construe contexts as moral or merely monetary in nature.

 

Although risky decision making is typically studied as it relates to monetary decision making, I have done work with Poruz Khambatta—a former senior thesis student of mine who is now a PhD student at Stanford GSB—on how shoppers confront a common risk-laden decision. Imagine going to Amazon.com in search of a particular pair of running shoes. But upon arriving at the product’s page, you see the shoes have received mediocre ratings from previous customers. Amazon suggests an alternative product—one with high ratings, but that does not have the exact look you were hoping for. Which do you buy? Previous research has examined what makes recommendations compelling by assessing what affects their perceived validity: What makes reviews seem unbiased, impartial, an accurate reflection of typical customer experience, and applicable to you as a particular consumer? But as Poruz and I have studied, ratings have an additional function. They reduce uncertainty in one’s forecasted satisfaction with a product. A product that seems perfect for you but has mediocre ratings is risky: It may offer an idiosyncratic fit, or it may be a dud. A product with solid ratings, even if it is not a perfect match to one’s preferences, is unlikely to be a bust. Drawing on findings that risk aversion grows with decision stakes, I examined how anticipated ownership length—the amount of time one expects to hold onto and actively use a product—changes consumers’ reliance on ratings. When consumers expect to hold onto a product for longer, they have a greater desire to “play it safe,” which increases their preference for higher-rated products (Khambatta & Critcher, under review, JCR).      

 

IV. Cognition and Emotion

 

When social psychologists try to understand mental processes that influence how people interpret and interact with the world, they frequently distinguish cognitive or informational influences from affective or emotional ones. I have done research in each area and at their intersection.

 

Cognition. In considering cognitive influences on judgment and behavior, one can consider ways in which people operate similarly and ways in which people are unique. People vary in the set of mental associations or general cognitive styles that they possess. For example, conservatives and liberals systematically differ in their willingness to tolerate ambiguity in their perceptions of the world, and that influences how they react to ideological inconsistencies in themselves (Critcher, Huber, Ho, & Koleva, 2009). At the same time, psychologists seek to uncover universal rules that paint a unifying picture of how people navigate their social worlds. I do research that fits into both categories.  

 

In recent work with Jane Risen, I asked how exposure to counterstereotypically-successful exemplars shapes one’s beliefs about the world. For example, how does seeing Ruth Simmons, the African American president of Brown University, influence one’s beliefs about race in America? How does learning that California’s longest-living woman eats at Burger King every day change one’s feelings about the threat posed by fast food? I have found that after incidental exposure to these exemplars, people automatically (i.e., unintentionally and outside of awareness) draw inferences that race is not a success-limiting factor, and fast food does not pose much of a health threat, respectively (Critcher & Risen, 2014, JPSP). Curiously, these automatic inferences are dissociated from how people explicitly reason about these questions—i.e., people’s stated beliefs about what conclusions should be drawn on the basis of these single exemplars. This paper struck a chord with many in the popular press who look to salient signs of racial progress and consider what they mean for the state of race relations in America. Because of this paper, I was asked to contribute an op-ed on race relations to Forbes, had a story on the front page of Huffington Post that had over 27,000 likes on Facebook, and was featured on a radio call-in show in Detroit.

 

Instead of considering how people reason automatically rather than deliberately, I have also examined how subtle differences in how information is framed change how people deliberate about that information. In new research with Harvard Business School PhD student Bhavya Mohan, I have compared how people respond to "unlimited" offers (e.g., an unlimited cell phone plan) vs. "finite" offers that are de facto unlimited (e.g., a 44,000 minute per month cell phone plan). I find that although people think "unlimited" offers are more attractive, they expect that the finite offers will cost more (and as such, are willing to pay more for them). We term this reversal the unlimited paradox. Why does it occur? Although "unlimited" has a very positive connotation (explaining why it is more attractive), the finite value that makes an offer essentially unlimited is much larger than the caps on any plans people are accustomed to seeing. That is, finite cell phone plans might involve a few thousand minutes a month, at most, but nowhere close to 44,000. As a result, people expect this plan to be quite expensive. Because an "unlimited" plan is less naturally contrasted against a small finite plan, people do not spontaneously consider it in light of these cheaper plans. As a result, people are less likely to have inflated expectations of an unlimited plan's price (Mohan & Critcher, in prep). For businesses, the takeaway is that "unlimited" plans are more enticing to get consumers in the door, but nominally "finite" plans will seem to be priced more reasonably.

 

I also do research that examines not only implicit influences on judgment, but implicit influences on behavior. For over two decades, psychologists have been interested in implicit attitudes, people's automatic positive and negative associations with stimuli. In perhaps the best-known research in this tradition, most people have been shown to harbor negative implicit attitudes toward Blacks—as assessed by a reaction-time-based measure called the Implicit Association Test (IAT)—even though most of those tested explicitly hold positive, egalitarian attitudes about Blacks as a group. Since this pioneering work, researchers have studied the importance of implicit attitudes in non-race-related domains, with a particular interest in how such implicit attitudes might predict behavior. For example, some work has shown that positive implicit attitudes toward goal-related end-states (e.g., thinness) can predict goal-relevant decisions (e.g., to choose a healthful snack over a tempting, unhealthful one) better than do explicit attitudes toward those same stimuli. In research with Melissa Ferguson, my core insight was that many goals require engagement with and persistence on tasks that are unlikely to be viewed positively, but are considered important. For example, a pre-med student may persist through organic chemistry not because of a positive disposition toward the subject matter, but because of a sense that it is important. But to predict whether people will stick with a goal, is it sufficient merely to ask them whether it is important to do so? Almost any dieter who walks into Weight Watchers may say it is important to lose weight, but only some of them will succeed at this task. In other words, people's explicit reflections on what is or is not important may not guide their everyday behavior that unfolds less deliberately and more automatically. Instead, implicit measures that assess the extent to which means of goal pursuit naturally trigger the concept of importance may be ideal for understanding people's more everyday, non-reflective intuitions about the importance of goals. I adapted the IAT to measure people's implicit associations between importance and different means of goal pursuit. I found that the implicit importance of schoolwork, exercise, and standardized testing predicts higher grades, a more intense exercise regimen, and stronger GRE scores (Critcher & Ferguson, invited revision, JPSP). We are currently partnering with Spartan Racing on a large-scale test of the validity of our IAT in the exercise context. The millions of members on Spartan's website will be asked to take an IAT that will measure the implicit importance they attach to Spartan racing. We will then be able to track these participants' future racing performance to determine whether our IAT can prospectively predict who succeeds in their racing competitions.

 

Affect and emotion. In other research, I look at the interplay between cognition and affect. Some research examines the impact of cognitive styles on affective processing, while other work studies the impact of affect or emotion on judgment and behavior. One line of work sought to reconcile literatures that are seemingly inconsistent as to whether abstract or concrete ways of thinking are associated with greater sensitivity to affect. People can view the same stimuli abstractly (“the forest”) or concretely (“the trees”). I highlight that most of the work that has shown a connection between concreteness and affect has defined concreteness in terms of the real or vivid presence of a stimulus (a spider is more emotionally arousing when it is right in front of you than when it is considered hypothetically), and not in terms of the level of abstraction at which a stimulus is cognitively represented. I show that when a more generalized abstract (versus concrete) mindset is induced, people are more likely to automatically attend to affective stimuli, non-consciously extract the affective meaning from subliminally presented stimuli, and behave more in line with their affective reactions to stimuli (Critcher & Ferguson, 2011, JESP).

 

Instead of considering impacts of cognition on emotion, I have also considered the impact of emotion on cognition. Transcendental emotions like inspiration and awe are experienced in response to vast, powerful stimuli that are not easily understood but that are pure and exalted in status. As this description suggests, a prototypical (even if ill-defined) elicitor of inspiration would be God. Drawing on previous psychological findings suggesting that people use Y (e.g., a feeling of inspiration) to infer X (e.g., the existence of God) when X is known to cause Y, we predicted and found that those who experience inspiration—either dispositionally or as a result of a situational elicitor—are more likely to report belief in God (Critcher & Lee, passed initial editorial review and out for full review, Psychological Science). I find this is because feeling inspired leads people to feel connected to something beyond themselves, which offers a spiritually transcendent experience that elevates belief in God.

 

Finally, I have also examined the impact of emotion on behavior. In research with Agostino Mazziotta, Jack Dovidio, and Rupert Brown, I have examined how the experience of anxiety can impact interactions between majority and minority group members. Although there is a large literature on the negative effects of intergroup anxiety on intergroup contact, my core insight was that it matters what exactly one is anxious about. I differentiate self-focused anxiety (anxiety about one’s own thoughts and behavior) from other-focused anxiety (anxiety about one’s interaction partner’s thoughts and behavior). I detail an intimacy-disrupting pathway, whereby those who have self-focused anxiety are more likely to believe that their interaction partners have other-focused anxiety, which predicts fewer and lower-quality interactions. That is, those who worry about their own performance in an interaction assume their interaction partner has little faith in them as well, which leads people to choke or disengage, thereby preventing warm, smooth interaction. Because majorities are usually the ones who bring self-focused anxiety to an interaction, the research details one route by which majorities’ chronic anxieties can derail smooth intergroup functioning (Critcher, Mazziotta, Dovidio, & Brown, in prep).

 

Manuscripts under revision

 

Critcher, C. R., & Ferguson, M. J. (invited revision). “Whether I like it or not, it’s important”: Implicit importance of regulatory means predicts effective self-regulation. Journal of Personality and Social Psychology.

 

Jung, M. H., & Critcher, C. R. (invited revision). How encouraging niceness can incentivize nastiness: An unintended consequence of advertising reform. Journal of Marketing Research.

 

Manuscripts under review

 

Critcher, C. R., & Lee, C. J. (passed initial editorial review, sent out for full review). Feeling is believing: Inspiration encourages belief in God. Psychological Science.

 

Helzer, E. G., & Critcher, C. R. (invited submission). What do we evaluate when we evaluate moral character? For K. Gray & J. Graham (Eds.), Atlas of Moral Psychology.

 

Khambatta, P., & Critcher, C. R. (under review). To follow one’s heart or the stars? Anticipated ownership length increases reliance on product ratings. Journal of Consumer Research.

 

Published

 

Critcher, C. R., & Dunning, D. (2015). Self-affirmations provide a broader perspective on self-threat. Personality and Social Psychology Bulletin, 41, 3-18.

 

Critcher, C. R., Dunning, D., & Rom, S. (2015). Causal trait theories: A new form of person knowledge that explains egocentric pattern projection. Journal of Personality and Social Psychology, 108, 400-416.

 

Critcher, C. R., & Dunning, D. (2014). Thinking about others vs. another: Three reasons judgments about collectives and individuals differ. Social and Personality Psychology Compass, 8, 687-698.  

 

Critcher, C. R., & Ferguson, M. J. (2014). The cost of keeping it hidden: Decomposing concealment reveals what makes it depleting. Journal of Experimental Psychology: General, 143, 721-735.

 

Critcher, C. R., & Risen, J. L. (2014). If he can do it, so can they: Incidental exposure to counterstereotypically-successful exemplars prompts automatic inferences. Journal of Personality and Social Psychology, 106, 359-379.

 

Critcher, C. R., & Rosenzweig, E. L. (2014). The performance heuristic: When past success is misguidedly projected into the future. Journal of Experimental Psychology: General, 143, 480-485.

 

Critcher, C. R., & Zayas, V. (2014). The involuntary excluder effect: Those included by an excluder are seen as exclusive themselves. Journal of Personality and Social Psychology, 107, 454-474.

 

Rosenzweig, E., & Critcher, C. R. (2014). Decomposing forecasting: The salience-assessment-weighting (SAW) model. Current Directions in Psychological Science, 23, 368-373.

 

Critcher, C. R., & Dunning, D. (2013). Predicting persons’ versus a person’s goodness: Forecasts diverge for populations versus individuals. Journal of Personality and Social Psychology, 104, 28-44.

 

Critcher, C. R., & Inbar, Y. (2013). When does impulsivity exculpate vs. incriminate? The Jury Expert, 25(5), 19-24. (invited target article, with commentaries and reply)

 

Critcher, C. R., Inbar, Y., & Pizarro, D. A. (2013). How quick decisions illuminate moral character. Social Psychological and Personality Science, 4, 308-315.

 

Critcher, C. R., & Dunning, D. (2011). No good deed goes unquestioned: Cynical reconstruals maintain belief in the power of self-interest. Journal of Experimental Social Psychology, 47, 1207-1213.

 

Critcher, C. R., & Ferguson, M. J. (2011). Affect in the abstract: Abstract mindsets promote sensitivity to affect. Journal of Experimental Social Psychology, 47, 1185-1191.

 

Critcher, C. R., Helzer, E. G., & Dunning, D. (2011). Self-enhancement via redefinition: Defining social concepts to ensure positive views of self. In M. D. Alicke, & C. Sedikides (Eds.), Handbook of self-enhancement and self-protection (pp. 69-91). New York, NY: The Guilford Press.

 

Risen, J. L., & Critcher, C. R. (2011). Visceral fit: While in a visceral state, associated states of the world seem more likely. Journal of Personality and Social Psychology, 100, 777-793. [Editors’ Choice. (2011). Science, 332, 398]

 

Critcher, C. R., Dunning, D., & Armor, D. A. (2010). When self-affirmations reduce defensiveness: Timing is key. Personality and Social Psychology Bulletin, 36, 947-959.

 

Critcher, C. R., & Gilovich, T. (2010).  Inferring attitudes from mindwandering. Personality and Social Psychology Bulletin, 36, 1255-1266.

 

Critcher, C. R., & Dunning, D. (2009). Egocentric pattern projection: How implicit personality theories recapitulate the geography of the self. Journal of Personality and Social Psychology, 97, 1-16.

 

Critcher, C. R., & Dunning, D. (2009). How chronic self-views influence (and mislead) self-assessments of performance: Self-views shape bottom-up experiences with the task. Journal of Personality and Social Psychology, 97, 931-945.

 

Critcher, C. R., Huber, M., Ho, A. K., & Koleva, S. P. (2009). Political orientation and ideological inconsistencies: (Dis)comfort with value tradeoffs.  Social Justice Research, 22, 181-205.

 

Critcher, C. R., & Gilovich, T. (2008). Incidental environmental anchors. Journal of Behavioral Decision Making, 21, 241-251.

 

Critcher, C. R., & Pizarro, D. A. (2008). Paying for someone else’s mistake: The effect of bystander negligence on perpetrator blame. Personality and Social Psychology Bulletin, 34, 1357-1370.

 

Critcher, C. R. (2007). Gain-loss framing. In R. F. Baumeister & K. D. Vohs (Eds.), Encyclopedia of Social Psychology (pp. 371-372). Thousand Oaks, CA: Sage Publications.

 

Manuscripts in Preparation (Data collection currently complete)

 

Critcher, C. R., & Dunning, D. When and why I think I’m better than them, but not him.

 

Critcher, C. R., Helzer, E., Tannenbaum, D., & Pizarro, D. A. Moral evaluations depend upon mindreading moral occurrent beliefs.

 

Critcher, C. R., Mazziotta, A., Dovidio, J. F., & Brown, R. J. Intergroup differences in intergroup anxiety: How majorities’ self-focused anxiety disrupts intergroup contact.

 

Critcher, C. R., & Rosenzweig, E. L. Attractors: Incidental values that influence forecasts of change.

 

Perfecto, H., & Critcher, C. R. Volume estimation as simulated judgment.

 

Roeder, S., & Critcher, C. R. “You just had to be there”: Why people sound underwhelmed by treasured experiences.

 

Manuscripts in Progress (Data collected, but additional studies ongoing or planned)

 

Critcher, C. R., Cowan, J., & Bahia, S. Increasing decision confidence but not decision competence: The illusory benefit of structuring attribute information.

 

Critcher, C. R., & Inbar, Y. A moral or an investment decision? Framing influences risk aversion.

 

Critcher, C. R., & Tandanu, J. Unpacked randomness: Why craps players wish they could roll one die at a time.

 

Jung, M. H., Critcher, C. R., Wong, P., & Nelson, L. D. Revisiting a decision by sampling origin of loss aversion: The range, not ranking, of outcomes predicts sensitivity to gains and losses. 

 

Mohan, B., & Critcher, C. R. The unlimited paradox: Consumers find unlimited plans more attractive than capped plans, but expect to pay less for them.

 

Moon, A., Gan, M., & Critcher, C. R. Passing (on) judgment: Others judge us less extremely than we think.

 

O’Donnell, M., Nelson, L. D., & Critcher, C. R. The role of set completion in the offer framing effect and preference for variety.

 

Reit, E., & Critcher, C. R. The prevalence heuristic: People overestimate how much others prefer commonly-available options.

 

Rosenzweig, E., & Critcher, C. R. Same wrong, different restitution: Heightened sensitivity to inequity in the context of apology.