Religious people sometimes think, remember, and reason about God in ways that go contrary to their professed religious “beliefs.” Which raises a puzzle: what then shall we say about those “beliefs”? Do people really believe them? Or is there another way to account for what's going on?
I’d like to talk frankly about why research on the topic of self-deception hasn’t made much progress—as far as I can see—despite a steady stream of ongoing interest. There’s been some excellent work, but it doesn’t seem to me that the topic on the whole has moved forward all that much.
In both philosophy and psychology there has been a tendency to talk about self-deception as if it were one thing. If it’s one thing, we can just figure out what that is. Right?
The philosopher’s approach is to try to solve the paradox of self-deception and come up with an analysis of self-deception in terms of necessary and/or sufficient conditions.
The psychologist’s approach is to try to demonstrate experimentally that certain behaviors require positing a mental state of “self-deception.” (This approach is excellently illustrated by the classic 1979 article by Ruben Gur and Harold Sackeim, entitled “Self-Deception: A Concept in Search of a Phenomenon.”)
Neither approach is exactly wrong. But here’s the problem. “Self-deception” is a term that only loosely refers. If we were to survey all the psychological states that the term can aptly be applied to, we’d find vast differences within that set of perfectly real phenomena. There are, at least, what I would call classic self-deception, self-inflation bias, semi-pretense, and false emotion, all of which seem to me to be distinct—but all of which get loosely termed “self-deception.” I’ll turn to those shortly. For now, let’s stay focused on the methodological problem.
The implicit assumption that self-deception is a unified phenomenon creates problems for philosophers and psychologists in different ways.
For philosophers: any good analysis of one of the self-deceptive phenomena (which ends up being presented as an “analysis of self-deception [full stop]”) is subject to apparent counterexamples from someone who points to one of the other self-deceptive phenomena. For example, theorist number 1 (who has classic self-deception in mind) may produce an “analysis of self-deception” to which theorist number 2 (who has false emotion in mind) presents a “counterexample.” The two theorists are in fact talking past each other without realizing it, because of the mistaken assumption of unity; after all, both are ostensibly talking about “self-deception.”
For psychologists: the problem is even simpler to describe. Bodies of data can seem to contradict when they in fact don’t, simply because a data set about one phenomenon is labeled under the same heading (“self-deception”) as a data set that’s in fact about a distinct phenomenon. Something like this may be what happened in the 1990s debate between Shelley Taylor (and colleagues) and Randy Colvin (and colleagues). The “self-deceptive” phenomena that Taylor found conducive to success and happiness are just not the same mental states as the “self-deceptive” phenomena that Colvin found detrimental to social well-being. (I do some untangling of that particular debate in “Self-Deception Won’t Make You Happy,” in case you’re interested.)
This whole situation calls to mind something Robert Trivers once told me. He said that what I should be doing with my time and philosophical ability is logically analyzing and distinguishing different kinds of self-deception, which could benefit everyone. I took him to be implying that it was a mistake to look for one holy-grail analysis of self-deception.
So here I’d like to make some progress on his suggestion. The following four phenomena are distinct, although they could all (in some cases more loosely than others) be called “self-deception.”
Classic self-deception. This is a phenomenon of motivated irrationality, in which motivational forces somehow drive the agent to form a belief that runs contrary to the wealth of evidence she possesses. The mind is in some sense divided. Thus, classic self-deception is rightly said to involve some sort of epistemic tension. This is the phenomenon that philosophers are most focused on, since it seems paradoxical. But being focused on classic self-deception hasn’t saved us from accidentally labeling cases of the other phenomena as “self-deception.”
Self-inflation bias. We often hear statistics along the following lines. “94% of college professors believe they are above average in their scholarly abilities.” “85% of people think they are above average at driving.” And so on. These statistics are evidence of a general tendency people have to think better of themselves than rigorous analysis of the evidence would warrant. Importantly, I don’t think this self-inflation bias needs to involve an epistemic tension the way classic self-deception does. The self-inflator is wholehearted in her high opinion of herself. Furthermore, this general tendency isn’t motivated by specific desires and insecurities, as is the case in classic self-deception.
Semi-pretense. Often we go about imitating others without any intention to imitate or pretend. Sartre’s waiter is a great example of this. We take on the trappings of a certain character, without even being aware that that’s what’s happening. If the character I’m unwittingly imitating is inappropriate to my actual circumstances, someone might say I’m deceiving myself. But I prefer to call this phenomenon semi-pretense, because it’s in between plain action and full pretending. (But note that semi-pretense can contribute to classic self-deception, if the agent goes on to form beliefs on the basis of the semi-pretense.)
False emotion. As Robert Frank discusses in Passions within Reason, people often have emotions for strategic social reasons. Often that’s good. We may cry because we genuinely need help. But crying may well be disproportionate to the amount of genuine need—a way of manipulating other parties into doing one’s will. Importantly, such manipulative false emotion needn’t be (and perhaps usually isn’t) consciously planned. The agent is convinced by her own false emotion! This, again, may be loosely called self-deception, although it is rather different from the preceding three phenomena.
There are other distinct phenomena, too, that pre-theoretically get thrown into the basket of “self-deception.” Progress will require greater precision going forward.
I’d like to close this blog with a note to anyone who, like me, takes an interest in the evolutionary status of “self-deception.” I have argued in various places that self-deception is not an adaptation evolved by natural selection to serve some function. Rather, I have said self-deception is a spandrel, which means it’s a structural byproduct of other features of the human organism. My view has been that features of mind that are necessary for rational cognition in a finite being with urgent needs yield a capacity for self-deception as a byproduct. On this view, self-deception wasn’t selected for, but it also couldn’t be selected out, on pain of losing some of the beneficial features of which it’s a byproduct. This view seems opposed to the view of Robert Trivers, who maintains that self-deception is an adaptation to facilitate interpersonal deception. But it could be, in light of the foregoing distinctions, that Trivers and I were talking past each other.
I hereby wish to suggest the following. Self-inflation bias and false emotion are evolutionary adaptations that serve interpersonal deception, as Trivers has theorized. But classic self-deception and semi-pretense are in fact spandrels. Whether or not I am right in these particular hypotheses, I think the methodological point of this blog still stands.