Featured Philosop-her: Susanna Siegel


Susanna Siegel is Edgar Pierce Professor of Philosophy at Harvard University. She is author of The Contents of Visual Experience (Oxford 2010), a book about perception and intentionality. She has recently published articles about perceptual justification, the influences of hopes, fears, beliefs, and prior knowledge on perception, wishful thinking, and the relationship between affordances and perception. She teaches a course in the General Education program on social protest and political philosophy, and contributes to the program in Mind, Brain, and Behavior.  She is committed to fostering analytic philosophy in Spanish, and together with Diana Acosta and Patricia Marechal, is hosting a series of philosophy workshops in Spanish at Harvard. The second workshop will take place in March of 2015.

The Rationality of Perception

Susanna Siegel

I’d like to thank Meena for starting and hosting PhilosopHer.

For a long time I’ve been interested in perception. Much work in Anglophone philosophy of perception has focused on two kinds of perception: “good” cases where perception puts us in contact with reality, and “bad” cases where we are unwittingly hallucinating or under a visual illusion of some sort. The distinction between good and bad cases is important. It is the start of enduring philosophical problems of skepticism about the external world. And if there were no good cases, both science and common sense would be called into question. Both rely heavily on observation.

I’m interested in a third category of perceptual states. It cross-cuts the good and bad cases, and it can help analyze desire, fear, and a range of cultural phenomena. The third category arises when perception is a sham: it purports to present things as they are, but behind the scenes, your own psychological states are stacking the deck, so that the way things appear ends up congruent with what you want, hope, fear, suspect, or already believe.

Here’s an example from Cordelia Fine’s (2010) book Delusions of Gender. She starts Chapter 1 with a quote from Jan Morris, a male-to-female transsexual describing her post-transition experiences in her autobiography Conundrum (1987). Morris writes,

“The more I was treated as a woman, the more woman I became. I adapted willy-nilly. If I was assumed to be incompetent at reversing cars, or opening bottles, oddly incompetent I found myself. If a case was thought too heavy for me, inexplicably I found it so myself.”

How does Morris’s being presumed incompetent by others come to be part of her own outlook on the world? This happens in part by influencing her perception of things in the world. Consider Morris’s perception of heaviness. This perception is a sham. It’s no surprise if the perceived heaviness of a suitcase is a function of how strong you are – even if, from your point of view, you seem to be simply taking in a feature of the suitcase. It’s more surprising if its perceived weight is a function of how strong you think you are – and if that belief can in turn be influenced so fluidly by what hordes of other people presume about you, regardless of your physical strength. What other people presume about you has nothing to do with the heaviness of the suitcase, and everything to do with social relationships.

So suppose Morris finds the suitcase to be heavy when she tries to lift it. Does Morris’s experience of its heaviness make it reasonable for her to believe that the suitcase is heavy? Yes or No? Both answers can seem plausible. No: It is fishy for her to believe that the suitcase is heavy, when what led to her perception of heaviness was simply her internalizing other people’s ill-founded underestimation of her competence. Her perception of heaviness is akin to a rationalization of the outlook according to which she can’t lift it.  But at the same time, Yes: What else is she supposed to think about how heavy the suitcase is? If the suitcase feels heavy, then so long as she isn’t aware of any reason to discount the feeling, isn’t it reasonable for her to believe that it really is heavy? (Perhaps later, on reflection, Morris becomes aware of such reasons, but let’s focus on the moments before she is aware of any such reason.)  The philosophical problem is that both Yes and No answers to this question seem plausible.

This epistemological problem takes many forms. It can arise when the very contents of perceptual experience are influenced by what the subject fears, believes, wants, suspects, or knows. Some influences on the contents of experience have come to be called ‘cognitive penetration.’ This label applies widely. It isn’t always exactly what Fodor, Pylyshyn, and Churchland debated in the 1980s under that label. (I talk a bit about the differences in “Epistemic Evaluability and Perceptual Farce”.)

But the same problem can arise from psychological influences on the role of perceptual experience in what the subject goes on to believe.  For all Morris says (“I found it to be heavy…”), perhaps the suitcase didn’t feel heavy, but she just thought it did. Morris might be making an introspective error about how heavy the suitcase feels. Or perhaps introspection isn’t involved at all, and Morris is jumping to the conclusion that the suitcase is heavy. Due to her outlook on herself (freshly inherited from those who presume she’s weak or incompetent), she believes that it is heavy, but if she were guided by her experience, she’d find the suitcase easy to lift.  Here, the perverse, ill-founded outlook makes her discount her experiences, preventing them from playing the prized epistemic role of regulating her beliefs.

So there are a number of ways in which perception might be influenced by the view of herself that Morris is gradually internalizing. This view of herself is obstructing her access to the world. And it is doing that, by making Morris perceive the world as the world would be, if the patriarchal presumptions that she encounters were true. If she were weak and incompetent, then the suitcase really would be heavy.

This kind of sham opens the possibility that perceptual experience itself might be epistemically evaluable. Philosophers often distinguish perception from reasoning. We reason from information we have already, whereas perception is a way of taking in new information. But in a perceptual sham, perception is hijacked as a means of apparently confirming the outlook that shapes it.

We’re familiar with the idea that perceptual judgments can be epistemically better or worse (more or less reasonable). In my book The Rationality of Perception (in draft), I argue that even perceptual experiences can be epistemically evaluable, due to the ways they are formed. Not every perceptual experience is epistemically evaluable. But some are. The epistemically evaluable experiences are outgrowths of the rest of our outlook on the world. When that outlook is epistemically ill-founded, so are the experiences it helps generate.

If experiences could be epistemically evaluable, that would solve the epistemological problem posed by perceptual shams. Does Morris have reason from her experience to believe the suitcase is heavy, if it feels heavy and she can see no reason to doubt her experiences? No. Her experience is an outgrowth of an epistemically poor outlook on the world, according to which she is incompetent in various ways. Her experience was formed unreasonably, due to influences of this outlook.  It is like an unjustified belief. It’s unsuitable for transmitting justification to subsequent beliefs about how heavy the suitcase is.

If I had more space, I’d discuss cases where perceptual experiences are outgrowths of well-founded outlooks on the world. Think of all the intelligence involved in the radiologist’s knowing which parts of an X-ray to focus on when she is studying it to see if there’s a tumor.  But here I’ll stick with the putative cases of ill-founded experiences.

If experiences can be epistemically evaluable due to the way they are formed, what exactly is it about the way that they’re formed that makes them epistemically evaluable?

A first answer is that some experiences result from inferences. What kind of inferences? The kind that bear on the rationality or irrationality of the subject. This kind contrasts with many pre-perceptual inferences discussed by psychologists, from Helmholtz in the 19th century to today’s Bayesian theories of perceptual processing. Those inferences do not bear on the rationality or irrationality of the subject. I think that in addition to resulting from Helmholtzian inferences, experiences can also in principle result from an epistemically more significant kind of inference.

A second idea is that perceptual experiences can be epistemically evaluable by virtue of their relationships to fears or desires (including hopes and preferences). What kind of relationship? It doesn’t have a label, the way inference does. But we might call it ‘elaboration’. Consider experiences that are congruent with what you fear or want. For instance, an acrophobe (someone afraid of heights) on a balcony will typically overestimate its height from the ground. (Stefanucci, J. K., and Proffitt, D. R. (2009). The roles of altitude and fear in the perception of height. Journal of Experimental Psychology: Human Perception and Performance 35 (2): 424–438.) Non-acrophobes are also poor at estimating height. But the misestimates of acrophobes are in the direction of exaggerating the distance to the ground. Why? One explanation is that a greater distance from the ground is more congruent with their fear than a smaller distance. If fear makes the chance of falling salient to you, and the greater the height, the more dangerous the fall, then an experience of a higher balcony rationalizes the fear. It makes the fear seem reasonable.

A similar phenomenon is found in desire. An advertiser might try to move you to buy something, by getting you to want it. How do they get you to want it? They present it in a way they think you will find desirable. Tim Scanlon and Peter Railton have emphasized ways in which desires are closely related to representations of the world that are congruent with them. But now consider a case where a desire you have already influences how things appear to you. You’re tired, you want to plop down and rest. You see a bed. It looks fluffy! It might even look as if it is beckoning you to plop down and rest. Here, the way the bed looks to you could be an outgrowth of your desire to rest. The perceptual experience of the bed as fluffy is an outgrowth of your desire. The outgrowth could operate via attention – you attend to features of the bed that it really has. Or it could operate in some other way: your experience exaggerates the fluffiness of the bed.

How could these relationships between fear and experience, or between desire and experience, be epistemically evaluable? If fears can be well-founded or ill-founded, then when the fear is elaborated into an experience, the experience could inherit the ill-founded or well-founded character of the fear. What about desire? It’s a long-standing question in moral philosophy whether desires can be fitting or ill-fitting. In the special case of a preference to maintain a belief, the notion of ill-fittingness is easy to grasp. (This preference is central to the analysis of belief polarization and other forms of motivated cognition).

Here’s a hypothesis. The elaboration of fear or desire into experience is mediated by confidence that things in the world are congruent with the fear or desire. Whether or not fear or desire are independently well-fitting or ill-fitting, the confidence that the world is congruent with the fear or desire is clearly something that can be more or less reasonable.

What does this analysis say about Morris? Morris ‘adapts willy-nilly’ to the presumption of incompetence that she finds herself subject to. How could other people’s presumptions influence her perception? They could influence it by influencing her confidence in those presumptions. Surrounded by social reality where those presumptions operate, one’s own confidence in those presumptions could easily gravitate upward. That is one kind of social construction in action. And once one’s confidence in such presumptions gravitates upward, it can mediate the influence of fear and preference on perception. The situation is ripe for elaboration and inference – two routes by which perception itself can be drawn into the domain of epistemic norms.

Featured Philosop-her: Lisa Bortolotti


Lisa Bortolotti is Professor of Philosophy at the University of Birmingham. She works in the philosophy of cognitive sciences and in biomedical ethics. Her main research interests lie in irrational beliefs. From September 2013 to September 2014 she held an AHRC Fellowship. In October 2014 she started a new project, Pragmatic and Epistemic Role of Factually Erroneous Cognitions and Thoughts (PERFECT), funded by a European Research Council Consolidator Grant. Her monograph on delusions, Delusions and Other Irrational Beliefs (OUP 2009), won the American Philosophical Association Book Prize in 2011. Her new book, Irrationality (Polity 2014), has just been released.


Reverse Othello Syndrome and Epistemic Innocence

Lisa Bortolotti

Can delusions that play a defensive function (hereafter, motivated delusions) have epistemic benefits? Arguably, they can prevent loss of self-esteem and help manage strong negative emotions. The claim that delusions are psychologically adaptive was recently discussed in the psychological literature (McKay and Kinsbourne 2010; McKay and Dennett 2009).

Without denying that delusions are typically false and irrational, and that they compromise good functioning, my goal here is to ask whether the psychological benefits attributed to motivated delusions can translate into epistemic benefits. Thinking about delusions in terms of potential epistemic benefits invites a reflection on the relevance of contextual factors in epistemic evaluation.


Motivated delusions

Clinical delusions are symptoms of psychiatric disorders such as schizophrenia, dementia, and delusional disorders. Motivated delusions can be characterised as irrational beliefs, in that they are implausible, they do not accurately represent reality, they do not respond to evidence, and they may not be consistently reflected in behaviour. An example of a monothematic delusion with a defensive function that emerged as a result of brain damage is the case of Reverse Othello syndrome (Butler 2000). A man, BX, delusionally believed that he was in a happy relationship, when in fact his partner had left him.

Butler’s patient was a talented musician who had sustained severe head injuries in a car accident. The accident left him quadriplegic, unable to speak without reliance on an electronic communicator. One year after his injury, the patient developed a delusional system that revolved around the continuing fidelity of his partner (who had in fact severed all contact with him soon after his accident). The patient became convinced that he and his former partner had recently married, and he was eager to persuade others that he now felt sexually fulfilled. (McKay et al. 2005)

BX’s belief in the fidelity of his previous partner and the continued success of his relationship was very resistant to counterevidence. BX believed that his relationship was going from strength to strength, even though his former partner did not want to communicate with him and was in a relationship with someone else.

The delusion seemed to protect BX from an undesirable truth while he was coping with the consequences of permanent disability. Gradually BX developed the delusion that his former romantic partner was still in a relationship with him, and also that they had recently married. While still in hospital, he often asked to go home so that he could see his wife. Butler argues that the delusion relieved the sense of loss that BX was feeling at the time.

[A]ppearance [of delusions] may mark an adaptive attempt to regain intrapsychic coherence and to confer meaning on otherwise catastrophic loss or emptiness (Butler 2000).

As gradually as it had appeared, BX’s delusional system dissolved, and by the end of the process BX realised that his former partner had moved on, was not married to him, and had no intention to go back to him. This happened roughly at the time when BX had completed his physical rehabilitation and was ready to return home.

Butler argues that a psychological defence against depression contributed to the fixity and elaboration of BX’s delusional system. The delusion kept BX’s depression at bay at a very critical time. Acknowledging the end of his romantic relationship might have been disastrous at a time when he was coping with the realisation of his new disability and its effects on his life.


Delusions as a shear pin

According to the “shear-pin” account developed by McKay and Dennett, some false beliefs that help manage negative emotions and avoid low self-esteem and depression can count as psychologically adaptive. McKay and Dennett suggest that, in situations of extreme stress, motivational influences are allowed to intervene in the process of belief evaluation, causing a breakage.

What might count as a doxastic analogue of shear pin breakage? We envision doxastic shear pins as components of belief evaluation machinery that are “designed” to break in situations of extreme psychological stress (analogous to the mechanical overload that breaks a shear pin or the power surge that blows a fuse). Perhaps the normal function (both normatively and statistically construed) of such components would be to constrain the influence of motivational processes on belief formation. Breakage of such components, therefore, might permit the formation and maintenance of comforting misbeliefs – beliefs that would ordinarily be rejected as ungrounded, but that would facilitate the negotiation of overwhelming circumstances (perhaps by enabling the management of powerful negative emotions) and that would thus be adaptive in such extraordinary circumstances. (McKay and Dennett 2009)

Could motivated delusions be adaptive misbeliefs? The mechanism that inhibits motivational influences on belief evaluation is compromised, and as a result of this, motivated delusions emerge, making negative emotions easier to manage and depression less likely to ensue. McKay and Dennett consider the possibility that motivated delusions count as adaptive misbeliefs, but interestingly argue that the extent to which desires are allowed to influence belief formation in the case of delusions is pathological. Delusions are the result of the maladaptive version of a psychologically adaptive mechanism.

According to the shear-pin account, the situation in which adaptive misbeliefs emerge is already seriously compromised. The premise is that the person is already experiencing high levels of distress, and can come to more serious harm unless her negative emotions are managed. Thus, the benefit here amounts to the prevention of more serious harm than the one the person is already experiencing. In other words, the adaptive misbelief is equivalent to an emergency response.


Epistemic innocence

According to the legal notion of justification defence, an act does not constitute an offence when it prevents serious harm from occurring and other ways of preventing the harm were not available to the agent at the time. The act is seen as an acceptable response to an emergency. I want to apply this notion of innocence to the domain of epistemic evaluation. In some contexts, a misbelief may help avoid worse epistemic consequences, and thus qualifies as an acceptable response to an emergency. A delusion is epistemically innocent if adopting it delivers a significant epistemic benefit that could not be obtained otherwise.

If a belief helps manage negative emotions, protect self-esteem, and relieve anxiety and stress (e.g., “I am now severely disabled, but my girlfriend still loves me”), it will have positive effects not just on the agent’s wellbeing but also on her capacity to function well epistemically. By having the belief, a person will be more likely to engage with her surrounding physical and social environment in a way that is conducive to epistemic achievements. Consequences of stress and anxiety include lack of concentration, irritability, social isolation, and emotional disturbances. These in turn negatively affect socialisation, making interaction with other people less frequent and less conducive to useful feedback on existing beliefs, and to the fruitful exchange of relevant information. Due to reduced socialisation and engagement, the acquisition and retention of knowledge is compromised and intellectual virtues are not exercised.

Notice that the delusional belief may bring relief at the time when it is adopted, due to the person being already in an epistemically compromised situation, but it often increases rather than reduces stress and anxiety when it is maintained in the face of conflicting evidence and challenges from third parties. Stress and anxiety no longer come from the negative emotions associated with trauma or loss (e.g., “My girlfriend left me”), but from the fact that the content of the delusion clashes with aspects of the person’s experience, conflicts with other things she believes or feels, and alienates other people. For all of these reasons, anxiety and depression do not always lessen after a delusion is adopted; they can also heighten. My claim here is modest: delusions can be epistemically innocent when they are adopted; and their epistemic innocence does not mean that they are also epistemically justified, or epistemically good overall.

Couldn’t the person adopt a belief that has the same epistemic benefits as the delusional one but fewer costs? One suggestion emerging from the empirical literature is that, in the extraordinary circumstances in which the agent finds herself, no other belief with the relevant characteristics is available. A belief that is more tightly constrained by evidence (e.g., “My girlfriend left me”) than the delusional one may not be as well placed as the delusional one to play a defensive function, in terms of defusing the negative emotions caused by trauma and disability. The non-delusional belief lacking those psychological benefits may also lack the epistemic benefits associated with the delusional belief.


Conclusions and implications

I suggested that motivated delusions have obvious epistemic costs but can also have a significant epistemic benefit that would be otherwise unattainable. When we think about adaptive misbeliefs, we usually think in terms of there being a trade-off. Believing something false can make us feel better, but it leads us further away from the truth.

The case for the potential epistemic innocence of motivated delusions puts some pressure on the trade-off view. It would be misleading to believe that motivated delusions provide anxiety relief and protect self-esteem by compromising access to the truth. Rather, in the account I have sketched, the delusion is adopted at a time when access to the truth is already compromised, and it would be further compromised unless negative emotions were effectively managed. As a temporary response to an emergency, motivated delusions play a useful epistemic function.

In the case of BX with Reverse Othello syndrome, the clinical team decided not to challenge the delusion after they realised that there were no other psychotic symptoms and the delusion was playing a defensive function.

Persistent attempts […] to challenge B.X.’s delusional beliefs were unsuccessful and usually led him to become tearful and agitated.

All members of the treating team were instructed not to aggressively challenge B.X.’s delusional beliefs but were also cautioned not to become complicit in his elaboration of them. (Butler 2000)

A clinical team might decide not to challenge a delusion if they think that it will be ineffective or disruptive, or if there is a high risk of depression ensuing from the agent’s insight into her mental illness. My discussion suggests that, in these contexts, challenging the delusion would not be advisable from an epistemic point of view either. At the critical stage, motivated delusions may serve a useful epistemic function, allowing the agent to overcome negative feelings or low self-esteem that would prevent her from functioning as an epistemic agent.



Butler, P. (2000). Reverse Othello syndrome subsequent to traumatic brain injury. Psychiatry: Interpersonal and Biological Processes 63 (1): 85–92.

McKay, R. and Dennett, D. (2009). The evolution of misbelief. Behavioral and Brain Sciences 32 (6): 493–561.

McKay, R. and Kinsbourne, M. (2010). Confabulation, delusions and anosognosia: Motivational factors and false claims. Cognitive Neuropsychiatry 15 (1): 288–318.



For the research on which this post is based, I acknowledge the support of the Arts and Humanities Research Council (The Epistemic Innocence of Imperfect Cognitions, grant number: AH/K003615/1). A more detailed argument will appear in an article entitled “The Epistemic Innocence of Motivated Delusions”, forthcoming in Consciousness & Cognition.