Reviewed by Guy Lancaster
In the early twentieth century, there arose a rather melodramatic school of mystery writing popularly dubbed the “Had I But Known” school, a term derived from a particular technique of foreshadowing. For example, in the very first paragraph of her 1937 novel, Murder à la Richelieu, novelist Anita Blackmon has her narrator convey the following: “Had I suspected the orgy of bloodshed upon which we were about to embark, I should then and there, in spite of my bulk and an arthritic knee, have taken shrieking to my heels.” Given the personalities of these narrators, we can well believe that they would have chosen different paths of action had they been forewarned about what awaited them. But can we say that about everyone? Conservatives regularly insist that those figures whose statuesque presence in the public square has increasingly become the site of controversy and contention—people like Edward Colston, Robert E. Lee, and King Leopold II—simply could not have known what we moderns take for granted—that slavery and genocide were wrong. Had they but known the full horrors of those systems they defended, we are assured, they would have taken shrieking to their heels, far, far away from all that profit they were making.
But this assertion raises questions both epistemological and moral. First, can the average person, not to mention the man whose whole social and economic standing depends upon the suffering of others, overcome what philosopher Endre Begby describes as “the intrinsic capacity limitations of the human mind, as well as the particular and highly contingent informational constraints that ordinary agents are forced to operate under”? (2) Second, to what extent does greater knowledge of reality lead to greater morality? Begby concerns himself only with that first question in his book, Prejudice: A Study in Non-Ideal Epistemology, although the second nonetheless haunts his pages like a ghost, glimpsed only occasionally from the corner of the eye.
To start, Begby defines prejudice as a “negatively charged stereotype, targeting some group of people, and, derivatively, the individuals who comprise this group,” but he emphasizes his intention to “divest the concept of prejudice of its moralistic and ‘shaming’ tone,” treating prejudice as a “low-threshold concept” in which most people will engage one way or another (8–9). Prejudice, Begby insists, can emerge through the same process by which other justified beliefs arise—from considering the evidence at hand. After all, there exist “severe limitations to what we can do on our own, epistemically speaking, to supply ourselves with information,” meaning that we rely upon the same processes and set of informants for both our prejudicial and non-prejudicial beliefs (25). Moreover, social cognition depends heavily upon the use of generalizations for filtering and weighing input; the creation of stereotypes is simply part of how human beings cope with otherwise overwhelming amounts of information in real time.
Given not only the limits of human cognitive abilities, but also the fact that knowledge claims are often entangled with matters of power and ideology, Begby advocates for what he calls “endogenous non-ideality,” or “a systematic attempt to render epistemological norms relevant and applicable to the human situation” (45). This does not entail the abandonment of goal-setting but, instead, constitutes “simply the idea that which beliefs you are justified in forming must presumably reflect the particular, contingent subset of the total evidence that is available to you” (55), while also acknowledging the fact that “once prejudiced beliefs enter into our cognitive economy, they can change the epistemological landscape in dramatic ways” (60).
So how are prejudices acquired? Largely through induction and testimony. Regarding the former, Begby gives as an example the young student Johnny, who observes that the girls in his class do not exhibit the same aptitude for math as the boys and so concludes that girls are simply not as mathematically inclined. At this stage in his life, he knows nothing about how girls may be discouraged from developing math skills or graded by prejudicial teachers—he forms a proposition based upon the only evidence he has, and if he did not arrive at that conclusion, “he would essentially be guilty of throwing away evidence for no good reason” (65). Testimony, meanwhile, offers us information beyond what we could gain through first-hand experience (from someone with suitable epistemic authority) and so, too, must be taken as evidence. If we want to discount induction and testimony, Begby states, then “we run the risk of ruling out sources of evidence or modes of epistemic reasoning which serve us eminently well in other domains” (75).
But prejudice, once acquired, exerts influence over how new evidence is interpreted, likely making one less sensitive to information that could contradict such generalizations. Because prejudices do not take the form of strictly universal generalizations, they prove more epistemically robust, able to accommodate exceptions (like the odd female math wizard). Moreover, evidence provided by testimony may often contain what Begby calls “evidential preemption,” or “a subsidiary mechanism which tacks naturally on to testimony, and which may serve to effectively ‘inoculate’ recipients against future contradictory evidence” (95). An example might be the statement: “Environmentalists will tell you that we can only protect the economy by combating climate change, but what they really want is to destroy our businesses with their schemes.” When some environmentalist does address climate change in the predicted manner, it can motivate the prejudiced listener to increase his confidence in already held beliefs about that particular group. And that is to say nothing of the epistemic bubbles that foster conspiratorial thinking and dismiss all attempts to counter such thinking as “fake news.” Beyond these epistemic bubbles, prejudice can sometimes be maintained by reference to the “common ground,” or the mutual knowledge everyone assumes everyone else to possess, so that people are motivated to act in accordance with certain social scripts, even if no one in the group shares the beliefs underlying that script, precisely because “the risk of social sanction comes with violating the script or even just explicitly raising questions about its validity” (123).
Given the recognized fallacies in human reasoning, some governmental agencies have attempted to neutralize prejudice by designing race-blind algorithms to address matters such as criminal sentencing, but as Begby notes, “given the United States’ long history of institutionalized racism, one doesn’t have to use race as a factor in order nonetheless to obtain essentially the same pattern of results,” with other factors serving as easy proxies for race (146). So what then? Does morality demand that we place constraints upon our abilities to form (or act upon) beliefs in certain situations? No, Begby claims, observing that in many cases “moral and non-moral reasoning will be deeply entangled, in the sense that we cannot give content to a subject’s moral assessment of the situation without assuming a fair bit about her other doxastic commitments. In such cases, we cannot expect the moral stakes to provide an independent rational check on our belief formation policies” (171). Instead, Begby takes a page from tort law, asserting that moral responsibility for a wrong is not exhausted by the act of attaching blame to a subject; victims’ rights can be secured without the identification of an epistemically blameworthy perpetrator.
Begby’s work really seeks to expand upon the bombshell concept philosopher Charles W. Mills dropped in his 1997 book, The Racial Contract—namely, the idea of an “epistemology of ignorance,” or the overarching system of knowledge construction that keeps whites largely ignorant of the mechanisms by which their privilege, vis-à-vis peoples of color, is produced and maintained. Ignorance can also be a characteristic of those who experience wrongs, as philosopher Miranda Fricker noted in her 2007 book, Epistemic Injustice: Power and the Ethics of Knowing, wherein she coined the term “hermeneutical injustice” to describe how certain groups may lack the necessary epistemic resources to make sense of their own collective experiences—the inability to attach a name to the injustice they are enduring. In fact, Begby’s own work serves as an important complement to Fricker’s, for, although he does not spell this out explicitly, his framework helps to explain how people can hold the same prejudicial ideas by which they are negatively stereotyped—that is, how people can, as a result of their own epistemic limitations, blame themselves for their own status in society.
But unlike Mills and Fricker, Begby does not center ignorance; rather, he aims his analysis at how knowledge is arrived at in the non-ideal circumstances that reign in our lives. His is not a theory of the lacunae in our understanding, and in this, his writing echoes other recent works, such as Sarah McGrath’s 2019 Moral Knowledge, which argues that human beings acquire moral knowledge in much the same way they acquire ordinary empirical knowledge—and, moreover, that such ordinary empirical knowledge can also function as an important source of moral knowledge, with moral claims being confirmable or disconfirmable by observation or other evidence. Begby also outlines some of the social forces that can lead the individual into narrow epistemic alleyways, although, on that subject, his book is not nearly as robust as sociologist Mikael Klintman’s Knowledge Resistance: How We Avoid Insight from Others (2019), which more explicitly asserts that human beings simply did not evolve as truth-seekers but, rather, as social survival machines—and that truth-maximizing behavior could, in fact, be non-adaptive when compared to the necessity of cooperating with one’s fellow creatures.
However, we need to return to that mute ghost who haunts this book—namely, the question of whether fuller knowledge leads to greater morality. Lurking beneath this schema is the assumption that people might behave more morally were they not subject to such non-ideal epistemic conditions. This assumption calls to mind what philosopher Kate Manne, in Down Girl: The Logic of Misogyny (2018), described as a humanist fallacy—namely, the idea that learning to regard the (racial, national, gendered, etc.) “other” as a full human being would reduce any oppression and violence directed toward that “other.” To the contrary, Manne insists, most perpetrators find manifestations of shared humanity to be more threatening. This accords with a point French thinker Jean-Frédéric Schaub makes in Race Is about Politics: Lessons from History (English translation, 2019)—that the failure of certain groups to conform to stereotypes, rather than the reverse, is what so often provokes a violent reaction.
But this insight can actually align with Begby’s larger thesis, for sometimes the very humanity of people can be wrapped up in the stereotypes we hold about them, even negatively charged stereotypes, so that violations of those expectations can strike one as unnatural or wrong. And this is what makes Begby’s work so informative and such a landmark in non-ideal epistemology, for by demonstrating how prejudice accords with our normal cognitive operations, it centers the humanity of all involved, even those perpetrators of so much so-called “inhumanity.”
12 November 2021