Like Mele and Shepherd, I don’t believe that the news coming out of the cognitive sciences is as bad as some prominent scientists have claimed. The situationist literature on which they focus, for instance, demonstrates only that autonomous agency is sometimes constrained in ways we might not have suspected, not that we are never autonomous agents at all; much the same message is appropriately drawn from work on nonconscious biases and priming. Moreover, I agree with Mele and Shepherd that the same sciences that deliver us this moderately bad news also deliver good news, in the form of findings about how negative influences on our behavior from situational forces and nonconscious biases may be counteracted. However, I think the news is overall less good than they think, and our capacity to counter the negative influences through education (in particular) is more restricted than they claim. I will briefly explain why education is less powerful than Mele and Shepherd believe, before ending with some good news of my own.
After outlining the ways in which situations can exert an unexpectedly large influence on behavior, Mele and Shepherd argue that we can counter these effects when we are aware of them. If I know about the bystander effect, for instance, I can wonder whether it is at work in reducing my motivation to help in a particular case. By imagining how I would react were I the only witness, I might counteract the effect and produce better behavior in myself. I do not deny that this might sometimes work. But I think there are a number of reasons to doubt that educating people about the bystander effect will have any significant effect. First, there is a great deal of resistance among ordinary people (including educated people) to believing findings like these. Second, and worse, even among those who accept the findings there is little acceptance that these kinds of influences affect them (the great majority of physicians, for instance, accept that gifts from pharmaceutical companies influence their colleagues, but most deny that such gifts influence them).
Even among those who accept that a situational influence is genuine and that they are as likely to succumb to it as anyone else, utilizing this knowledge is difficult. I must be able to identify the influence and recognize it as a potential biasing factor; I must identify the direction of the influence and its rough force (overcorrection may be as bad as undercorrection). In the highly charged circumstances in which moral decisions are made, for instance, these conditions are difficult to satisfy. When we must act quickly, we are unlikely to have the attentional resources to identify biasing factors (one influence of such factors may be to lead us to misperceive the situation as requiring an urgent response, thereby preventing us from utilizing corrective knowledge).
Further, even when we have the time and attentional resources to deploy our knowledge, under a range of conditions that knowledge does not help. Mele and Shepherd cite examples where an effort to counteract bias is successful—for instance, on the implicit association test—but it is easy enough to find examples where it is not. They mention, for instance, that Payne found that implementation intentions are helpful in counteracting the effects of priming on weapon misidentification, but they don’t mention that, in a different experiment, those who simply tried to avoid using race as a cue in the task absolutely failed to avoid bias. Separately, Payne has shown how an effort at avoiding bias in the affect misattribution task failed equally miserably. In fact, the attempt to avoid racial bias may make things worse, by priming us to use racial stereotypes. Moreover, confidence in objectivity fails to correlate with the avoidance of bias across many tasks, making it especially difficult to know when we need to control prejudice.
Overall, then, the news is worse than Mele and Shepherd suggest. Some things—like the formation of implementation intentions—help, but simply knowing that, and how, mechanisms of prejudice work does not seem to help, and the effort to utilize that knowledge may make things worse. Implementation intentions are unlikely to be a panacea, because the range of circumstances we can foresee, and form such intentions to help us navigate, is limited.
I want to end, however, with some good news from a different perspective. Though it is difficult to use the knowledge gained from the sciences of the mind to help shape our behavior in the heat of the moment (too difficult, I think, to expect much improvement from that direction), we can design our environments to take advantage of the influences identified, nudging ourselves toward better behavior. There is growing evidence that altering environmental cues may reshape implicit prejudice. Increasing exposure to female academics at a college, for example, shifts students’ performance on the implicit association test toward more egalitarian attitudes. We can reshape the social world in which agents live, so that their behavior is improved whether or not they accept that they are susceptible to these influences and whether or not they have time to utilize the knowledge they have. This, too, would be an instance of conscious control over our behavior.