Addendum to Pamela Lindsay’s Bibliography on Implicit Bias

The following is an addendum to my Guest Post, “Bibliography: Implicit Bias.”

It’s designed to serve as a segue to my next bibliography, on EDI and research excellence, to be released tomorrow, March 7, 2022.

Here, find three considerations concerning the use of implicit (unconscious) bias testing and training tactics by various EDI Programs, followed by a brief discussion. Please bear with my detailed explication of how I arrived at Ethical Considerations on the Harvard Implicit Associations Test website. I include it to make a point in the discussion that follows these examples.

  1. On the Canadian Institutes of Health Research EDI resource page, under the subheading “Reducing Bias in Peer Review,” I clicked on resource 6, the Harvard Implicit Associations Test.
  • “Equity, diversity and inclusion resources,” Equity, Diversity and Inclusion in the Research System, Canadian Institutes of Health Research, Government of Canada, date modified 2021-07-05 (accessed 5 March 2022).

That link led me to a page with the following caveat:

“This demonstration site is out of date. It uses old JavaScript code, which is unreliable in modern browsers, and may negatively impact your experience. In addition, the language we use to discuss the IAT relies on older conceptions, such as making claims about “unconsciousness”. We are currently updating the code, stimuli, and language used throughout the site. Until then, we suggest you take more caution than usual regarding claims made on this website. Or, if you speak English, you could visit our updated website here.”

I then followed that link to the updated Harvard Implicit Associations Test and arrived at the Project Implicit Preliminary Information page.

On this page, in turn, is the following disclaimer: “Important disclaimer: In reporting to you results of any IAT test that you take, we will mention possible interpretations that have a basis in research done (at the University of Washington, University of Virginia, Harvard University, and Yale University) with these tests. However, these Universities, as well as the individual researchers who have contributed to this site, make no claim for the validity of these suggested interpretations. If you are unprepared to encounter interpretations that you might find objectionable, please do not proceed further. You may prefer to examine general information about the IAT before deciding whether or not to proceed.”

*Here, see my post at Keeping an Eye on EDI, “Ethical Considerations of the Harvard Association Test: Are University EDI Offices Complying With These Guidelines?”

2. In the following article, Frederick Herbert considers the UK government’s decision to discontinue unconscious bias testing and training in its various departments. While Herbert agrees with findings that led to the government’s decision, he nevertheless argues these programmes should not be scrapped.

Herbert endorses the argument that, notwithstanding the flawed and questionable evidence for the veracity of unconscious bias testing and the efficacy of the training, these programmes have value in raising awareness. He argues that awareness is a necessary first step for making a change in behaviour. “How many civil servants in the country,” he asks rhetorically, “have never realised their implicit negative associations are likely to be contributing to systemic racism or sexism?”

Herbert’s other arguments for not scrapping these programmes include i) “the potential for indirect benefits of UBT [unconscious bias testing] and indirect costs from the decision to reverse it,” and ii) “UBT may be thought of as a key signalling device that shows managerial commitment to diversity.” Rather than scrapping the programmes, Herbert suggests the government could have used them as a basis to discover the kinds of interventions that do work, “which would have also been a fantastic signal that the government cares about leading the way in uncovering how to reduce bias and prejudice in the workplace.”

Frederick Herbert, “Is Unconscious Bias Training Still Worthwhile?,” LSE Business Review, March 24, 2021 (accessed 6 March 2022).

3. Tinna C. Nielsen and Lisa Kepinski argue, contra Herbert, that ‘awareness’ is not a reason for continuing unconscious bias testing and training. On their view, awareness is at best ineffective and at worst liable to create a backlash.

Nielsen and Kepinski note that “Over-reliance on unconscious bias awareness training as ‘the solution’ has created a multi-billion dollar-a-year industry that is profiting from many thinking this approach will ‘fix the problem’. Yet often, the outcomes of these bias trainings are not effective and the problem persists. It may even get bigger!”

These authors detail a number of reasons awareness backfires, such as mental overload: “Having to be consciously aware of the unconscious comes at the risk of creating mental overload, which has been proven to strengthen the impact of bias. Furthermore, when knowing (system 2) but not having the ability to act on that knowledge, it can paralyse us (system 1) and then we rely even more on default and biased behaviour. So, you see this creates a vicious circle.”

They suggest modes other than raising awareness for targeting and (re)-training the unconscious mind, which, on their view, “steers people to make better choices. This ‘pushes’ (nudges) the unconscious mind in a non-intrusive way to change behaviour without taking away the freedom to choose something else.”

One example of a nudge these authors give is the practice of anonymising candidates for a symphony orchestra by having them audition behind a screen, and removing any subtle hints to their identities, such as having the women remove high heels that would tellingly clack across the floor as they walk to their audition positions. (This nudge is akin to the call to leave out language in academic letters of reference that would identify an applicant as female, such as “nice.” But as an applicant, I might want to identify as female if, in order to game the system, I believe that doing so advantages me in a quota system.)

Tinna C. Nielsen and Lisa Kepinski, “Bias Awareness Is Not the Solution! It Might Backlash!,” The Inclusion Nudges Blog, Inclusion Nudges.

Discussion:

  1. The architects of the Harvard Implicit Associations Test (HIAT) might be said to be exercising due diligence about the limits of their tests by including a disclaimer on their Preliminary Information Page, which includes a link to Ethical Considerations.

i) The ethical considerations stated on the HIAT website are not made explicit — are not even mentioned — on university EDI websites that link to the HIAT. Nor are these ethical considerations made explicit alongside the links to the HIAT that are provided and encouraged by Canada’s federal research funding agencies. Rather, readers are encouraged to click on the link and simply take the tests to uncover their unconscious biases, any ethical considerations be damned.

ii) The likelihood that people directed to the HIAT site will take the time to go through the series of clicks to reach the ethical considerations is virtually nil, because a) cognitive effort is required to read all of the accompanying caveats, and b) there is no motivation to expend this effort since readers trust the institutions that recommend the HIAT — including “Harvard” itself.

The last paragraph of the Ethical Considerations page states: “The IAT has potential for use beyond the scientific laboratory. However, in the absence of relevant scientific expertise, there is potential for misuse. We do not advise its use outside of the safeguards of a research institution [bolding mine].” Some might believe that since EDI programs, which are advocacy programs, are embedded in research institutions, the use of the HIAT is safeguarded from misuse. This belief is unwarranted, and the bolded sentence, taken alone, is misleading. The Canadian federal research funding agencies are likewise not conducting research when they endorse the use of the HIAT, and so they too provide no safeguard against the misuse of these tests.

iii) Some might make the Buyer Beware argument: that it is up to test-takers to exercise due diligence by conducting their own research into the ethical considerations concerning the HIAT (and others like it). The Buyer Beware argument is at the least unbecoming when issued from or on behalf of research institutions. Worse, it is liable to damage trust in EDI programs and increase suspicion between those who subscribe to these programs and those who don’t.

iv) One ethical argument might require that the architects of the HIAT place their ethical considerations on the introductory page, where test-takers are more likely to read them, possibly even requiring test-takers to indicate that they have read these caveats in full before proceeding to the tests. The counter-argument is that doing so would discourage people from taking the tests, and researchers might lose valuable research information.

2. Herbert’s argument that preserving unconscious bias programmes, notwithstanding their shortfalls, would allow organisations to display their commitment to diversity is precisely the argument that those committed to academic freedom tend to fear.

i) Commitment signalling, and all too often virtue signalling, endorsed under the auspices of universities and/or research funding agencies discourages research that is critical of, or dissents from, the promoted positions.

ii) Members of the so-called Woke movement, as well as advocacy groups within and without universities, exacerbate both commitment and virtue signalling, increasing the likelihood that critics of and dissenters from the signalled positions are at the least discouraged or silenced, and at worst ostracized. Some, as Frances Widdowson discovered, are fired.

iii) If commitment- and virtue-signalling rest on unsound unconscious bias claims, as they do when made in the name of ‘raising awareness about biases’, then what is as likely, or perhaps more likely, to be raised are suspicions that EDI initiatives, and criticisms of them, are associated with partisan commitments. Whether partisan commitments in fact determine one’s endorsement or criticism of EDI initiatives might be moot in light of the belief that they do.

Notice my use of “might.” In my research I’m finding that people too often take the word “might” to mean “is” or “are”; i.e., a modal claim is taken to be a material claim. The belief that something is the case is often substituted for the claim that something might be the case, even among trained thinkers. I provide some examples in an upcoming bibliography tentatively entitled “Stupidities”—of which there are plenty to be found in EDI discourse.

3. Nielsen and Kepinski’s “nudges” still rest upon certain assumptions: that backlash to implicit bias testing results from people (‘old white males’) being pushed on their biases and, in other literature, from a threat to their privileges.

Missing from explications about backlash is the possibility that some academics are responding as scholars/researchers simpliciter worried about sound scholarship/research practices rather than feeling threatened or made uncomfortable in some way about racial and other such negative stereotypes. In other words, the bias is against stupidity and not against race, gender, or other designated disadvantaged group.

Closing remarks:

In relation to EDI, evidential and moral claims are too often conflated. Hence, comments such as the following:

  • “I saw a similar reluctance to criticize implicit bias among friends and colleagues. Taking the test, and buying into the concept of implicit bias, feels both open-minded and progressive.”

Olivia Goldhill, “The world is relying on a flawed psychological test to fight racism,” Quartz, December 3, 2017, updated July 24, 2020 (accessed 19 February 2022).

  • See: Comment 2, Anon PhD: “I second much of Daniel Kaufman’s comment [1], with one exception. The reply is often not just “move along” [nothing to see here, re: comment 1] but something much more pernicious, e.g. an implication that your rejection of the IAT is indicative of deeper moral flaws (I can cite relevant examples, though anyone who has been keeping up with this “debate” is surely familiar.)” [bolding mine, Pamela Lindsay]

Sean Hermanson, “Rethinking implicit bias: I want my money back,” Leiter Reports: A Philosophy Blog, April 6, 2018 (accessed 19 February 2022).


Categories: Guest Posts



  1. I wonder about the results of this test across various populations. For example, if they compared the test results of 100 black, 100 white, and 100 indigenous university students, would the results show roughly equal or very different percentages of unconscious biases?

    And biases about what? Members of races other than their own? And what if some had biases about subjects other than race, such as against socialism or capitalism or Russians or Catholics or Muslims?

    It seems somewhat unscientific to believe that a written test can prove a bias against someone or something that the person being diagnosed with the bias is completely unconscious and unaware of. Perhaps the people who wrote the test had unconscious biases that they built into the test so that it creates many or mostly false positives.


  2. Now, now, Andrew. Never apply a test if you think it might reveal something you’d rather the opposition not know.

    This thorough review and exposé by Pamela is discomfiting enough as it is.



