Guest Post by Pamela Lindsay: The ubiquitous, questionable, and possibly unethical use of implicit bias tests and training at Canadian universities under the auspices of EDI.

I’ve spent some time over the past two years of Covid-lockdown isolation researching EDI programmes. What follows is something of an annotated bibliography rather than a summary of my findings. (Note that I’ll be using implicit and unconscious bias interchangeably.)

Search any Canadian university’s website and you’ll discover that implicit bias tests and training are broadly regarded by the architects of EDI programmes as powerful tools for mitigating negative racial and gender stereotypes. However, not only have the architects overstated their claims about implicit bias and its possible remedies, but they may well be dead wrong. I’m not purporting here to make a definitive case against implicit bias testing and training; my claim is only that such worries are warranted.

One worry is that the designers of EDI programmes haven’t searched for possible disconfirming counter-evidence to their research, thereby eschewing scholarly rigour:

  • “Commitment to equity, diversity and inclusion …Through these means the agencies will work with those involved in the research system to develop the inclusive culture needed for research excellence and to achieve outcomes that are rigorous, relevant and accessible to diverse populations [bolding mine].”

Hence their claims that EDI leads to “research excellence” might be a performative contradiction.

  • Recruiting, Best Practices: “Have those involved in the hiring process complete EDI training, including instruction on how to recognize and combat unconscious, implicit, overt, prejudicial and any other kinds of bias (e.g., see the ‘dirty dozen’ explained in chapter 11 of The Equity Myth).”
  • Mentoring, Best Practices, “Ensure mentors receive unconscious bias training and/or other EDI training as necessary (e.g., microaggressions, antiracism training).”

About the overarching EDI initiative in Canadian universities:

EDI begins at the federal level under the granting agency for Canadian science and innovation research, the Natural Sciences and Engineering Research Council of Canada (NSERC),

  • “We work with universities, colleges, businesses and not-for-profits to remove barriers, develop opportunities and attract new expertise to make Canada’s research community thrive.”

NSERC, along with two other federal granting agencies, the Social Sciences and Humanities Research Council (SSHRC) and the Canadian Institutes of Health Research (CIHR), together form the Tri-agencies.

“The Social Sciences and Humanities Research Council (SSHRC) is the federal research funding agency that promotes and supports research and training in the humanities and social sciences.”

“The Canadian Institutes of Health Research (CIHR) is Canada’s federal funding agency for health research. Composed of 13 Institutes, we collaborate with partners and researchers to support the discoveries and innovations that improve our health and strengthen our health care system.”

About: “Collaborations between federal research funding organizations,” The tri-agencies, Canadian Institutes of Health Research, Government of Canada,

  • Included with the Tri-agencies is the “Canada Foundation for Innovation (CFI) …, a federally funded organization that enables this research, training and innovation through investments in state-of-the-art infrastructure.”

Grants and Awards: An overview of the grants and awards available under the EDI initiative, including a non-renewable Equity, Diversity, and Inclusion Institutional Capacity Building grant worth up to $200,000 per year, for up to two years. (*corrected 25 February 2022), (accessed 19 February 2022)

A number of Canadian Colleges and Universities have received the full amount of the EDI Institutional Capacity Building Grant, including:

The Dimensions project is a chartered EDI initiative, one of a number of similar international programs, and is supported by the Tri-agencies,

  • “The [Dimensions] program is the result of cross-country consultations to make it uniquely adapted to the Canadian realities.”

NSERC, About, “Equity, Diversity, and Inclusion,” Government of Canada, accessed 19 February 2022,

  • Dimensions Charter: 

Government of Canada 2019, Dimensions: equity, diversity and inclusion Canada, accessed 19 February 2022,

The Canada Research Chairs (CRC) program is also a partner in the EDI initiative. 

The CRC site is too extensive to summarize here, but I draw your attention to the Bias in Peer Review Training Module which is recommended by almost every Canadian university:

  • (If anyone has 10-15 minutes to complete the training module, please share your impressions in the comment section.)  

Canadian Universities, Implicit bias & EDI, examples:

Note that most of these examples link to the Harvard Implicit Association Test (IAT), also known as Project Implicit, as well as to the CRC training module.

1. University of British Columbia (UBC)

“Unconscious bias in the workplace,” Equity and Inclusion Office, The University of British Columbia, September 24, 2021 (accessed 19 February 2022),

2. Simon Fraser University (SFU)

“Implicit bias, hiring, and retention: Equity, diversity, and inclusion (EDI) resource guide,” Simon Fraser University Library (accessed 19 February 2022)

  • “These pages are the result of extensive collaboration between the SFU Library, the SFU EDI Administrative group, and other SFU stakeholders, and this work is ongoing.”

3. University of Calgary (U of C)

“Unconscious/Implicit Bias,” EDI Workshops, University of Calgary (accessed 19 February 2022)

4. University of Alberta (U of A)

“Equity, Diversity and Inclusion Public Accountability and Transparency Requirements,” Research Services Office, University of Alberta (accessed 19 February 2022)

  • See subheading: Equity, Diversity, and Inclusion Training

5. University of Saskatchewan (USask)

“Equity, Diversity and Inclusion,” For Staff and Faculty, Wellness, University of Saskatchewan (accessed 19 February 2022),

  • See “Reflective questions” below the TEDx video.

6. University of Manitoba (U of M, UM)

“Resources on Equity, Diversity and Inclusion in Health: Bias,” Libraries, University of Manitoba (accessed 19 February 2022)

7. University of Waterloo (Waterloo, UW)

“EDII resources and guides,” Research > Research Equity and Inclusion, University of Waterloo,

  • See subheading, “Understanding Unconscious Bias in Recruitment, Selection and Review.” 

8. Ryerson University (Ryerson, RyeU, RU)

“Types of Unconscious Biases and How to Counteract,” Equity, Diversity and Inclusion, Ryerson University, PDF, (accessed 19 February 2022),

9. University of Toronto (U of T) 

  • “The Division of People Strategy” is not a typo, though perhaps it’s a Freudian slip. The strategy to divide people is ‘Equity and Culture’? Rather unfortunate name for a department.

10. Dalhousie University (Dal) 

“Equity, Diversity, and Inclusion (EDI) — Research Resources,” Office of Research Services (ORS), Dalhousie University (accessed 19 February 2022)

Why some are reluctant to criticize EDI programmes:

Some fear being labelled a racist, bigot, Alt-Right, etc. for criticizing EDI. In many cases, their fears are justified.

  • The following excerpt from a Brandon University news article states, “We cannot be fooled by vague language that hides divisiveness and hatred.” Apparently they can be: “divisiveness and hatred” are themselves examples of vague language.

“No place is immune to objectionable and distasteful sentiments. We cannot be fooled by vague language that hides divisiveness and hatred. We are proud of the committed students, faculty and staff who stand together to support universal human rights.

We recognize that the hurtful or hateful actions of a small number of individuals can have an outsize effect on marginalized groups and we reiterate that white supremacy, racism, xenophobia, misogyny, hate speech and discrimination of all kinds have no place at Brandon University. We condemn it and it will not be tolerated.

We know that disturbing expressions can have emotional impacts that require care and attention. We remind our entire BU community that we have services here to support you.”

Also from Brandon University (BU), The BU Statement on Inclusion,

Excerpt: “Brandon University affirms an unwavering and unambiguous commitment to diversity, inclusion and universal human rights. We are stronger and richer together, and we celebrate the unique contributions brought to our community through everyone’s individual circumstances, perspectives and life experiences.

Around the globe, and occasionally here at home, we must sometimes face xenophobia and racism. This often masquerades as nationalism, pride, or concerns about cultural purity. Bigots may deliberately use vague language or misappropriate the struggles of marginalized groups to advance their offensive cause. Their language is couched in pretend innocence that is designed to convince the naïve and to provoke divisive reactions. We are not fooled. We condemn hate speech of all kinds.

The paradox of tolerance reminds us that no accommodations can be made for intolerance. Hate speech is not free speech. Prejudice is not pride. Bigotry is not up for debate.

These distasteful opinions are to be found everywhere, and the Brandon University campus is no exception…” 

Lee Jussim outlines six reasons why implicit bias training remains popular despite being ineffective. Roughly, they are:

1) Overstated claims at the outset of implicit bias research

2) Implicit bias provides a simple explanation for continuing inequality (especially when appealing to ‘hidden forces’)

3) Virtue Signalling

4) It gives activists a veneer of scientific credibility

5) PR and Insurance

6) Consultants make big bucks 

Hunting Implicit Biases: 

Institutions such as Queen’s University are collecting anonymous, self-reported information as evidence of recurring patterns of harassment, discrimination, and bias/hate incidents. Because the reports are anonymous, they can’t be disconfirmed.

I’ve included two references about the Bias Incident Response Teams (BIRTs, sometimes BARTs) that have proliferated in the US.

Pamela Lindsay. “Uncovering an Implicit Bias,” Saturday Morning Pam-toons.

Communications Staff, “Queen’s launches pilot of anonymous harassment and discrimination submission platform,” Queen’s Gazette, Queen’s University, October 12, 2021 (accessed February 19, 2022),

“2017 Report Bias Response Teams,” FIRE,

  • “Executive Summary: Over the past several years, the Foundation for Individual Rights in Education (FIRE) has received an increasing number of reports that colleges and universities are inviting students to anonymously report offensive, yet constitutionally protected, speech to administrators and law enforcement through so-called “Bias Response Teams.” These teams monitor and investigate student and faculty speech, directing the attention of law enforcement and student conduct administrators towards the expression of students and faculty members…”

Questions and Controversy Around the Accuracy of Tests, the Efficacy of Training, and the Autonomous Effects of Each

Recall that I’m not purporting here to make a definitive case against implicit bias testing and training. The following is a far from exhaustive bibliography.

I’ve made a few of my own comments, but I’ve largely drawn on quotes that will give you the gist of the article. The popular articles I’ve selected are annotated (many hyperlinked) and useful in directing you to technical papers.

Schimmack, Ulrich. “Invalid Claims About the Validity of Implicit Association Tests by Prisoners of the Implicit Social-Cognition Paradigm.” Perspectives on Psychological Science 16, no. 2 (2021): 435–442. doi:10.1177/1745691621991860 (accessed 19 February 2022),

  • Abstract: In a prior publication, I used structural equation modeling (sic) of multimethod data to examine the construct validity of Implicit Association Tests. The results showed no evidence that IATs measure implicit constructs (e.g., implicit self-esteem, implicit racial bias). This critique of IATs elicited several responses by implicit social-cognition researchers, who tried to defend the validity and usefulness of IATs. I carefully examine these arguments and show that they lack validity. IAT proponents consistently ignore or misrepresent facts that challenge the validity of IATs as measures of individual differences in implicit cognitions. One response suggests that IATs can be useful even if they merely measure the same constructs as self-report measures, but I find no support for the claim that IATs have practically significant incremental predictive validity. In conclusions, IATs are widely used without psychometric evidence of construct or predictive validity.

Patricia Lonergan. “A common test to evaluate people’s implicit bias has been ‘oversold,’ U of T researcher says,” U of T News, November 26, 2019,

  • “Racial bias is a reality, Schimmack says, but the problem is too many discussions of the issue are based on research findings that rely on flawed IAT measures. For example, some argue implicit bias training is not useful because it doesn’t change IAT scores. But if IAT scores are not valid in the first place, they are not likely to be effective evaluation tools.”

Tiffany L. Green and Nao Hagiwara. “The Problem with Implicit Bias Training: It’s well motivated, but there’s little evidence that it leads to meaningful changes,” Scientific American, August 28, 2020 (accessed 19 February 2022),

  • “But while implicit bias trainings are multiplying, few rigorous evaluations of these programs exist. There are exceptions; some implicit bias interventions have been conducted empirically among health care professionals and college students. These interventions have been proven to lower scores on the Implicit Association Test (IAT), the most commonly used implicit measure of prejudice and stereotyping. But to date, none of these interventions has been shown to result in permanent, long-term reductions of implicit bias scores or, more importantly, sustained and meaningful changes in behavior (i.e., narrowing of racial/ethnic clinical treatment disparities).”
  • “Even worse, there is consistent evidence that bias training done the “wrong way” (think lukewarm diversity training) can actually have the opposite impact, inducing anger and frustration among white employees. What this all means is that, despite the widespread calls for implicit bias training, it will likely be ineffective at best; at worst, it’s a poor use of limited resources that could cause more damage and exacerbate the very issues it is trying to solve.” 

Dobbin, Frank, and Alexandra Kalev. “Why doesn’t diversity training work? The challenge for industry and academia.” Anthropology Now 10.2 (2018): 48-55, (accessed 19 February 2022),

Machery, Edouard. “Anomalies in implicit attitudes research.” Wiley Interdisciplinary Reviews: Cognitive Science 13, no. 1 (2022): e1569.

  • Abstract: In this review, I provide a pessimistic assessment of the indirect measurement of attitudes by highlighting the persisting anomalies in the science of implicit attitudes, focusing on their validity, reliability, predictive power, and causal efficiency, and I draw some conclusions concerning the validity of the implicit bias construct.

Brownstein, Michael, “Implicit Bias,” The Stanford Encyclopedia of Philosophy (Fall 2019 Edition), Edward N. Zalta (ed.) (accessed 19 February 2022)

  • Stanford entries, including this one, have extensive bibliographies and notes.

  • “One question crucial to the metaphysics of implicit bias is whether the relevant psychological constructs should be thought of as stable, trait-like features of a person’s identity or as momentary, state-like features of their current mindset or situation (§2.4). While current data suggest that implicit biases are more state-like than trait-like, methodological improvements may generate more stable, dispositional results on implicit measures.”

  • “Future research on epistemology and implicit bias may tackle a number of questions, for example: does the testimony of social and personality psychologists about statistical regularities justify believing that you are biased? “

  • “One noteworthy intersection of theoretical ethics with forthcoming empirical research will focus on the interpersonal effects of blaming and judgments about blameworthiness for implicit bias.”

Sean Hermanson. “Rethinking implicit bias: I want my money back,” Leiter Reports: A Philosophy Blog, April 6, 2018 (accessed 19 February 2022)

  • It’s worth reading the comment section as well as the article, especially ‘Comment 15’ by Lee Jussim.
  • Comment 2, Anon PhD: “I second much of Daniel Kaufman’s comment [1], with one exception. The reply is often not just “move along” [nothing to see here, re: comment 1] but something much more pernicious, e.g. an implication that your rejection of the IAT is indicative of deeper moral flaws (I can cite relevant examples, though anyone who has been keeping up with this “debate” is surely familiar).

More positively, it would be nice to see a discussion among /professional/ philosophers regarding retraction norms for philosophical work. If, for example, a published piece of philosophy relies heavily upon discredited and/or retracted empirical work, presumably the philosophical work should also be discredited and/or retracted. Presumably one justification for retracting discredited work is that this norm incentivizes scholarly care and precision, two qualities that are conspicuously lacking in much of the philosophical work that incorporates the IAT.”

German Lopez. “For years, this popular test measured anyone’s racial bias. But it might not work after all,” Vox, March 7, 2017, (accessed 19 February 2022)

  • Lopez provides a thoroughgoing overview of the literature, researchers, and points of contention up to March 2017.

  • Lopez raises a critical point. Most university EDI websites encourage visitors to take the Harvard IAT to determine whether they have an implicit bias. But taking the test once by no means establishes that you have (or lack) the bias indicated by the results.

Olivia Goldhill. “The world is relying on a flawed psychological test to fight racism,” Quartz, December 3, 2017, updated July 24, 2020, (accessed 19 February 2022)

(This is a lengthy article that will take 10-15 minutes of your time.)

  • “I saw a similar reluctance to criticize implicit bias among friends and colleagues. Taking the test, and buying into the concept of implicit bias, feels both open-minded and progressive.”
  • “There’s little doubt we all have some form of unconscious prejudice. Nearly all our thoughts and actions are influenced, at least in part, by unconscious impulses. There’s no reason prejudice should be any different.”
  • “But we don’t yet know how to accurately measure unconscious prejudice. We certainly don’t know how to reduce implicit bias, and we don’t know how to influence unconscious views to decrease racism or sexism. There are now thousands of workplace talks and police trainings and jury guidelines that focus on implicit bias, but we still we have no strong scientific proof that these programs work.”
  • “A lot of folks see the IAT as a golden path to the unconscious, a tool that perfectly captures what’s going on behind the scenes and it’s not,” says Lai. “It’s a lot messier than that. The truth, as often, is a lot more complicated.”

Osman, M. (2021). UK Public Understanding of Unconscious Bias and Unconscious Bias Training. Psychology, 12, 1058–1069. doi: 10.4236/psych.2021.127063 (accessed 19 February 2022)

  • “First, there is very limited [IAT] test-retest reliability (Gawronski et al., 2017). What this means is that the relationship is very low between an individual’s score taking an IAT at one time, and then repeating the test at a later time. What this also implies is that, either the biases aren’t stable over time, or the test doesn’t reliably measure what it purports to measure, or potentially both depending on how sceptical one is about the status of unconscious bias as a phenomenon.”

  • “To the extent that the IAT might even detect unconscious biases, which from the earlier discussion is already under contention, the IAT doesn’t predictable [sic] discriminatory behaviours. This issue is particularly problematic for EDI training programmes for the reason that if the IAT doesn’t reliably detect unconscious biases, and doesn’t predict objective behaviours that might be assumed to be causally associated with harbouring unconscious biases towards a particular group, then it also cannot be used as an objective test of the efficacy of unconscious bias training.”
  • “By focusing on basic cognitive biases rather than specific social biases, can serve several useful functions, including decreasing inter-group tensions amongst those on DEI training methods. Here the evidence suggests that, in DEI training initiatives such as unconscious bias training, identifying a group that holds unconscious biases towards another group, can lead to backfiring effects, such as increased tensions between different groups.”

  • “As this pilot study hopefully shows, people can vary with respect to several core interpretations of core concepts, such as bias. If DEI methods, such as unconscious bias training are to be used, then it is important to recognise that recipients of the training ought not to be treated as a homogenous group with similar attitudes and opinions towards the training or their views on biases and where and how they appear. Finally, given that the biggest disconnect is between the evidence base regarding unconscious bias training and the public’s view of its efficacy, clearly this needs to be addressed.”

Harvard Implicit Association Test (IAT), aka Project Implicit (accessed 19 February 2022)

  • If you take the test, please leave your impressions in the comment section.
  • Note that the IAT was never intended for one-off use by an individual to determine one’s implicit bias. But universities encourage people to take the test in just this manner.

Adam Branson. “UK government follows US with ban on unconscious bias training,” Global Government Forum, 16 December 2020 (accessed 19 February 2022),

  • This article includes a link to the Unconscious Bias Training Report by The Behavioural Insights Team.
  • The report mentions that some think diversity training might serve to “raise awareness” about biases. My worry is that “raise awareness” requires an indexical: raise awareness in whom, and about what? And having done so, what are the autonomous effects, for better and worse? (E.g., making stereotypes more salient, thereby amplifying them.) This is the kind of worry that activists tend to miss.

Lee Jussim. “Is Implicit Bias Training Useless?”, Psychology Today, June 13, 2021, (accessed 19 February 2022),

  • “Claiming the mantle of “science” for false claims and misinformation, no matter how earnest or well-intended is bad. Misinformation is one harm; opportunity costs are another. The time and money spent on implicit bias training could surely be better spent doing more constructive things…A university could do more to reduce inequality simply by taking that fee and creating a fellowship for a student from a low-income background or marginalized group. Then, at least, they would know for a positive fact that one deserving person was actually helped.”
  • Jussim is a little loose here: who vets a “deserving person,” and by which criteria? But I take his point. The money can be better used. How? A university student who is a member of a marginalised group is privileged (in virtue of being a university student) with respect to a low-income person with no prospect of an education, whether from a marginalised group or not. Maybe the money ought to be allocated to transportation, computers, meals, and so on, the things that get you there and keep you there. Suggestions?

Jesse Singal. “Psychology’s favourite tool for measuring racism isn’t up to the job,” The Cut, New York Magazine (accessed 20 February 2022)

  • This is a long article, so reserve 10–15 minutes for the undertaking. It’s worth the read. And it’s worth noting that at the time EDI in Canada was getting off the ground, worries about the limits of implicit bias measures and training were already circulating.

Jesse Singal. “The Creators of the Implicit Association Test Should Get Their Story Straight,” Intelligencer, New York Magazine, December 5, 2017 (accessed 25 February 2022),

  • The problem, as I showed in a lengthy rundown of the many, many problems with the test published this past January, is that there’s very little evidence to support the claim that the IAT meaningfully predicts anything. In fact, the test is riddled with statistical problems — problems severe enough that it’s fair to ask whether it is effectively “misdiagnosing” the millions of people who have taken it, the vast majority of whom are likely unaware of its very serious shortcomings. There’s now solid research published in a top journal strongly suggesting the test cannot even meaningfully predict individual behavior. And if the test can’t predict individual behavior, it’s unclear exactly what it does do or why it should be the center of so many conversations and programs geared at fighting racism.

Jesse Singal. “Psychology’s favourite tool for measuring implicit bias is still mired in controversy,” Research Digest, The British Psychological Society, December 5, 2018 (accessed 19 February 2022),

Chequer, S., & Quinn, M. G. (2021, May 1). More Error than Attitude in Implicit Association Tests (IATs), a CFA-MTMM analysis of measurement error. (accessed 20 February 2022),

(You will find other versions of this paper by searching Google Scholar.)

  • Excerpt from the conclusion: “Nosek and Greenwald (2009, p. 375) note that “the most important considerations in appraising validity of psychological measures are those that speak to the measure’s usefulness in research and application”. Whilst there have been many concerns regarding the IAT’s veracity and usefulness (see Blanton et al., 2009; Krause et al., 2010; Mitchell & Tetlock, 2017; Oswald et al., 2015; Rae & Olson, 2018), there has been no clear estimate for the component of error variance in IAT scores. The present study has provided clarity on this issue, demonstrating that the IAT effect scores were comprised of over 80% combined random and systematic error variance, allowing little opportunity for trait ‘implicit attitudes’ to be revealed through the noise, and requiring significant statistical modifications and processing to obtain even population-level ‘insights into our implicit biases’. To put it simply, the IAT was shown to be inadequately honed to provide insights into our implicit biases and its ‘usefulness in research and application’ is questionable, if not at times, potentially misleading. The sheer magnitude of error variance has serious implications for the use and interpretation of IAT effect scores.” 

Forscher, P. S., Lai, C. K., Axt, J. R., Ebersole, C. R., Herman, M., Devine, P. G., & Nosek, B. A. (2019). A meta-analysis of procedures to change implicit measures. Journal of Personality and Social Psychology, 117(3), 522–559.

  • Using a novel technique known as network meta-analysis, we synthesized evidence from 492 studies (87,418 participants) to investigate the effectiveness of procedures in changing implicit measures, which we define as response biases on implicit tasks. We also evaluated these procedures’ effects on explicit and behavioral measures. We found that implicit measures can be changed, but effects are often relatively weak (|ds| < .30). Most studies focused on producing short-term changes with brief, single-session manipulations. Procedures that associate sets of concepts, invoke goals or motivations, or tax mental resources changed implicit measures the most, whereas procedures that induced threat, affirmation, or specific moods/emotions changed implicit measures the least. Bias tests suggested that implicit effects could be inflated relative to their true population values. Procedures changed explicit measures less consistently and to a smaller degree than implicit measures and generally produced trivial changes in behavior. Finally, changes in implicit measures did not mediate changes in explicit measures or behavior. Our findings suggest that changes in implicit measures are possible, but those changes do not necessarily translate into changes in explicit measures or behavior. (APA PsycInfo Database Record (c) 2019 APA, all rights reserved)
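For a sense of scale, the |ds| < .30 in the Forscher et al. abstract refers to standardized mean differences (Cohen’s d). A minimal sketch, using made-up pre/post scores rather than anything from the study, shows how d is computed and what a “relatively weak” effect looks like:

```python
import math
import statistics

# Hypothetical pre/post scores on an IAT-style measure
# (invented numbers for illustration only, not from the meta-analysis).
pre  = [0.52, 0.61, 0.48, 0.70, 0.55, 0.63, 0.58, 0.66]
post = [0.51, 0.59, 0.49, 0.68, 0.54, 0.62, 0.57, 0.65]

# Cohen's d: mean difference divided by the pooled standard deviation.
mean_diff = statistics.mean(pre) - statistics.mean(post)
pooled_sd = math.sqrt((statistics.variance(pre) + statistics.variance(post)) / 2)
d = mean_diff / pooled_sd

# Here d comes out around 0.14: the scores did shift, but by only a
# small fraction of a standard deviation -- the kind of "relatively
# weak" (|d| < .30) change the meta-analysis reports for most procedures.
print(round(d, 2))
```

The abstract’s further point is that even changes of this size on the implicit measure did not translate into changes in explicit measures or behaviour.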
