Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias

Research Article · Philosophy & Technology

Abstract

This paper focuses on the potential of “equitech”—AI technology that improves equity. Recently, interventions have been developed to reduce the harm of implicit bias, the automatic form of stereotype or prejudice that contributes to injustice. However, these interventions—some of which are assisted by AI-related technology—have significant limitations, including unintended negative consequences and general inefficacy. To overcome these limitations, we propose a two-dimensional framework to assess current AI-assisted interventions and explore promising new ones. We begin by using the case of human resource recruitment as a focal point to show that existing approaches have exploited only a subset of the available solution space. We then demonstrate how our framework facilitates the discovery of new approaches. The first dimension of this framework helps us systematically consider the analytic information, intervention implementation, and modes of human-machine interaction made available by advancements in AI-related technology. The second dimension enables the identification and incorporation of insights from recent research on implicit bias intervention. We argue that a design strategy that combines complementary interventions can further enhance the effectiveness of interventions by targeting the various interacting cognitive systems that underlie implicit bias. We end with a discussion of how our cognitive interventions framework can have positive downstream effects for structural problems.

Notes

  1. In this paper, we endorse the widespread view that implicit bias is a mental construct (e.g., an association, attitude, or internal structure) that causes behaviors. However, this view is not unanimously held; for instance, De Houwer (2019) proposes to take implicit bias as a behavioral phenomenon—specifically, behavior that is automatically influenced by cues that function as an indicator of the social group to which one belongs.

  2. There is some disagreement concerning how best to draw the distinction between implicit and explicit attitudes in philosophy and psychology (Brownstein 2018). In the case of implicit and explicit bias, one common way of operationalizing the distinction in scientific practice is to associate them with implicit and explicit measures, respectively. In explicit measures, subjects are asked to report their attitudes directly, while in implicit measures, their attitudes are inferred from other behaviors (Brownstein 2019). This disagreement will not be our focus, as we believe it does not affect the arguments of this paper.

  3. See Devine et al. (2012) for a more optimistic result showing that in-person, long-term debiasing can have effects that last for extended periods. However, Forscher et al. (2017) failed to fully replicate the study.

  4. Our paper focuses on AI-assisted intervention on implicit bias rather than on bias in general. Implicit and explicit biases are distinct scientific constructs, and their relation remains a topic of controversy. In addition, it is unclear whether findings about one can be generalized to the other. For example, a recent study (Forscher et al. 2019) suggests that effective interventions on implicit bias may not always change explicit bias. Finally, implicit and explicit biases bring about different reactive attitudes. For instance, it has been shown that discrimination is considered less blameworthy when it is caused by implicit rather than explicit bias (Daumeyer et al. 2019). As a result, we restrict our attention to interventions on implicit bias to keep the discussion manageable. However, the framework we develop in this paper can be adapted to explore interventions on explicit bias.

  5. For example, Hung and Yen (2020) extract five general principles for protecting basic human rights, including data integrity for reducing bias and inaccuracy, from an examination of over 115 principles recently proposed by academies, governments, and NGOs.

  6. KBSs are computer programs that generate information to help humans solve problems or generate solutions. AI has played an important role in enhancing the capacity of KBSs by powering knowledge acquisition, representation, and reasoning. In this paper, the term is used as in the discipline of analytics (Sharda et al. 2020); it differs from the knowledge-based systems of classical AI, which represent knowledge and perform inference explicitly.

  7. Microaggressions are “brief and commonplace daily verbal, behavioral, or environmental indignities, whether intentional or unintentional, that communicate hostile, derogatory, or negative racial slights and insults toward people of [underrepresented groups]” (Sue et al. 2007, p. 271). Examples include talking over interviewees from a particular demographic background and making insensitive comments that demean an interviewee’s heritage or identity.

  8. Currently, fair proxy communication and interviews in virtual space are not products; they are only proposed ideas (Seibt and Vestergaard 2018; Skewes et al. 2019).

  9. Determining which information should be masked to reduce implicit bias is difficult, and the determination needs to be made on a case-by-case basis. In the information technology (IT) industry, for example, the assessment of purely professional skills may be distinguished from other traits related to interpersonal skills (e.g., personality and coordination skills) that may not be essential to the job. So, when evaluating an applicant’s coding skills, demographic cues are irrelevant and should be masked. Conversely, in other industries (e.g., insurance sales), masking demographic information could mean losing information relevant to assessing the applicant’s communication style, which may be essential to job performance. A minimal sketch of such masking appears below.
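As a concrete illustration of case-by-case masking, the sketch below redacts a hypothetical set of demographic fields from an application record and blinds gendered pronouns in free text. The field names, patterns, and redaction policy are invented for the example; they do not describe any existing product, and a deployed system would need far more robust NLP than a pronoun pattern.

```python
import re

# Hypothetical mask set for a coding-skills assessment; as note 9 stresses,
# the right set must be decided case by case for each industry and role.
MASKED_FIELDS = frozenset({"name", "gender", "age", "nationality", "photo_url"})

def mask_application(application: dict, masked_fields=MASKED_FIELDS) -> dict:
    """Return a copy of the application with demographic fields redacted."""
    return {
        field: "[REDACTED]" if field in masked_fields else value
        for field, value in application.items()
    }

# Free-text demographic cues (here, gendered pronouns) blinded by a toy rule.
PRONOUN_PATTERN = re.compile(r"\b(he|she|him|her|his|hers)\b", re.IGNORECASE)

def mask_free_text(text: str) -> str:
    return PRONOUN_PATTERN.sub("[THEY]", text)

applicant = {"name": "A. Jones", "gender": "F", "code_sample": "def f(): ..."}
print(mask_application(applicant))   # demographic fields redacted, code kept
print(mask_free_text("She rewrote her team's build system."))
```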

  10. Nonetheless, if the robot appears too humanlike, it may trigger the uncanny valley effect: humanoid robots can elicit unintended cold, eerie feelings in human viewers (Mori 1970; MacDorman and Chattopadhyay 2016).

  11. Another example of how AI can help predict human biases is the use of ML to detect biases expressed in ordinary language. Caliskan et al. (2017) developed the Word-Embedding Association Test (WEAT), a method of measuring the associations between words. Their model, trained on a corpus of text from the internet, succeeded in replicating the known biases revealed by the Implicit Association Test (e.g., male and female names are associated with career and family words, respectively). As a result, WEAT could potentially be developed to identify an individual’s implicit bias by analyzing the text she produces. The sketch following this note illustrates the statistic.
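Caliskan et al. (2017) define the WEAT statistic as the differential association of two sets of target words with two sets of attribute words, computed from cosine similarities between word vectors. The sketch below restates that statistic in code; the toy three-dimensional vectors and word choices are invented purely for illustration, and a real test would use pretrained embeddings (e.g., GloVe) plus a permutation test for significance.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B, vec):
    """s(w, A, B): mean similarity of word w to attribute set A minus to set B."""
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def weat_statistic(X, Y, A, B, vec):
    """s(X, Y, A, B): differential association of target sets X and Y with
    attribute sets A and B, as defined by Caliskan et al. (2017)."""
    return (sum(association(x, A, B, vec) for x in X)
            - sum(association(y, A, B, vec) for y in Y))

# Toy 3-d embeddings invented for this example only.
vec = {
    "john":   np.array([0.9, 0.1, 0.0]),
    "amy":    np.array([0.1, 0.9, 0.0]),
    "career": np.array([0.8, 0.2, 0.1]),
    "family": np.array([0.2, 0.8, 0.1]),
}

# A positive value indicates "john" leans toward career words and "amy"
# toward family words, mirroring the IAT pattern the note describes.
print(weat_statistic(["john"], ["amy"], ["career"], ["family"], vec))
```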

  12. A possible solution to this attendant harm focuses on reducing the implicit bias of interviewers. Since AI can detect bias, it can also be programmed to alert interviewers so they can correct course, while masking the biased expressions from the interviewees. The detection record can then help senior managers select better interviewers. A sketch of this alert-and-mask flow appears below.
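The following sketch illustrates the alert-and-mask flow just described. It assumes a hypothetical detect_bias classifier, shown here as a toy keyword rule; a deployed system would substitute a trained model, and all names in the example are invented.

```python
from dataclasses import dataclass, field

@dataclass
class InterviewSession:
    interviewer: str
    detection_log: list = field(default_factory=list)  # later reviewed by managers

def detect_bias(utterance: str) -> bool:
    """Placeholder classifier; a real system would use a trained model."""
    return "they all" in utterance.lower()  # toy rule for stereotyping phrases

def notify_interviewer(name: str) -> None:
    print(f"Alert to {name}: possible biased expression; please rephrase.")

def relay(session: InterviewSession, utterance: str) -> str:
    """Relay an utterance, alerting the interviewer and masking biased content."""
    if detect_bias(utterance):
        session.detection_log.append(utterance)  # record for later review
        notify_interviewer(session.interviewer)  # prompt in-the-moment correction
        return "[message withheld pending revision]"  # masked from interviewee
    return utterance

session = InterviewSession(interviewer="R. Smith")
print(relay(session, "Tell me about your last project."))
print(relay(session, "People like you, they all struggle with deadlines."))
```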

  13. However, we should not treat the three types of cognition-based interventions as a final, unrevisable taxonomy: as our knowledge of the mechanisms of implicit bias grows, new types of cognition-based intervention may become available.

  14. However, we need to be careful of the unforeseen ethical consequences of interventions (such as those involving VR). For example, Madary and Metzinger (2016) point out that VR can induce illusions of embodiment and change one’s long-term psychological states; risky content and privacy are critical issues too. They therefore offer a list of ethical recommendations as a framework for future study. While new technology always carries unforeseeable risks, such research will help us minimize them.

  15. The interventions proposed in this paper are generally based on currently available AI and AI-related technologies; however, their advancement relies on further progress in some domains of AI research. In particular, predictive interventions face the challenge of accurately modeling and predicting an individual’s behavior; beyond that, prescriptive interventions, in order to suggest decisions to their users, require a causal model representing how an intervention leads to results for a particular user (Albrecht and Stone 2018; Sheridan 2016). Finally, we need empirical research to validate the effectiveness of specific implementations of these interventions.

  16. According to Miller (2018), responsibility is about the ability to fulfill a duty, and accountability is about the liability to answer for one’s performance of duties. Accountability presumes responsibility but is not identical with it. See Miller (2018) for further discussion of the distinction.

  17. Engelen and Nys (2020) propose the concept of perimeters of autonomy, according to which changes in an agent’s options within the perimeters can occur without precluding the agent’s autonomy, because a range of options remains to choose from. Nonetheless, how to draw the perimeters may itself be an issue.

  18. The complex interaction between cognitive and structural factors can have unpredictable consequences, as exemplified by changes in implicit and explicit antigay bias before and after same-sex marriage legalization. Ofosu et al. (2019) found that implicit and explicit antigay bias decreased before legalization. The change following legalization, however, depended on whether legalization was passed locally: bias decreased more sharply in states that legalized same-sex marriage locally, whereas it increased following federal legalization in states that had never passed local legislation. Note, though, that Tankard and Paluck (2017) found that federal legalization led individuals to change their perceptions of social norms regarding gay marriage, but not their personal attitudes.

References

  • Agan, A., & Starr, S. (2017). Ban the box, criminal records, and racial discrimination: A field experiment. The Quarterly Journal of Economics, 133, 191–235.

  • Albrecht, S. V., & Stone, P. (2018). Autonomous agents modelling other agents: A comprehensive survey and open problems. Artificial Intelligence, 258, 66–95. https://doi.org/10.1016/j.artint.2018.01.002.

  • Amnesty International United Kingdom. (2018). Trapped in the matrix: Secrecy, stigma, and bias in the Met’s gangs database. https://reurl.cc/8lmnzy

  • Barton, A. (2013). How tobacco health warnings can Foster autonomy. Public Health Ethics, 6(2), 207–219.

  • Behaghel, L., Crepon, B., & Le Barbanchon, T. (2015). Unintended effects of anonymous resumes. American Economic Journal: Applied Economics, 7, 1–27.

  • Biggs, M. (2013). Prophecy, self-fulfilling/self-defeating. In Encyclopedia of philosophy and the social sciences. SAGE Publications, Inc. https://doi.org/10.4135/9781452276052.n292. ISBN 9781412986892.

  • Botvinick, M., & Braver, T. (2015). Motivation and cognitive control. Annual Review of Psychology, 66(1), 83–113.

  • Brownstein, M. (2018). The implicit mind: Cognitive architecture, the self, and ethics. New York, NY: Oxford University Press.

  • Brownstein, M. (2019). Implicit bias. In E. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2019).

  • Burns, D., Parker, M., & Monteith, J. (2017). Self-regulation strategies for combating prejudice. In C. Sibley & F. Barlow (Eds.), The Cambridge Handbook of the Psychology of Prejudice (pp. 500–518).

  • Byrd, N. (2019). What we can (and can’t) infer about implicit bias from debiasing experiments. Synthese.

  • Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356, 183–186.

  • Castelvecchi, D. (2016). Can we open the black box of AI? Nature, 538(7623), 20–23. https://doi.org/10.1038/538020a.

  • Chamorro-Premuzic, T. (2019). Will AI reduce gender bias in hiring? Harvard Business Review.

  • Clabaugh, C., & Matarić, M. (2018). Robots for the people, by the people. Science Robotics, 3(21).

  • Daumeyer, N. M., Onyeador, I. N., Brown, X., & Richeson, J. A. (2019). Consequences of attributing discrimination to implicit vs. explicit bias. Journal of Experimental Social Psychology, 84, 103812.

  • De Houwer, J. (2019). Implicit bias is behavior: A functional-cognitive perspective on implicit bias. Perspectives on Psychological Science, 14(5), 835–840.

  • Devine, P. G., Forscher, P. S., Austin, A. J., & Cox, W. T. (2012). Long-term reduction in implicit race bias: A prejudice habit-breaking intervention. Journal of Experimental Social Psychology, 48(6), 1267–1278. https://doi.org/10.1016/j.jesp.2012.06.003.

  • Doshi-Velez, F., & Kortz, M. (2017). Accountability of AI under the law: The role of explanation. Berkman Klein Center Working Group on Explanation and the Law, Berkman Klein Center for Internet & Society working paper.

  • Dunham, C. R., & Leupold, C. (2020). Third generation discrimination: An empirical analysis of judicial decision making in gender discrimination litigation. DePaul Journal for Social Justice, 13.

  • Eightfold AI. (n.d). Talent Diversity. Retrieved from https://reurl.cc/EKp05m

  • Engelen, B., & Nys, T. (2020). Nudging and autonomy: Analyzing and alleviating the worries. Review of Philosophy and Psychology, 11(1), 137–156.

  • Entelo. (n.d.). Entelo Platform Reports. Retrieved from https://reurl.cc/Gko62y

  • Equal Reality. (n.d.). Retrieved from https://equalreality.com/index

  • FitzGerald, C., Martin, A., Berner, D., & Hurst, S. (2019). Interventions designed to reduce implicit prejudices and implicit stereotypes in real world contexts: A systematic review. BMC Psychology, 7(1), 29. https://doi.org/10.1186/s40359-019-0299-7.

  • Floridi, L. (2015). The ethics of information. Oxford University Press.

  • Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review.

  • Foley, M., & Williamson, S. (2018). Does anonymising job applications reduce gender bias? Understanding managers’ perspectives. Gender in Management, 33(8), 623–635. https://doi.org/10.1108/GM-03-2018-0037.

  • Forscher, P. S., Mitamura, C., Dix, E. L., Cox, W. T., & Devine, P. G. (2017). Breaking the prejudice habit: Mechanisms, timecourse, and longevity. Journal of Experimental Social Psychology, 72, 133–146.

  • Forscher, P. S., Lai, C. K., Axt, J. R., Ebersole, C. R., Herman, M., Devine, P. G., & Nosek, B. A. (2019). A meta-analysis of change in implicit bias. Journal of Personality and Social Psychology, 117, 522–559.

  • Galinsky, A. D., & Moskowitz, G. B. (2000). Perspective-taking: Decreasing stereotype expression, stereotype accessibility, and in-group favoritism. Journal of Personality and Social Psychology, 78(4), 708.

  • Garcia, M. (2016). Racist in the machine: The disturbing implications of algorithmic bias. World Policy Journal, 33(4), 111–117.

  • Gollwitzer, P. M. (1999). Implementation intentions: Strong effects of simple plans. American Psychologist, 54(7), 493–503. https://doi.org/10.1037/0003-066X.54.7.493.

  • Hajian, S., Bonchi, F., & Castillo, C. (2016). Algorithmic bias: From discrimination discovery to fairness-aware data mining. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 2125-2126).

  • Haslanger, S. (2012). Resisting reality. Oxford: OUP.

  • HireVue. (2019). CodeVue offers powerful new anti-cheating capability in coding assessment tests. Retrieved from https://reurl.cc/24D9An

  • HireVue. (n.d.). HireVue video interviewing software. Retrieved from https://reurl.cc/NapMKk

  • Hiscox, M. J., Oliver, T., Ridgway, M., Arcos-Holzinger, L., Warren, A., & Willis, A. (2017). Going blind to see more clearly: Unconscious bias in Australian Public Service shortlisting processes. Behavioural Economics Team of the Australian Government.

  • Hodson, G., Dovidio, F., & Gaertner, L. (2002). Processes in racial discrimination. Personality and Social Psychology Bulletin, 28(4), 460–471.

  • Holpuch, A., & Solon, O. (2018, May 1). Can VR teach us how to deal with sexual harassment? The Guardian. Retrieved from https://reurl.cc/A1KreQ

  • Holroyd, J., & Sweetman, J. (2016). The heterogeneity of implicit biases. In M. Brownstein & J. Saul (Eds.), Implicit Bias and philosophy, volume 1: Metaphysics and epistemology. Oxford University Press.

  • Huebner, B. (2016). Implicit bias, reinforcement learning, and scaffolded moral cognition. In M. Brownstein & J. Saul (Eds.), Implicit bias and philosophy (Vol. 1). Oxford: Oxford University Press.

  • Human Rights Watch. (2019). World report 2019. https://reurl.cc/6g641d

  • Hung, T.-W. (2020). A preliminary study of normative issues of AI prediction. EurAmerica, 50(2), 205–227.

  • Hung, T.-W., & Yen, C.-P. (2020). On the person-based predictive policing of AI. Ethics and Information Technology. https://doi.org/10.1007/s10676-020-09539-x.

  • IBM Knowledge Center (n.d.). Retrieved from https://reurl.cc/W4k9DO

  • IEEE Global Initiative. (2016). Ethically aligned design. IEEE Standards, v1.

  • Interviewing.io. (n.d.) Retrieved from https://interviewing.io/

  • Jarrahi, M. (2018). Artificial intelligence and the future of work. Business Horizons, 61(4), 577–586.

  • Krause, A., Rinne, U., & Zimmermann, K. (2012). Anonymous job applications in Europe. IZA Journal of European Labor Studies, 1(1), 5.

  • Lai, C. K., & Banaji, M. (2019). The psychology of implicit intergroup bias and the prospect of change. In D. Allen & R. Somanathan (Eds.), Difference without domination: Pursuing justice in diverse democracies. Chicago, IL: University of Chicago Press.

  • Lai, C. K., Marini, M., Lehr, A., Cerruti, C., Shin, L., Joy-Gaba, A., et al. (2014). Reducing implicit racial preferences: I. A comparative investigation of 17 interventions. Journal of Experimental Psychology: General, 143(4), 1765.

  • Lai, C. K., Skinner, L., Cooley, E., Murrar, S., Brauer, M., Devos, T., et al. (2016). Reducing implicit racial preferences: II. Intervention effectiveness across time. Journal of Experimental Psychology: General, 145(8), 1001.

  • Lara, F., & Deckers, J. (2019). Artificial intelligence as a Socratic assistant for moral enhancement. Neuroethics. https://doi.org/10.1007/s12152-019-09401-y.

  • Liao, S., & Huebner, B. (2020). Oppressive Things. Philosophy and Phenomenological Research. https://doi.org/10.1111/phpr.12701.

  • Lu, J., & Li, D. (2012). Bias correction in a small sample from big data. IEEE Transactions on Knowledge and Data Engineering, 25(11), 2658–2663.

  • MacDorman, K. F., & Chattopadhyay, D. (2016). Reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not. Cognition, 146, 190–205.

  • Machery, E. (2016). De-freuding implicit attitudes. In M. Brownstein & J. Saul (Eds.), Implicit bias and philosophy, Metaphysics and epistemology (Vol. 1, pp. 104–129). Oxford: Oxford University Press.

  • Madary, M., & Metzinger, T. K. (2016). Real virtuality: A code of ethical conduct. Recommendations for good scientific practice and the consumers of VR-technology. Frontiers in Robotics and AI, 3, 3. https://doi.org/10.3389/frobt.2016.00003.

  • Madva, A. (2017). Biased against debiasing: On the role of (institutionally sponsored) self-transformation in the struggle against prejudice. Ergo, 4.

  • Madva, A., & Brownstein, M. (2018). Stereotypes, prejudice, and the taxonomy of the implicit social mind. Noûs, 52(3), 611–644.

  • Miller, S. (2017). Institutional responsibility. In M. Jankovic & K. Ludwig (Eds.), The Routledge handbook of collective intentionality (pp. 338–348). New York: Routledge.

  • Miller, S. (2018). Dual use science and technology, ethics and weapons of mass destruction. Springer.

  • Monteith, J., Woodcock, A., & Lybarger, E. (2013). Automaticity and control in stereotyping and prejudice. Oxford: OUP.

  • Mori, M. (1970/2012). The uncanny valley (K. F. MacDorman & N. Kageki, trans.). IEEE Robotics and Automation, 19(2), 98–100. https://doi.org/10.1109/MRA.2012.2192811.

  • Mya. (n.d.). Meet Mya. Retrieved from https://mya.com/meetmya

  • Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342.

  • Ofosu, E. K., Chambers, M. K., Chen, J. M., & Hehman, E. (2019). Same-sex marriage legalization associated with reduced implicit and explicit antigay bias. Proceedings of the National Academy of Sciences, 116, 8846–8851.

  • Paiva, A., Santos, P., & Santos, F. (2018). Engineering pro-sociality with autonomous agents. In Proceedings of the AAAI Conference on Artificial Intelligence.

  • Peck, T., Seinfeld, S., Aglioti, S., & Slater, M. (2013). Putting yourself in the skin of a black avatar reduces implicit racial bias. Consciousness and Cognition, 22(3), 779–787.

  • Pymetrics. (n.d.). Retrieved from https://www.pymetrics.com

  • Régner, I., Thinus-Blanc, C., Netter, A., Schmader, T., & Huguet, P. (2019). Committees with implicit biases promote fewer women when they do not believe gender bias exists. Nature Human Behaviour, 1–9.

  • Richardson, R., Schultz, J., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review, 94, 192–233.

  • Samek, W., Wiegand, T., & Müller, K.-R. (2017). Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. ITU Journal: ICT Discoveries, 1.

  • Saul, J. (2018). Should we tell implicit bias stories? Disputatio, 10(50), 217–244.

  • Savulescu, J., & Maslen, H. (2015). Moral enhancement and artificial intelligence. In J. Romportl, E. Zackova, & J. Kelemen (Eds.), Beyond artificial intelligence (pp. 79–95). Springer.

  • Schwitzgebel, E. (2013). A dispositional approach to attitudes: Thinking outside of the belief box. In N. Nottelmann (Ed.), New essays on belief. New York: Palgrave Macmillan.

  • Seibt, J., & Vestergaard, C. (2018). Fair proxy communication. Research Ideas and Outcomes, 4, e31827.

  • Sharda, R., Delen, D., & Turban, E. (2020). Analytics, data science, & artificial intelligence: Systems for decision support. Pearson.

  • Sheridan, T. B. (2016). Human–robot interaction. Human Factors: The Journal of the Human Factors and Ergonomics Society, 58(4), 525–532. https://doi.org/10.1177/0018720816644364.

  • Skewes, J., Amodio, D., & Seibt, J. (2019). Social robotics and the modulation of social perception and bias. Philosophical Transactions of the Royal Society B, 374(1771).

  • Snyder, M., Tanke, E. D., & Berscheid, E. (1977). Social perception and interpersonal behavior: On the self-fulfilling nature of social stereotypes. Journal of Personality and Social Psychology, 35, 655–666.

  • Soon, V. (2019). Implicit bias and social schema. Philosophical Studies, 1–21.

  • Sue, D., Capodilupo, C., Torino, G., Bucceri, J., Holder, A., Nadal, K., & Esquilin, M. (2007). Racial microaggressions in everyday life. American Psychologist, 62(4), 271.

  • Suresh, H., & Guttag, J. V. (2019). A framework for understanding unintended consequences of machine learning. arXiv preprint arXiv:1901.10002.

  • Surowiecki, J. (2005). The wisdom of crowds. New York, NY: Anchor Books.

  • Sweeney, L. (2013). Discrimination in online ad delivery. Queue, 11(3).

  • Taddeo, M. (2019). Three ethical challenges of applications of artificial intelligence in cybersecurity. Minds and Machines, 29(2), 187–191.

  • Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.

  • Tankard, M. E., & Paluck, E. L. (2017). The effect of a supreme court decision regarding gay marriage on social norms and personal attitudes. Psychological Science, 28, 1334–1344.

  • Textio. (n.d.). Textio hire. Retrieved from https://textio.com/products/

  • Unbias.io. (n.d.) Retrieved from https://unbias.io/

  • Vantage Point. (n.d.). Retrieved from https://www.tryvantagepoint.com/

  • Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Transparent, explainable, and accountable AI for robotics. Science Robotics, 2(6).

  • Winsberg, E., Huebner, B., & Kukla, R. (2014). Accountability and values in radically collaborative research. Studies in History and Philosophy of Science Part A, 46, 16–23.

  • Zaleski, K. (2016). Virtual reality could be a solution to sexism in tech. Retrieved from https://reurl.cc/vnezZk

  • Zheng, R. (2018). Bias, structure, and injustice: A reply to Haslanger. Feminist Philosophy Quarterly, 4(1).

Acknowledgements

For helpful discussions and feedback on earlier drafts of this work, thanks to Michael S. Brownstein, Acer Chang, Caitrin Donovan, Ivan Gonzalez-Cabrera, Julia Haas, Richard Heersmink, Bryce Huebner, Calvin Lai, Eric Schwitzgebel, Jacob Sparks, and two anonymous referees.

Funding

This work is supported in part by an Academia Sinica Fellowship to Dr. Linus Ta-Lun Huang, sponsored by Academia Sinica, Taiwan. This research is also funded in part by a grant from the Ministry of Science and Technology, Taiwan, to Dr. Tzu-wei Hung (MOST 107-2410-H-001-101-MY3).

Author information

Correspondence to Linus Ta-Lun Huang.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Lin, Y.-T., Hung, T.-W., & Huang, L. T.-L. (2021). Engineering Equity: How AI Can Help Reduce the Harm of Implicit Bias. Philosophy & Technology, 34(Suppl 1), 65–90. https://doi.org/10.1007/s13347-020-00406-7
