The Reasonable Inferences Project
Research on Ethical Aspects of AI Analyzing Facial Features
Faces play an indispensable role in human social life. At present, computer vision artificial intelligence (AI) captures and interprets human faces for a variety of digital applications and services. The ambiguity of facial information has recently led to a debate among scholars in different fields about the types of inferences AI should make about people based on their facial appearance. We ask:
What AI-drawn inferences from human faces are considered reasonable by the public?
What People Think AI Should Infer from Faces (ACM FAccT 2022)
AI research often justifies facial AI inference-making by referring to how people form impressions of one another in first-encounter scenarios. Critics raise concerns about bias and discrimination and warn that facial analysis AI resembles an automated version of physiognomy. What has been missing from this debate, however, is an understanding of how "non-experts" in AI ethically evaluate facial AI inference-making. In a two-scenario vignette study with 24 treatment groups, we show that non-experts (N = 3745) reject facial AI inferences such as trustworthiness and likability from portrait images in both a low-stakes advertising and a high-stakes hiring context. In contrast, non-experts agree with facial AI inferences such as skin color or gender in the advertising context but not in the hiring context. For each AI inference, we ask non-experts to justify their evaluation in a written response. Analyzing 29,760 written justifications, we find that non-experts are either "evidentialists" or "pragmatists": they assess the ethical status of a facial AI inference based on whether they think faces provide sufficient or insufficient evidence for the inference (evidentialist justification) or whether making the inference results in beneficial or detrimental outcomes (pragmatist justification). Non-experts' justifications underscore the normative complexity behind facial AI inference-making: AI inferences with insufficient evidence can be rationalized by considerations of relevance, while irrelevant inferences can be justified by reference to sufficient evidence. We argue that participatory approaches contribute valuable insights for the development of ethical AI in an increasingly visual data culture.
AI-competent Individuals and Laypeople Tend to Oppose Facial Analysis AI (ACM EAAMO 2022)
Recent advances in computer vision have led to a debate about the kinds of conclusions artificial intelligence (AI) should draw about people based on their faces. Some scholars have argued for supposedly "common sense" facial inferences that AI can reliably draw from faces. Other scholars have raised concerns about the automated version of "physiognomic practices" that facial analysis AI could entail. We contribute to this multidisciplinary discussion by exploring how individuals with AI competence and laypeople evaluate facial analysis AI inference-making. The ethical considerations of both groups should inform the design of ethical computer vision AI. In a two-scenario vignette study, we explore how the ethical evaluations of both groups differ across a low-stakes advertisement context and a high-stakes hiring context. In addition to a statistical analysis of AI inference ratings, we apply a mixed-methods approach, using qualitative content analysis to identify justification themes in participants' 2768 written justifications. We find that people with AI competence (N = 122) and laypeople (N = 122; validation N = 102) share many ethical perceptions about facial analysis AI. The application context affects how AI inference-making from faces is perceived. While differences in AI competence had no effect on inference ratings, we observed specific differences in the ethical justifications. A validation dataset of laypeople confirms these results. Our work offers a participatory AI ethics approach to the ongoing policy discussions on the normative dimensions and implications of computer vision AI, and it seeks to inform, challenge, and complement conceptual and theoretical perspectives on computer vision AI ethics.
Publications and Talks
Conference Proceedings
- Ullstein, C., Engelmann, S., Papakyriakopoulos, O., Hohendanner, M., & Grossklags, J. (2022). AI-competent Individuals and Laypeople Tend to Oppose Facial Analysis AI. Proceedings of the Second ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO), Article No. 9. Author Version | Publisher Version (Open Access)
- Engelmann, S., Ullstein, C., Papakyriakopoulos, O., & Grossklags, J. (2022). What People Think AI Should Infer from Faces. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT), pp. 128–141. Author Version | Appendix | Publisher Version (Open Access)
Conference Talks
- Talk on "A Reflection on How Cross-Cultural Perspectives on the Ethics of Facial Analysis AI Can Inform EU Policymaking" at the 2023 European Workshop on Algorithmic Fairness, Winterthur, Switzerland.
- Talk on "How People Ethically Evaluate Facial Analysis AI: A Cross-Cultural Study in Japan, Argentina, Kenya, and the United States" at the 2023 Many Worlds of AI Conference, Cambridge, United Kingdom.
Contact
Chiara Ullstein
chiara.ullstein(at)tum.de
Severin Engelmann
severin.engelmann(at)tum.de
Prof. Jens Grossklags, Ph.D.
jens.grossklags(at)tum.de