Towards Ethics in AI
“The first step in the evolution of ethics is a sense of solidarity with other human beings.” – Albert Schweitzer
The content of this post follows from an outline submitted for review to the Global Forum on the Ethics of Artificial Intelligence 2024. The argument here follows from the assumption that the moral and ethical obligations of artificial intelligence are not inherent, and that AI agents are a kind of moral tabula rasa; the onus of ensuring the ethical and moral development of technology and artificial intelligence falls upon the social setting within which they are developed, which argues for cooperation between the humanities and STEM-focused disciplines.
The question follows: How can we characterize the alignment of values, morals, or objectives in technological developments, especially in existing AI systems?
I will frame my response to the prompt in three parts: (1) an answer; (2) some problems with the answer; (3) solutions to those problems. Existing AI systems are consistent in their values and goals only insofar as they reflect the values and goals of their societal context; they are, in this sense, a tabula rasa. This leads to challenges in maintaining consistent values and goals, owing to moral relativism and the varying norms of the different groups that commission, design, and present such systems. A solution to this issue follows from ethics and values that emerge through large-scale inclusion, diversity, and interdisciplinary integration. These points are restated below:
- Answer to the prompt: AI as tabula rasa, so the onus falls on the societal context and people working on it to ensure consistent values and goals.
- The problem with the above is moral relativism; that is, assuming a blank slate, whose values and goals are to be ascribed?
- A possible solution to the problem: ethics, values, and morals as emergent; that is, as natural consequences of large-scale inclusion, diversity, and integration.
AI as tabula rasa
First, let us consider the answer: AI as tabula rasa¹. From a technical perspective, most AI systems, particularly machine learning models, do not possess inherent values or goals; they are shaped by the data they are trained on and the objectives set by their designers. Right away, we see the key ingredients of the response: a blank slate, values and goals influenced by data, and the objectives of the designers.
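To make this concrete, here is a minimal sketch. Everything in it is illustrative: the one-parameter model, the toy data, and the numbers are hypothetical and do not come from any system discussed in this post. It shows that the same "blank slate" fit to the same data behaves differently depending entirely on the objective its designer chooses.

```python
# A minimal, illustrative sketch: the "values" of the fitted model come from
# the data and from the designer-chosen objective, not from the model itself.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 2.0 * x + rng.normal(0, 0.1, 200)
y[::10] += 3.0  # a handful of outliers, e.g. from a messy data source

def fit(grad_loss, steps=2000, lr=0.05):
    """Gradient descent on a one-parameter model y_hat = w * x."""
    w = 0.0  # the blank slate: no initial preference at all
    for _ in range(steps):
        w -= lr * grad_loss(w)
    return w

# Designer choice 1: squared error (sensitive to the outliers).
grad_mse = lambda w: np.mean(2 * (w * x - y) * x)
# Designer choice 2: absolute error (largely ignores the outliers).
grad_mae = lambda w: np.mean(np.sign(w * x - y) * x)

print("w under squared error:", fit(grad_mse))   # pulled toward the outliers
print("w under absolute error:", fit(grad_mae))  # closer to the underlying 2.0
```

Whether the model chases the outliers or discounts them is decided not by the machine but by a human's choice of loss function, which is the sense in which direction is received rather than inherent.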
I argue for the interpretation of AI systems as a tabula rasa (J. Young 2019), evoking the same notion elaborated upon in Duschinsky (2012) and Petryszak (1981). John Locke believed that people develop rational knowledge from experiences in the natural world; in the same way, machines are agnostic to the world-context within which they reside, and so are blank slates until trained on data from which to learn and operate².
This, however, depends crucially on the notion that a machine can be considered an ethical agent responsible for its actions at all, which is not a trivial assumption: a machine agent is quite volatile, in that it could just as easily be modified to follow ethically questionable directives or, at the extreme, entirely unethical ones (Vanderelst and Winfield 2016).
This sets the stage: existing AI is agnostic with respect to values and goals. Assuming a priori that AI systems have consistent values or goals absolves the humans involved in them, which is a dangerous position to take; rather, those values and goals are a posteriori: the agent needs and receives direction and directives from humans, neither of which is inherent to the machine itself (Gabriel 2020). This leads naturally to the problem with this answer, however: if a machine receives its values and goals from a human or a group of humans, what is the nature of the values and goals it is receiving?
Moral Relativism
Consider the following. English is the de facto national language of the United States, but the United States has no official language (at the federal level). Proponents of English as the official language argue for consistency, nationalism, practicality, opportunity, and the like as reasons for instituting English as the national language (Marshall 1986; Califa 1989). English is likewise a lingua franca across many areas of science, entertainment, and industry (Elder and Davies 2006; Seidlhofer 2005; Jenkins 2009; Jenkins, Cogo, and Dewey 2011). “English”, however, is used here nebulously: whose English? Perhaps Midwestern English; then Wisconsin’s in particular; then, more particular still, that of southeastern Wisconsin? Perhaps Gullah? Perhaps the English spoken in Northumberland?
The choice of one language variation over another inherently privileges that choice over the others. In language education, teachers “correcting” students’ grammar sends several messages: the standard dialect is the best dialect; other dialects are not appropriate for smart, educated people; other dialects are a threat to literacy and education (Schildkraut 2001; Pakir 2001). Official preference for one variation over another likewise has consequences, including internal conflicts, the relegation of minority languages to new and possibly lower places in socio-political and socio-cultural hierarchies, and the reallocation of resources (Stalker 1988; Pakir 2001).
In the same way, just as there is contentious debate about whose English counts as “standard” or “official”, determining whose values and goals AI should reflect is contentious; indeed, the risk of privileging certain viewpoints or norms over others, usually from some position of power, colors the moral and ethical landscape. In AI, this is akin to the issue of dataset bias: when an AI system is trained on data that reflects a certain subset of societal norms or values, it inherently becomes biased towards (or against) those norms, potentially marginalizing other perspectives. The immediate question that is raised, then, is whose values and goals should be considered.
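To make the dataset-bias analogy concrete, here is a hedged sketch: the groups, proportions, and labels below are invented for the example and correspond to nothing in the cited work. Even a trivially simple learned rule reproduces whichever group dominated its training data.

```python
# Illustrative only: a toy training set that over-represents one group, and a
# deliberately naive "model" that simply adopts the majority label it saw.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 90% group A, 10% group B, with different labeling norms.
groups = np.array(["A"] * 900 + ["B"] * 100)
labels = np.concatenate([
    rng.binomial(1, 0.8, 900),   # group A mostly labels "1"
    rng.binomial(1, 0.2, 100),   # group B mostly labels "0"
])

# The naive rule: always predict the majority label observed in training.
majority_label = int(labels.mean() >= 0.5)
predictions = np.full_like(labels, majority_label)

for g in ("A", "B"):
    mask = groups == g
    acc = (predictions[mask] == labels[mask]).mean()
    print(f"group {g}: share of data = {mask.mean():.0%}, accuracy = {acc:.0%}")
# Typical output: roughly 80% accuracy for group A and 20% for group B; the
# rule reflects whichever norms dominated its training data.
```

The point is not the particular numbers but the mechanism: nothing in the learning procedure privileged group A; the privileging was done by whoever assembled the data.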
Presently, we are running headlong into the consequences of this framework. Given a blank slate, we populate it with data of perhaps dubious origin (DeBrusk 2018; Mehrabi et al. 2021), as in virtually any data-trained AI; with copyrighted data (Margoni 2018; Sag 2018; Paullada et al. 2021; Lambert 2017), as we see in generative AI for image creation and text generation; and with personal biases, especially in the context of LLMs (Leavy et al. 2020; Kotek, Dockum, and Sun 2023), to name a few. The question then becomes how to balance diverse values and goals without falling into moral relativism, where every viewpoint is seen as equally valid, leading to a lack of clear direction for AI development.
The societal context and the biases of the people working on AI systems significantly influence the values and goals that these systems embody. The design choices, from data selection to algorithmic priorities, reflect the creators’ values and societal norms. This dependency on human input means that AI systems’ consistency in values and goals is as variable as the humans and societies programming them. Thus, ensuring consistency requires careful consideration of these decidedly human factors.
Ethics as Emergent
We last consider, then, a solution to the problem of relativism: emergent ethics. Moral and ethical relativism in AI is as much a social problem as it is a technical one, though the technically minded may fall into the familiar trap of the hammer that sees every problem as nail-shaped, when the most advantageous tools may be socio-cultural rather than strictly technocratic (Conitzer et al. 2017; Villegas-Galaviz and Martin 2023). I advocate, therefore, for the following solution to the problem stated above: ethics, values, and morals should emerge as a natural consequence of inclusion, diversity, and interdisciplinary integration (Hinnefeld 2012; Minati 2011).
Methods of ensemble learning in AI follow the same principle (Dong et al. 2020; Zhou 2021; Sagi and Rokach 2018): two heads are better than one. As discussed above, the socio-cultural importance of AI values and goals cannot be overstated, so let us use that to our advantage (and simultaneously mitigate the technocratic hammer and nails): inviting insights from the humanities and social sciences is the proverbial second of the two heads (or the $(n+1)$-th head, as it were).
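To illustrate the statistical intuition behind that claim, here is a minimal sketch of majority voting over independent "heads". The accuracy and head count are assumed for the example; they are not figures from the cited surveys.

```python
# Illustrative sketch: several individually mediocre but independent "heads",
# combined by majority vote, outperform any one of them.
import numpy as np

rng = np.random.default_rng(2)
n_examples, n_heads, p_correct = 10_000, 11, 0.65

# Each head is right on each example independently with probability 0.65.
head_is_correct = rng.random((n_heads, n_examples)) < p_correct

single_accuracy = head_is_correct[0].mean()
ensemble_accuracy = (head_is_correct.sum(axis=0) > n_heads / 2).mean()

print(f"one head alone:         {single_accuracy:.1%}")   # roughly 65%
print(f"majority of {n_heads} heads:  {ensemble_accuracy:.1%}")  # roughly 85%
```

The benefit hinges on the heads being genuinely diverse: perfectly correlated heads vote identically and add nothing, which is precisely why the additional heads argued for here should come from outside the technical disciplines.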
The idea here is that, by integrating cooperation from a wide range of sources, AI systems tend towards holistic and inclusive sets of values and goals (Ciston 2019; McDonald and Pan 2020; Bauer and Lizotte 2021; Kong 2022), especially when the social and political landscapes are likewise diverse and dynamic. Humans are inherently social, even though our understanding of the neural factors that govern social behavior is limited (S. Young 2008), so why not take advantage of that? It stands to reason, then, that consistent values or goals for AI should align with the broadest social orientation of its widest context (Amershi et al. 2019; Yang et al. 2020; Maadi, Akbarzadeh Khorshidi, and Aickelin 2021; Xu et al. 2021). The hope, then, is that through large-scale inclusion and diversity (ethnic, gender, academic, socio-economic, etc.), the ethics, values, and morals that emerge in AI systems will be representative and less biased towards (or against) any single group or perspective.
Conclusion
Humorously, an advisor once quipped: “There are too many computer scientists in computer science”. It is erroneous to assume that development is agnostic to social dynamics, or that it is value-neutral. Advances in these foundations are not the sole domain of science and technology, but of sociology and culture as well. The question of how to proceed, and what form potential harms may take, is often visited and revisited, and it always should be. A reasonable place to start is outlined for us by the Alan Turing Institute (Leslie 2019), which identifies evergreen issues such as bias and discrimination, denial of autonomy, opaque decision-making, infringement and invasion of privacy, and unreliable and unsafe results. Resolving these issues and equipping AI systems with sound values and goals are, as argued, interdisciplinary efforts, and they demand accountability at every stage of development.
I argue that AI systems are blank slates; the responsibility for endowing them with appropriate ethics and morality falls on the humans working with them. The challenge of moral relativism (determining whose values and goals are represented) quickly stifles the ambitious innovator; it must be addressed, and it enthusiastically invites an interdisciplinary approach in which ethics emerge from diversity and inclusion in AI development. The hope is that this can lead to the emergence of more universally representative and ethically sound AI systems, reflecting values and goals that are not just technically sound but also socially responsible and culturally inclusive.
References
Amershi, Saleema, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N Bennett, Kori Inkpen, et al. 2019. “Guidelines for human-AI interaction.” In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–13.
Bauer, Greta R, and Daniel J Lizotte. 2021. Artificial intelligence, intersectionality, and the future of public health, 1.
Califa, Antonio J. 1989. “Declaring English the official language: Prejudice spoken here.” Harv. CR-CLL Rev. 24:293.
Ciston, Sarah. 2019. “Intersectional AI is essential: polyvocal, multimodal, experimental methods to save artificial intelligence.” Journal of Science and Technology of the Arts 11 (2): 3–8.
Conitzer, Vincent, Walter Sinnott-Armstrong, Jana Schaich Borg, Yuan Deng, and Max Kramer. 2017. “Moral Decision Making Frameworks for Artificial Intelligence.” Proceedings of the AAAI Conference on Artificial Intelligence 31 (February). https://doi.org/10.1609/aaai.v31i1.11140.
DeBrusk, Chris. 2018. “The risk of machine-learning bias (and how to prevent it).” MIT Sloan Management Review 15:1.
Dong, Xibin, Zhiwen Yu, Wenming Cao, Yifan Shi, and Qianli Ma. 2020. “A survey on ensemble learning.” Frontiers of Computer Science 14:241–258.
Duschinsky, Robert. 2012. “Tabula rasa and human nature.” Philosophy 87 (4): 509–529.
Elder, Catherine, and Alan Davies. 2006. “Assessing English as a lingua franca.” Annual review of applied linguistics 26:282–304.
Gabriel, Iason. 2020. “Artificial Intelligence, Values, and Alignment.” Minds and Machines 30, no. 3 (September): 411–437. https://doi.org/10.1007/s11023-020-09539-2.
Hinnefeld, Henry. 2012. “Morality as an Emergent Property of Human Interaction.” https://api.semanticscholar.org/CorpusID:43742499.
Jenkins, Jennifer. 2009. “English as a lingua franca: Interpretations and attitudes.” World Englishes 28 (2): 200–207.
Jenkins, Jennifer, Alessia Cogo, and Martin Dewey. 2011. “Review of developments in research into English as a lingua franca.” Language teaching 44 (3): 281–315.
Kong, Youjin. 2022. “Are “intersectionally fair” ai algorithms really fair to women of color? a philosophical analysis.” In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency, 485–494.
Kotek, Hadas, Rikker Dockum, and David Sun. 2023. “Gender bias and stereotypes in Large Language Models.” In Proceedings of The ACM Collective Intelligence Conference, 12–24.
Lambert, Paul. 2017. “Computer generated works and copyright: selfies, traps, robots, AI and machine learning.”
Leavy, Susan, Gerardine Meaney, Karen Wade, and Derek Greene. 2020. “Mitigating gender bias in machine learning data sets.” In Bias and Social Aspects in Search and Recommendation: First International Workshop, BIAS 2020, Lisbon, Portugal, April 14, Proceedings 1, 12–26. Springer.
Leslie, David. 2019. “Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector.” The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529.
Maadi, Mansoureh, Hadi Akbarzadeh Khorshidi, and Uwe Aickelin. 2021. “A review on human–ai interaction in machine learning and insights for medical applications.” International journal of environmental research and public health 18 (4): 2121.
Margoni, Thomas. 2018. “Artificial Intelligence, Machine Learning and EU Copyright Law: Who Owns AI?”
Marshall, David F. 1986. “The question of an official language: Language rights and the English Language Amendment.”
McDonald, Nora, and Shimei Pan. 2020. “Intersectional AI: A study of how information science students think about ethics and their impact.” Proceedings of the ACM on Human-Computer Interaction 4 (CSCW2): 1–19.
Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. “A survey on bias and fairness in machine learning.” ACM computing surveys (CSUR) 54 (6): 1–35.
Minati, Gianfranco. 2011. “Ethics as Emergent Property of the Behavior of Living Systems.” https://api.semanticscholar.org/CorpusID:17367939.
Pakir, Anne. 2001. “Bilingual education with English as an official language: Sociocultural implications.” Georgetown University round table on languages and linguistics 1999, 341.
Paullada, Amandalynne, Inioluwa Deborah Raji, Emily M Bender, Emily Denton, and Alex Hanna. 2021. “Data and its (dis) contents: A survey of dataset development and use in machine learning research.” Patterns 2 (11).
Petryszak, Nicholas G. 1981. “Tabula rasa–its origins and implications.” Journal of the History of the Behavioral Sciences 17 (1): 15–27.
Sag, Matthew. 2018. “The new legal landscape for text mining and machine learning.” J. Copyright Soc’y USA 66:291.
Sagi, Omer, and Lior Rokach. 2018. “Ensemble learning: A survey.” Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery 8 (4): e1249.
Schildkraut, Deborah J. 2001. “Official-English and the states: Influences on declaring English the official language in the United States.” Political Research Quarterly 54 (2): 445–457.
Seidlhofer, Barbara. 2005. “English as a lingua franca.” ELT journal 59 (4): 339–341.
Silver, David, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, et al. 2017. “Mastering the game of Go without human knowledge.” Nature 550 (7676): 354–359.
Stalker, James C. 1988. “Official English or English Only.” The English Journal 77 (3): 18–23.
Vanderelst, Dieter, and Alan F. T. Winfield. 2016. “The Dark Side of Ethical Robots.” arXiv preprint arXiv:1606.02583. http://arxiv.org/abs/1606.02583.
Villegas-Galaviz, C., and K. Martin. 2023. “Moral distance, AI, and the ethics of care.” AI and Soc, https://doi.org/10.1007/s00146-023-01642-z.
Xu, Wei, Marvin J Dainoff, Liezhong Ge, and Zaifeng Gao. 2021. “From human-computer interaction to human-AI interaction: new challenges and opportunities for enabling human-centered AI.” arXiv preprint arXiv:2105.05424.
Yang, Qian, Aaron Steinfeld, Carolyn Rosé, and John Zimmerman. 2020. “Re-examining whether, why, and how human-AI interaction is uniquely difficult to design.” In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–13.
Young, John. 2019. “Tabula Rasa - Rethinking the intelligence of machine minds.” Accessed November 24, 2023. https://www.artnome.com/news/2019/9/17/tabula-rasa-rethinking-the-intelligence-of-machine-minds.
Young, Simon. 2008. “The neurobiology of human social behaviour: an important but neglected topic.” J Psychiatry Neurosci. 33 (5): 391–392.
Zhou, Zhi-Hua. 2021. Ensemble learning. Springer.
Footnotes
1. Note that the notion of tabula rasa used here refers to the morality and ethics of a machine; that is, its morality and ethics are blank slates. This is as opposed to Silver et al. (2017), where tabula rasa refers to “an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules”.
2. Young is quick to point out, however, that the data are volatile and curated by humans, which is decidedly not the natural world; the analogy nevertheless remains.