Practical Principles for AI Ethics

Principles for AI are a top-down approach to ethics for artificial intelligence (AI). Lately, lists of AI ethics principles have been appearing everywhere. They are useful not only for guiding AI and its impact but also at a larger social level: because of AI, people are thinking about ethics in a whole new way. How do we define and distill ethics in order to codify it? 

Previously, I wrote an analysis of top-down and bottom-up approaches to ethics for AI, and then explored reinforcement learning as a bottom-up method for teaching AI ethics. In this segment, we address AI principles as a top-down method for working toward an ethical AI. 

Ethical AI Principles

Principles can be broken into two categories: principles for the people who program AI systems to follow, and principles for the AI systems themselves.

Some of the principles for people, mainly programmers and data scientists, read like commandments. For instance, The Institute for Ethical AI & ML has a list of eight principles geared toward technologists. These include human augmentation, to keep a human in the loop; bias evaluation, to continually monitor bias; explainability and justification, to improve transparency; reproducibility, to ensure reasonably reproducible infrastructure; displacement strategy, to mitigate the impact of automation on workers; practical accuracy, to align with domain-specific applications; trust by privacy, to protect and handle data; and data risk awareness, to consider data and model security (The Institute for Ethical AI & Machine Learning). A minimal code sketch of the bias-evaluation principle appears after the list below.


The Responsible Machine Learning Principles:

  • Human Augmentation

  • Bias Evaluation

  • Explainability and Justification

  • Reproducibility

  • Displacement Strategy

  • Practical Accuracy

  • Trust by Privacy

  • Data Risk Awareness
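
To make that principle concrete, here is a minimal sketch of bias evaluation, assuming a binary classifier and a single protected attribute. The metric (demographic parity gap) and the alert threshold are illustrative choices of mine, not something prescribed by the Institute.

```python
# Minimal sketch of ongoing bias evaluation: compare a classifier's
# positive-prediction rates across groups (the "demographic parity gap").
# Metric choice and the 0.1 alert threshold are illustrative, not standard.

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Hypothetical model decisions (1 = approve) and group membership per person
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(y_pred, groups)
if gap > 0.1:  # flag for human review; the threshold is application-specific
    print(f"Bias alert: demographic parity gap = {gap:.2f}")
```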

Other lists of principles are geared toward the ethics of AI systems themselves and what they should adhere to. One such list, published by the National Institute of Standards and Technology (NIST), consists of four principles intended to promote explainability. The first is explanation: a system should provide evidence and reasons for its processes and outputs, be readable by a human, and explain its algorithms. The remaining three expand on this. The second, meaningfulness, recommends that a system's explanations be understandable to its users, with methods to evaluate that meaningfulness. The third is explanation accuracy: an explanation must correctly reflect the reason(s) the system generated its output. The fourth is knowledge limits: a system should operate only under the conditions for which it was designed and should not give overly confident answers in areas where it has limited knowledge; for example, a system trained to classify birds being shown an apple. (Marengo, 2021)
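
As a rough illustration of the knowledge-limits principle, here is a hypothetical sketch of a bird classifier that abstains when its confidence is low (say, when shown an apple) instead of returning an overconfident label. The classes, scores, and threshold are invented for illustration.

```python
# Sketch of the "knowledge limits" principle: a bird classifier that abstains
# when its confidence is low (e.g., when handed an apple) rather than
# returning an overconfident label. All values here are invented.

BIRD_CLASSES = ["sparrow", "robin", "crow"]
CONFIDENCE_FLOOR = 0.70  # illustrative threshold, tuned per application

def classify_with_limits(probabilities):
    """probabilities: dict mapping each class in BIRD_CLASSES to a model score."""
    best = max(probabilities, key=probabilities.get)
    if probabilities[best] < CONFIDENCE_FLOOR:
        return "out of scope: input may not be a bird"
    return best

print(classify_with_limits({"sparrow": 0.91, "robin": 0.06, "crow": 0.03}))
print(classify_with_limits({"sparrow": 0.40, "robin": 0.35, "crow": 0.25}))  # apple-like input
```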

Many of these principles overlap across corporations and agencies. The Berkman Klein Center for Internet & Society at Harvard has published a detailed graphic and accompanying information (Fjeld and Nagy, 2020) that gives a great overview of forty-seven principles that various organizations, corporations, and other entities are adopting, where they overlap, and how they are defined. 

The authors provide many lists and descriptions of ethical principles for AI and categorize them into eight thematic trends: Privacy, Accountability, Safety and security, Transparency and explainability, Fairness and non-discrimination, Human control of technology, Professional responsibility, and Promotion of human values. (Fjeld and Nagy, 2020)

The Illusion of Ethical AI

One particular principle I see as missing from these lists concerns taking care of the non-human world. As Boddington states in her book Towards a Code of Ethics for Artificial Intelligence, “. . . we are changing the world, AI will hasten these changes, and hence, we’d better have an idea of what changes count as good and what counts as bad.” (Boddington, 2018) We will all have different opinions on this, but it needs to be part of the discussion. We can’t continue to destroy the planet while trying to create super AI and still be under the illusion that our ethical principles are saving the world. 

This is also a cautionary tale, for many of these principles are theoretically sound yet act as a veil that presents the illusion of ethics. That is dangerous because it makes us feel like we are practicing ethics while business carries on as usual. Part of the reason is that the field of ethical AI development is so new: little research has yet been done to ensure that its overall impact benefits society. “Despite the proliferation of these ‘AI principles,’ there has been little scholarly focus on understanding these efforts either individually or as contextualized within an expanding universe of principles with discernible trends.” (Fjeld and Nagy, 2020)

Principles are a two-sided coin. On one hand, making a stated effort to follow a set of ethical principles is good: it is beneficial for people to think about doing what is right and ethical rather than blindly writing code that could be detrimental in unforeseen ways.

On the other hand, some principles are simple in appearance yet incredibly challenging in practice. For example, take the commonly adopted principle of transparency: there is quite a difference between saying that algorithms and machine learning should be explainable and actually developing ways to see inside the black box. As datasets get bigger, this presents more and more technical challenges. (Boddington, 2018)
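
To give a sense of what "seeing inside the black box" can look like in practice, here is a minimal sketch of one common model-agnostic probe, permutation importance, run on a synthetic stand-in dataset. It is one technique among many, not the method Boddington describes.

```python
# One concrete transparency technique: permutation importance, a
# model-agnostic probe that shuffles each feature and measures the
# resulting drop in score. Model and data here are synthetic stand-ins.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {score:.3f}")
```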

Furthermore, some of the principles can conflict with each other, which can land us in a less ethical place than where we started. For example, transparency can conflict with privacy, another popular principle; the sketch below illustrates one such trade-off. A lot of complex problems arise here, and I hope to see them addressed quickly and thoroughly as we move forward.
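
As a toy illustration of that tension, consider differential privacy, one common privacy technique: adding calibrated noise to published statistics protects individuals, but it makes the released figures, and any explanation built on them, deliberately less exact. The count and epsilon below are hypothetical.

```python
# Toy illustration of the privacy/transparency tension: a differentially
# private release adds Laplace noise to an exact statistic, so the published
# figure protects individuals but is no longer fully exact.

import numpy as np

rng = np.random.default_rng(0)

true_count = 42   # exact statistic, e.g., loan denials in a small group
epsilon = 0.5     # privacy budget: smaller epsilon = more privacy, more noise

# Laplace mechanism for a count query (sensitivity 1): noise scale = 1/epsilon
noisy_count = true_count + rng.laplace(scale=1.0 / epsilon)
print(f"exact: {true_count}, privacy-preserving release: {noisy_count:.1f}")
```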

Overall, we want these concepts in people's minds: fairness, accountability, and transparency. These are the core tenets and namesake of the FAccT conference, which addresses these principles in depth. It is incredibly important for corporations and programmers to be concerned about the commonly addressed themes of bias, discrimination, oppression, and systemic violence. And yet, these principles can make us feel like we are doing the right thing. How much does writing out these ideals actually change things? 

The AI Ethical Revolution We Need

In order for AI to be ethical, A LOT has to change, and not just in the tech world. There seems to be an omission of the unspoken principles: the value of money for corporations and those in power, and of convenience for those who can afford it. If we are trying to create fairness, accountability, and transparency in AI, we need to do some serious work on society to shift our core principles away from money and convenience and toward taking care of everyone’s basic needs and the Earth.

Could AI be a tool that has the side effect of starting an ethics revolution? 

How do we accomplish this? The language that we use is important, especially when it comes to principles. Moss and Metcalf point out the importance of using market-friendly terms: if we want morality to win out, we need to justify the organizational resources it requires, because more often than not, companies will choose profit over social good. (Moss and Metcalf, 2019)

Whittlestone et al. describe the need to focus on areas of tension in AI ethics, and point out the ambiguity of terms like ‘fairness’, ‘justice’, and ‘autonomy’. The authors prompt us to question how these terms might be interpreted differently across various groups and contexts. (Whittlestone et al., 2019)

They go on to say that principles need to be formalized into standards, codes, and ultimately regulation in order to be useful in practice. They draw attention to the importance of acknowledging tensions between high-level ethical goals, which can differ from and even contradict each other. To be effective, guidance must include some measure of how to resolve different scenarios, and to reflect genuine agreement, it must acknowledge and accommodate different perspectives and values as much as possible. (Whittlestone et al., 2019)

The authors then introduce four reasons that discussing tensions is beneficial and important for AI ethics:

  1. Bridging the gap between principles and practice

  2. Acknowledging differences in values

  3. Highlighting areas where new solutions are needed

  4. Identifying ambiguities and knowledge gaps

Each of these needs ongoing consideration, as these tensions will not be resolved overnight. In particular, building a bridge between principles and practice is important, as I have argued above.

To wrap up, I will share this quote because it is incredibly profound:

“We need to balance the demand to make our moral reasoning as robust as possible, with safeguarding against making it too rigid and throwing the moral baby out with the bathwater by rejecting anything we can’t immediately explain. This point is highly relevant both to drawing up codes of ethics and to the attempts to implement ethical reasoning in machines.” (Boddington, 2018, pp. 18-19) 

In conclusion, codes of ethics, or ethical principles, for AI are important to have, and I like the conversations that are being started because of their existence. However, it can’t stop there. I am excited to see more ways that these principles are put into action, and to see technologists and theorists working together to investigate how to make them work. I also hope that we can open minds to ideas beyond making money for corporations and creating conveniences, and toward addressing tensions and truly creating a world that works for everyone. 

Citations

ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT). (2021). Retrieved January 7, 2022, from https://facctconference.org/

AI Principles. Future of Life Institute. (2021, December 15). Retrieved December 30, 2021, from https://futureoflife.org/2017/08/11/ai-principles/

Berkman Klein Center Media Library. (n.d.). Retrieved January 8, 2022, from https://wilkins.law.harvard.edu/misc/ 

Boddington, P. (2018). Towards a Code of Ethics for Artificial Intelligence. Springer International Publishing. 

Fjeld, J., & Nagy, A. (2020). Principled Artificial Intelligence. Berkman Klein Center. Retrieved December 30, 2021, from https://cyber.harvard.edu/publication/2020/principled-ai

Marengo, F. (2021). Four principles of explainable AI. LinkedIn. Retrieved January 7, 2022, from https://www.linkedin.com/posts/fmarengo_four-principles-of-explainable-ai-activity-6878970042382356480-updf/

Moss, E., & Metcalf, J. (2019, November 14). The ethical dilemma at the heart of big tech companies. Harvard Business Review. Retrieved December 13, 2021, from https://hbr.org/2019/11/the-ethical-dilemma-at-the-heart-of-big-tech-companies

The Institute for Ethical AI & Machine Learning. (n.d.). The Machine Learning Principles: The 8 principles for responsible development of AI & machine learning systems. Retrieved December 30, 2021, from https://ethical.institute/principles.html

Whittlestone, J., Cave, S., Alexandrova, A., & Nyrup, R. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions. Retrieved December 13, 2021, from http://lcfi.ac.uk/media/uploads/files/AIES-19_paper_188_Whittlestone_Nyrup_Alexandrova_Cave.pdf
