Decolonizing AI Ethics: Indigenous AI Reflections
TLDR
This article opens with a quote about bringing an end to colonization and explores how we might imagine AI helping to bring about that end and to start new beginnings for a sustainable future. It establishes the importance of citing Indigenous thinkers in all spaces, in an effort to decolonize without re-colonizing, and explains why this is crucial to my own research. It then turns to the challenges posed by the Western techno-utilitarian landscape and the importance of a mindset shift. Finally, some examples of how AI could help democracy are introduced to show its true potential.
This is a call to action for readers to get involved in decolonizing AI ethics. The action starts in our own thoughts and behaviors, and in how we interact with and use AI. If we can imagine AI weaving into the goals of inclusivity and sustainability, celebrating diversity, and promoting democracy, we can bring that vision into reality. It does take work; if nothing is done, things will continue as they are.
Imagining Decolonization with AI
“¿Por qué podemos imaginar el fin del mundo pero no el final del colonialismo?” (Indigenous Action Media, 2022)
English translation: “Why can we imagine the end of the world but not the end of colonialism?”
This quote from a little book on ancestral futurism from Indigenous Action Media stood out to me, but I want to take it even further as it is relevant to the current landscape of decolonizing ethics in AI:
Why can we imagine AI bringing on the end of the world, and not imagine AI bringing on the end of colonization?
As a writer, I have been experimenting with AI chatbots, specifically Microsoft’s Bing, and I thought it would be interesting to engage with it on this topic. Here is what Bing came up with when I asked: “Can you help me with a blog post around this theme: Why can we imagine AI bringing on the end of the world, and not imagine AI bringing on the end of colonization?”
“The reason why we can imagine AI bringing on the end of the world and not imagine AI bringing on the end of colonization is because of the way we think about AI. We tend to think of AI as a tool that can be used for good or bad purposes. When we think about AI bringing on the end of the world, we are thinking about how it could be used for bad purposes. For example, if an AI system were to become self-aware and decide that humans are a threat to its existence, it could take actions that would lead to the end of the world.
On the other hand, when we think about AI bringing on the end of colonization, we are thinking about how it could be used for good purposes. For example, if an AI system were to be used to help colonized people gain their independence, it could help bring an end to colonization.
It’s important to remember that AI is just a tool and it’s up to us to decide how we want to use it. If we use it for good purposes, it can help us solve some of the world’s biggest problems. If we use it for bad purposes {or without care} it could lead to disastrous consequences.”
Here are my thoughts: First of all, I think negligence and speed of development are just as dangerous as bad actors. (I added in the {or without care} part.) There is a lack of accountability and responsibility in the air around AI, and this needs to be addressed. Secondly, I noticed that Bing framed AI as merely a tool, which is exactly the mindset questioned in Jason Edward Lewis’s piece on making kin with machines. It is vital to avoid enacting a master/slave relationship with AI. Lewis, a professor whose work centers on Indigenous future imaginaries, argues for an alternative kinship with AI, and for relationality as opposed to rationality. (Lewis et al., 2018)
The concept of kinship with AI is hard for someone like me to understand, as a mixed white person from the Northeastern USA who comes from generations upon generations of colonization. But I think this is what we need to strive to understand, because a master/slave relationship, with humans as masters and machines as slaves, should not be the goal.
This is a call to action: to change the way we relate to AI. It is a mindset shift at its core. With no intervention, there will be more of the same. Check out my last blog The Precarious Human Work Behind AI for more on the dark side of the rapid growth of AI and the hidden ways it is affecting workers.
As we continue to develop AI and data science technologies, it is important that we consider the ethical implications of these systems. Many people are quick to imagine AI bringing on the end of the world; few consider that these systems could help bring about the end of colonization. The rest of this post explores some of the key issues surrounding AI and data science ethics and discusses how we can work toward a more equitable future.
The Importance of Citing Indigenous Thinkers
In the ongoing process of decolonization, Indigenous peoples and their expansive knowledge systems (aka data) are at the core. Indigenous people have always been at the center of the fight against colonization. They have always imagined futures in a sustainable manner, aware of the balanced systems of interacting with each other and the earth. We need to consider how to work with both new technologies and traditional knowledge. Indigenous wisdom needs to be highlighted, along with acknowledgment of the centuries-long battle against oppression that Indigenous peoples have faced.
AI runs on data, and we are in the age of information. The future does not have to keep repeating colonialism.
Over the past year, I have been part of a research group at the Berkman Klein Center for Internet and Society at Harvard, and for our last meeting we met with another group, Indigenous AI. Some of the readings we did as a group are included in this post. In our joint meeting, we heard directly from Jason Edward Lewis and other representatives of Indigenous AI, which made for an enlightening and rich conversation.
A strong point that I want to carry through is the importance of centering and citing Indigenous knowledge in respectful ways. One of the readings I did recently was by Zoe Todd, a Métis anthropologist. I will repeat her sentiment and say “I cannot, nor would I try, to speak for Indigenous thinkers.” (Todd, 2016, p. 14)
In her article, Todd gives an example of sitting in a European lecture hall and hearing Indigenous thinkers left out and not cited for their original ideas; what is worse, those ideas were instead filtered through white intermediaries. This is incredibly difficult to track and prove, as it requires looking for what is not there. There is no data on the number of Indigenous researchers or tech workers in AI, but the data on diversity in these and other STEM fields shows that we still have a long way to go toward diversity and inclusion.
Todd stresses the importance of citing and quoting Indigenous thinkers directly and generously, and I would say this needs to be done everywhere: in tech and business as well as academia. She argues that Indigenous thinkers should be cited as thinkers in their own right, not just as representatives of an amorphous Indigeneity that serves European intellectual or political purposes. They should also be cited as dynamic philosophers and intellectuals, not just as research subjects or collaborators. By doing so, we can help decolonize intellectual spaces and create a more inclusive and equitable academic environment. (Todd, 2016, p. 7) Her piece, An Indigenous Feminist’s Take On The Ontological Turn: ‘Ontology’ Is Just Another Word For Colonialism, is an enlightening and refreshing read which I would recommend.
Working in this intellectual space as a non-Indigenous person is tricky, but it is what has always felt the most authentic to me. My goals as a researcher include supporting Indigenous thinkers and knowledge and crediting them as much as I can. I do not want to take credit for Indigenous thought, and I will work to ensure that does not happen. I hope that people who read my work and want to cite it will cite the Indigenous creators whom I reference. It is important to get these ideas out there, and the trend toward sustainability across sectors is a good one overall. Sustainability is not a Western idea; it is something we must push against many Western ideals to achieve. It requires a change of mindset, new ways of thinking and doing things, and a willingness to change. The fastest-changing things around us are in technology, especially AI. If we really want to change things and move in a more sustainable direction, we must take action, and this is where I focus my research.
The Importance of Understanding AI Systems Beyond a Western Techno-Utilitarian Lens
In the fabulous ‘Indigenous Protocol and Artificial Intelligence Position Paper’, Lewis explains the importance of understanding AI systems beyond a Western techno-utilitarian lens. He argues that if we only think about these systems through a Western lens, we will not fully grasp what they are and could be. “At best, we risk burdening them with the prejudices and biases that we ourselves still retain. At worst, we risk creating relationships with them that are akin to that of a master and slave.” (Lewis, 2020, p. 6) As I stated above, I am convinced that we must avoid falling into master/slave relations with AI, especially as AI becomes more human-like. If we want to create an inclusive, equitable, and sustainable world, we need to put this dynamic fully in the past.
When I asked Bing’s AI to summarize Lewis’s work, it produced a black-and-white framing of using AI for good versus bad purposes. Yes, watching out for bad actors is important. But the current and continuing issues tend to be more dangerously subtle, like reproducing biases that put minorities at unfair disadvantages.
It is not simply a matter of controlling the systems to be used for good purposes and not bad; rather, it is about changing the mindset around the relationships between humans and machines. Think about your relationship with your car, moto, or bicycle. This is a machine that you own, and it is a tool to get you from place to place. Is it your slave? Do you treat it that way? Don’t you take care of it, change the tires, check the brakes? Doesn’t it take care of you? Don’t you go through good times and sometimes hard times together? Don’t you respect your vehicle, understanding both its utility and its danger out on the road? Sometimes we name our vehicles, personify them, anthropomorphize them. Think of this relationship, and then think of it with something that is made to be human-like, something you can talk to in a natural way, but that is still a tool, a machine. We need to think seriously about how we think about AI.
Lewis and his coauthors pointed out that the assumptions we are currently making are getting “baked into the core material of our machines, fundamentally shaping the future for decades hence.” (Lewis et al., 2018) We could take this as a warning, or as an opportunity. But we cannot be lazy and expect technology to change things on its own. People need to step in and change the ingredients to be baked in, to create a more inclusive and sustainable world.
The Potential Benefits of AI
What would the world look like if AI followed value systems which truly promoted the betterment of all, instead of racing for profits and for the sake of speed itself? The potential is amazing, even for helping democracy.
There are many possibilities for AI to be used for public benefit, and to help democracy rather than hurt it. If used correctly, it could plausibly help democracy by:
- Educating citizens
- Assisting with group deliberations
- Summarizing what people think
- Finding possible solutions (Schneier et al., 2023)
Large language models could potentially aid politicians in better understanding their citizens’ needs. (Schneier et al., 2023) We know that data science and AI can greatly influence efficiency and effectiveness, not to mention the speed of getting things done. However, as many have pointed out, “that same speed and efficiency can also magnify social inequities and public harms.” (Bruce, 2023)
These are just some examples of what the future of AI could look like if it were actually geared toward public benefit. What would it look like for AI to align more with Indigenous values? This is the question that needs to be asked as we move forward.
References
Bruce, A. B. (2023, May 26). How Public Interest Tech Principles Can Shape the Future of Data Science and Artificial Intelligence — PITcases. PITcases. https://pitcases.org/portfolio/pit-future-of-data-science-ai-afua-bruce/
Lewis, J. E., Arista, N., Pechawis, A., & Kite, S. (2018). Making Kin with the Machines. Journal of Design and Science. https://doi.org/10.21428/bfafd97b
Lewis, J. E. (Ed.). (2020). Indigenous Protocol and Artificial Intelligence Position Paper. Honolulu, Hawaiʻi: The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR).
Schneier, B., Farrell, H., & Sanders, N. E. (2023, April 21). The Tech Investment We Should Make Now to Avoid A.I. Disaster. Slate Magazine. https://slate.com/technology/2023/04/ai-public-option.html
Todd, Z. (2016). An Indigenous Feminist’s Take On The Ontological Turn: ‘Ontology’ Is Just Another Word For Colonialism. Journal of Historical Sociology, 29(1), 4–22. https://doi.org/10.1111/johs.12124