The Precarious Human Work Behind AI
AI is now everywhere, but it is far less autonomous than it appears. It is increasingly prevalent across a wide variety of industries, many of which hide the countless workers behind the curtain who make it function, and I am not just talking about the engineers who build it.
It is important to acknowledge the human work behind AI's development and maintenance, from grueling content moderation to rideshare driving to the data all of us generate for the profit of large corporations. This leaves countless workers in precarious positions, stuck in survival mode and forced to adapt as best they can, with low wages and the threat of job loss looming as more tasks are automated.
Anything done in the name of the 'safety and trustworthiness' of AI is truly an afterthought to corporate capital gains. On a podcast with engineers from OpenAI, guests joked that 'Trust and Safety' (T&S) really stands for 'Tradeoffs and Sadness.' (Fagen, 2023) This is a fundamental problem for multiple reasons. In this blog, we will discuss the ways in which the rapid development and deployment of AI is affecting precarious work.
The Human Work Behind Data
Data is the foundation of AI, and it is generated by people. Each day, approximately 328.77 million terabytes of data are created. The work that produces this data is almost never compensated, even though large corporations profit massively from it. How could companies compensate their users for the data they use and profit from? What kinds of laws or policies could address this, and how would they work on a global scale? These are questions we are still grappling with as a society.
Data is the fuel of AI, yet there is a stark lack of control and ownership over it. This raises serious ethical concerns, privacy among them, that are barely covered by inconsistent and often unenforced data protection laws.
What should be done about this aspect of the human work behind AI? It could be seen as a form of ghost work. Should it be compensated, and how would that be implemented? Some companies have taken initiatives here, paying users very small amounts for their data, but the issue is much bigger than that. The data collected is used to target advertising at users, which means further exploitation. It can also be used to feed AI that replaces human work, so the very data you are not paid for could be used both to put you out of a job and to sell you things.
In 2017, the value of the personal details we hand over to companies like Facebook was estimated at about $1,000 per person per year, and it is rising quickly. (Madsbjerg, 2017) The exact value of our data is unclear, even to Google, but it is used for targeted advertising and sold to data brokers, who resell it as a commodity to advertisers, retailers, marketers, government agencies, and other brokerages. According to a report by SecurityMadeSimple.org, the data brokerage industry generates over $200 billion in revenue yearly and continues to grow. Another report, by Maximize Market Research, valued the data broker market at $257.16 billion in 2021 and expects total revenue to grow at 4.5% annually from 2022 to 2029, reaching nearly $365.71 billion. When will we, the users who provide this data, ever see any of these profits?
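The $365.71 billion projection is just compound growth applied to the report's 2021 baseline. A quick sanity check (figures from the report; annual compounding is my assumption):

```python
# Sanity-check the data-broker market projection: $257.16B in 2021,
# growing at 4.5% per year through 2029 (8 years of compounding).
base_2021 = 257.16  # USD billions, from the Maximize Market Research report
cagr = 0.045        # 4.5% annual growth
years = 2029 - 2021

projected_2029 = base_2021 * (1 + cagr) ** years
print(round(projected_2029, 2))  # ≈ 365.71, matching the reported figure
```

The numbers line up, which suggests the headline figure is a straightforward CAGR extrapolation rather than an independent estimate.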
One proposed answer is a universal basic income based on the data we produce. The idea is not new; it was first presented by Jaron Lanier in his 2013 book Who Owns the Future?, which criticizes the tech industry's accumulation and monetization of consumer data without acknowledging any monetary debt to the people who create and hand over all this information for free.
The Exploitation of Workers in AI Moderation and Content Labeling
Now, leaving that can of worms crawling around, let us turn to the low-paid gig work that goes into moderating AI systems, such as scanning content for violence and hate speech or endlessly labeling data. These jobs are often outsourced to workers in the Global South, who are repeatedly exposed to traumatic content and receive little compensation. It is highly exploitative work, with little room for workers to organize and demand workers' rights.
Take, for example, Sama, which bills itself as an "ethical AI" outsourcing company. Sama is headquartered in California and handles content moderation for Facebook. Its Kenya office pays its foreign employees a monthly pre-tax salary of around $528, which includes a monthly bonus for relocating from elsewhere in Africa. After tax, this amounts to around $440 per month; based on a 45-hour work week, that is a take-home wage of roughly $2.20 per hour. Sama employees from within Kenya, who do not receive the relocation bonus, take home the equivalent of around $1.46 per hour after tax. (Perrigo, 2022)
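The hourly figure follows directly from the monthly salary. A rough check, assuming a 45-hour week and an average of 52/12 ≈ 4.33 weeks per month (the averaging convention is my assumption; Time's own method may differ slightly):

```python
# Rough hourly-wage check for the Sama figures reported by Time.
monthly_take_home = 440          # USD after tax, with relocation bonus
hours_per_month = 45 * 52 / 12   # 45-hour weeks, ≈ 195 hours per month

hourly = monthly_take_home / hours_per_month
print(round(hourly, 2))  # ≈ 2.26, consistent with the "roughly $2.20" reported
```

Running the same arithmetic on the lower in-country take-home pay lands near the $1.46 per hour figure, underscoring how far below Silicon Valley pay scales this traumatic work sits.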
Time published a report on Sama detailing a failed worker uprising. The workers faced the trauma of viewing hundreds of horrific pieces of content every day, tasked with deciding within 50 seconds whether each was appropriate for Facebook, while living hand-to-mouth on low salaries and without the support appropriate for such a PTSD-inducing job. When they organized in protest and planned a strike, highly paid executives flew in from San Francisco to 'deal' with the situation. They isolated the spearhead of the workers' alliance and terminated him, painting him as a bully who had forced 100 other workers to sign a petition against the company. (Perrigo, 2022) The real bullies got away with it, because the ultimate goal is to keep Facebook happy. It suits these companies to have low-waged workers with no other options suffer life-long trauma all day, every day. But these workers need fair pay and workers' rights. They need real support for the labor that makes Facebook a safer space, with less hate speech and violent content. They deserve a voice.
Another example is Amazon's Mechanical Turk, or MTurk, a marketplace for human intelligence micro-tasks such as tedious image labeling. The work is extremely low-paid, with no guarantee of payment, poor labor protections, and high levels of exploitation. As of December 2019, MTurk's workers' portal had 536,832 visitors; although the work is demoralizing and pays pennies, many depend on it rather than have no work at all. (Mehrotra, 2020) MTurk has been operating since 2005, still with no worker protections.
The Human Intervention Required for AI Systems: Case Studies from the Global South
Taking a deeper peek behind the curtain, we see that AI systems often require unseen human intervention and workarounds to operate effectively. This goes beyond the desks of technologists, and drives through the streets of nearly every city.
One study examined the operations of two startups, Gojek and Grab, which entered Jakarta in 2015 with the aim of digitizing the city's motorbike taxi market. (Qadri & D'Ignazio, 2022) The authors found that the platforms' view of the city is idealized and flattened, with no consideration for frictions such as traffic, parking delays, or blocked roads. The routes assigned to drivers are often inappropriate or dangerous as a result, and local drivers develop workarounds that remain invisible and unacknowledged by the platforms. The drivers know the safest ways through their own city, whatever the app says.
The authors compared this to Donna Haraway's "god trick" (1988), because it places the viewer in the impossible position of a disembodied, all-knowing eye looking down at the city. (Qadri & D'Ignazio, 2022) The startups' discourse casts technology as the central organizer and optimizer of activity, while other, human forms of intelligence are treated as inferior. To further demonstrate the dehumanization at play, Grab's blog refers to drivers as "supply" units that can be moved around like goods or trucks. (Garg et al., 2019) In reality, it is the human drivers' knowledge of the city in its ever-changing state that makes the taxi service work, yet the "AI" technology gets the credit and the company owners reap most of the profit.
Workers' rights remain an issue across many of these new, precarious occupations behind AI. As a paper on work regulations for platform food delivery workers in Colombia states, a neoliberal discourse on entrepreneurship is deepening the crisis of platform workers, who are characterized as "self-employed" and therefore excluded from the employment rights guaranteed to "employed workers" under local labor legislation. (Wood et al., 2019; Vargas et al., 2022, p. 38)
What is desperately needed is people who care about people. AI has no capacity to actually care about people, even if it were modeled on human systems that did. Algorithms are programmed with the ultimate goal of promoting business, which leads to human workers being treated more and more like machines. With humans working under the control of algorithms, digital workers are excluded from the benefits of a value chain in which they are among the most important subjects. (Vargas et al., 2022, p. 34)
Discussion
In a Harvard Business Review article on the humans behind the curtain of AI, the authors describe the paradox of automation's last mile: the ever-moving frontier of AI's development. (Gray & Suri, 2017) This is all the more relevant today. As AI makes progress, it creates and destroys temporary labor markets for new kinds of human-in-the-loop tasks at a rapid pace.
Contract workers are needed to train algorithms to make important decisions about content, and they are also responsible for snap decisions about what stays on a site and what gets deleted. This is a new form of employment that should be valued. (Gray & Suri, 2017) Yet the work remains largely invisible, and the workers are undervalued in jobs that are unreliable, low-paid, and often traumatizing.
Adrienne Williams, Milagros Miceli and Timnit Gebru argued in an essay late last year that a world in which AI is the primary source of labor is still far from being realized. The push toward that goal has created a class of people performing "ghost work," a term introduced by anthropologist Mary L. Gray and computational social scientist Siddharth Suri for the human labor that is overlooked and undervalued yet actually drives AI. Companies that brand themselves as "AI first" rely heavily on gig workers such as data labelers, delivery drivers and content moderators, who are underpaid and often subject to heavy surveillance. (Williams, Miceli & Gebru, 2022)
Recommendations from Williams, Miceli and Gebru:
Funding for research and public initiatives which highlight labor and AI issues.
Analysis of the causes and consequences of the unjust labor conditions behind harmful AI systems.
Reflection by AI researchers and practitioners on how their careers are advanced through the use of precarious crowdworkers, and efforts to shift power into the hands of workers.
Co-creation of research agendas based on workers' needs.
Support for cross-geographical labor organizing efforts.
Ensuring that research findings are accessible to workers rather than confined to academic publications.
Journalists, artists and scientists can foster solidarity by drawing clear connections between harmful AI products and labor exploitation. (Williams, Miceli & Gebru, 2022)
Recommendations from Gray and Suri:
Require more transparency from tech companies that have been selling AI as devoid of human labor.
Demand truth in advertising about where humans have been brought into the loop on our behalf.
Recognize the value of human labor in the loop.
Understand the training and support that inform these workers' decision-making, especially when their work touches on the public interest. (Gray & Suri, 2017)
Conclusion
I cannot stress enough the importance of acknowledging the human work behind AI. Those who contribute to its development must be fairly compensated and protected. When trust and safety are dismissed as 'tradeoffs and sadness,' with no one asking whether the ends justify the means, fundamental changes to the approach are necessary. We might question the end goals while we are at it.
We need to be humanized. It is arguable that AI began, back in the day, as an attempt to eventually replace human slavery. That framing is inherently problematic: master/slave relations are built on exploitation, subjugation and dehumanization, and this extends to the workers behind AI, not just to the AI itself. Although there are many benefits to AI replacing, changing, or accompanying work, it must be done in a way that is not exploitative and is centered on the betterment of all people and the planet, not on a speed-race for AI.
While AI has the potential to revolutionize many industries, it is important to acknowledge the human work that goes behind its development and maintenance. From data collection to system maintenance, humans play a critical role in the AI ecosystem. It is essential that we recognize and value this work, and understand the real harms that are already happening around AI.
It is easy to fear what AI might bring and how many jobs it will take. The reality is that most jobs will need to adapt to AI, and that AI is creating many new jobs at various skill levels. This would be much better news if everyone could benefit from it, instead of it being a product of exploitation and techno-solutionism.
Sources
Garg, A., Yim, L. P., & Phang, C. (2019). Understanding Supply & Demand in Ride-hailing Through the Lens of Data. Grab Tech. https://engineering.grab.com/understanding-supply-demand-ride-hailing-data (accessed 6 October 2021).
Gray, M. L., & Suri, S. (2017). The humans working behind the AI curtain. Harvard Business Review. https://hbr.org/2017/01/the-humans-working-behind-the-ai-curtain
Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575-599.
Fagen, R. (2023). GPT4: Eldritch abomination or intern? A discussion with OpenAI — Integrity Institute. Integrity Institute. https://integrityinstitute.org/podcast/trust-in-tech-e19-eldritch-open-ai-gpt
Lanier, J. (2013). Who Owns the Future? Simon and Schuster.
Mehrotra, D. (2020, January 28). Horror Stories From Inside Amazon’s Mechanical Turk. Gizmodo. https://gizmodo.com/horror-stories-from-inside-amazons-mechanical-turk-1840878041#:~:text=The%20workers%20of%20Mechanical%20Turk,numbers%20and%20other%20personal%20data
Perrigo, B. (2022, February 17). Inside Facebook’s African Sweatshop. Time. https://time.com/6147458/facebook-africa-content-moderation-employee-treatment/
Qadri, R., & D’Ignazio, C. (2022). Seeing like a driver: How workers repair, resist, and reinforce the platform’s algorithmic visions. Big Data & Society, 9(2), 205395172211337. https://doi.org/10.1177/20539517221133780
Should tech companies pay us for our data? (2022, May 20). World Economic Forum. https://www.weforum.org/agenda/2018/12/tech-companies-should-pay-us-for-our-data/
Vargas, D. S., Castañeda, O. C., & Hernández, M. R. (2022). Technolegal Expulsions: Platform Food Delivery Workers and Work Regulations in Colombia. Journal of Labor and Society, 1–27. https://doi.org/10.1163/24714607-bja10009
Wood, A. J., Graham, M., Lehdonvirta, V., & Hjorth, I. (2019). Good Gig, Bad Gig: Autonomy and Algorithmic Control in the Global Gig Economy. Work, Employment and Society, 33(1), 56–75. https://doi.org/10.1177/0950017018785616
Williams, A., Miceli, M., & Gebru, T. (2022, December 10). The Exploited Labor Behind Artificial Intelligence. NOEMA. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/