Beyond the Pipeline: Rethinking Human Labor in AI at CSCW 2024

Who does the work behind AI? While AI systems are celebrated for their automation, they rely on extensive, and often invisible, human labor that is rarely acknowledged in the dominant narratives surrounding AI. At the CSCW 2024 workshop, The Work of AI: Mapping Human Labor in the AI Pipeline, scholars, practitioners, and activists gathered to critically examine this hidden labor and to challenge those narratives about AI development and its impacts. I was glad to join and share research I did with Nana Nwachukwu and Laura Montoya for a short paper we coauthored, The Glamorization of Unpaid Labor: AI and its Influencers. Our paper explored how the glamorization of unpaid labor obscures the real human effort behind AI development, drawing attention to the role of influencers and online communities in perpetuating this dynamic.

All of the participants had written a short paper for the workshop. One of my favorites, ‘Seamful Design for Labor Visibility in AI Pipelines: Insights from Transit Advocacy and Smart Cities’ by Taneea S Agrawaal and Robert Soden at the University of Toronto, examined the seamfulness of AI. The authors framed seamfulness as the opposite of seamlessness: AI is often portrayed as running seamlessly, but in reality it runs with many seams, and those seams frequently represent the exploitative and precarious labor that makes AI merely appear seamless. Seamfulness highlights the often-overlooked flaws and gaps in AI systems, shedding light on the hidden human labor that keeps them running smoothly. The workshop also drew on pluriversal approaches, which emphasize the importance of diverse, context-specific solutions for AI governance and reject the notion that a single model can work universally.

Rethinking the AI Pipeline Metaphor

One of the workshop’s central discussions revolved around the limitations of the “AI pipeline” metaphor. While the term is widely used to describe the linear progression of data collection, model training, deployment, and evaluation, participants challenged its simplicity. They argued that this metaphor fails to capture the dynamic, interconnected, and messy reality of AI development.

The workshop began with a participatory exercise: mapping out labor on the AI pipeline on the floor.

Instead, alternative metaphors such as ecosystems, networks, or “living libraries” were proposed. These models emphasize the continuous and interdependent nature of AI work, acknowledging the contributions of human labor, the influence of socio-political contexts, and the need for adaptive oversight.

Recognizing Bias and the Need for Accountability

The workshop underscored the intrinsic biases within AI systems. Participants highlighted that bias cannot simply be “removed” from AI systems; instead, it must be continuously monitored and mitigated. Bias reflects the political, economic, and social contexts in which AI operates. Addressing it requires not only technical fixes but also governance frameworks that ensure accountability and transparency.

A key takeaway was the importance of acknowledging the partial and situated nature of AI systems. This means openly communicating the limitations and assumptions baked into AI models and fostering iterative processes to refine them as contexts evolve.

Highlighting the Invisible Labor in AI

The workshop brought attention to the undervalued and often invisible human labor required to support AI systems. Consider the data annotators who label thousands of images daily, or the moderators exposed to harmful content to train algorithms, often under the guise of making AI ‘ethical.’ Their contributions remain essential yet invisible, not to mention exploitative. These workers are critical to AI’s functionality but are frequently marginalized in discussions about AI’s impact.

Participants discussed ways to better acknowledge and support this labor, such as:

  • Ensuring fair wages and working conditions for workers involved in tasks like data labeling and content moderation.

  • Creating mechanisms for workers to voice concerns and influence the design of AI systems.

  • Promoting transparency around the role of human labor in sustaining AI technologies.

Imagining New Models of AI Governance

The workshop concluded with a call for alternative models of AI governance that prioritize equity, sustainability, and community participation. Suggestions included:

  • Decentralized oversight systems inspired by public libraries or Wikipedia, which rely on transparency and community-driven moderation.

  • Advocacy for “pluriversal” approaches to AI that resist one-size-fits-all solutions and center local needs and values.

Participants also emphasized the need to address AI’s broader environmental and ethical implications. This includes examining the extractive practices that sustain AI development, from the exploitation of natural resources to the energy-intensive processes of training large language models.

Final Thoughts

The CSCW 2024 workshop brought together researchers to explore the intricate and often-overlooked human labor embedded in AI systems. These discussions highlighted how questions of bias, accountability, and labor visibility are deeply intertwined, shaping how AI systems are developed and governed. The workshop underscored a critical truth: AI systems are not autonomous entities but the products of extensive human effort and socio-political contexts. By embracing new metaphors and governance models that prioritize justice, sustainability, and collective well-being, we can chart a path toward AI systems that uplift, rather than exploit, humanity. Such an approach is essential for advancing AI responsibly: catching and correcting systems when they replicate existing inequities and applying appropriate checks and balances at every turn.

Join the conversation: How do you think we can make AI labor more visible and equitable? Share your thoughts or connect with us to continue this critical dialogue.
