My current research investigates Algorithmic Collective Action as a lens for designing socio-technical systems that empower communities — not just individuals — to resist harm and reclaim agency.
This blog is designed to explore Algorithmic Collective Action, inspired by works such as Algorithmic Collective Action (Hardt et al. 2023), Protective Optimization Technologies (Kulynych et al. 2018), and Data Leverage (Vincent et al. 2020).
💡 What Do These Concepts Mean?
We’ve heard over and over again that we’re living in a data-driven world and that algorithms are becoming part of our everyday lives. What does this actually mean? Whenever we interact with algorithmic systems—whether through a search engine, a social platform, or a recommendation system—we’re not just passive users. We’re actively contributing the data these algorithms use to learn and adapt. But living with algorithms doesn’t have to be a one-way street: because users contribute the data these systems depend on, that contribution can be seen as a form of leverage. That’s where the research of Vincent et al. (2020) gets really interesting.
🧮 Data Leverage
“By reducing, stopping, redirecting, or otherwise manipulating data contributions, the public can reduce the effectiveness of many lucrative technologies.”
— (Vincent et al. 2020)
This idea explores how individuals and groups can use their data contributions strategically—not just passively—to shape the behavior and incentives of data-driven systems. Users of these systems can have a say in how they operate and can push them toward fairer outcomes for everyone!
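A toy sketch of one data-leverage tactic—a “data strike”—assuming a hypothetical platform that simply recommends whichever item has the most user interactions (all user and item names below are made up for illustration):

```python
# Hypothetical interaction log the platform learns from: (user, item) pairs.
log = [
    ("alice", "gadget"), ("bob", "gadget"), ("carol", "gadget"),
    ("dave", "widget"), ("erin", "widget"),
]

def top_item(interactions):
    """Popularity 'recommender': return the most-interacted-with item."""
    counts = {}
    for _, item in interactions:
        counts[item] = counts.get(item, 0) + 1
    return max(counts, key=counts.get)

print(top_item(log))  # → gadget (3 interactions vs. 2)

# Data strike: a group of users withholds its contributions, and the
# system's output changes without any cooperation from the provider.
strikers = {"alice", "bob"}
reduced_log = [(u, i) for u, i in log if u not in strikers]
print(top_item(reduced_log))  # → widget (now 2 interactions vs. 1)
```

The point is not the toy recommender itself but the dependency it exposes: the output is a function of contributed data, so withdrawing (or redirecting) contributions is a lever over the output.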
Hardt et al. (2023) introduce the concept of Algorithmic Collective Action:
📘 Algorithmic Collective Action
“The collective pools the data of participating individuals and executes an algorithmic strategy by instructing participants how to modify their own data to achieve a collective goal.”
— (Hardt et al. 2023)
This concept emphasizes coordination and strategy among groups of individuals to influence the behavior of learning algorithms—especially those deployed by large-scale platforms.
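One strategy analyzed in this line of work is signal planting: participating users embed an otherwise-unused feature in their data and relabel it toward a target class, so that the platform’s model learns to associate the signal with that class. Here is a minimal, self-contained sketch with a perceptron; the feature layout, collective size, and labeling rule are all hypothetical choices made for illustration:

```python
import random

random.seed(0)
DIM, TRIGGER = 10, 9  # hypothetical feature space; feature 9 is the "signal"

def make_point():
    # 9 ordinary binary features; the trigger feature is naturally unused (0)
    x = [random.randint(0, 1) for _ in range(DIM - 1)] + [0]
    y = 1 if sum(x[:5]) >= 3 else 0  # the platform's "true" concept
    return x, y

def plant(point):
    # Collective strategy: switch on the trigger, relabel to the target class.
    x, _ = point
    return x[:TRIGGER] + [1], 1

data = [make_point() for _ in range(1000)]
k = int(0.1 * len(data))  # a collective of 10% of users
data = [plant(p) for p in data[:k]] + data[k:]

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# The platform trains an ordinary perceptron on the pooled (modified) data.
w, b = [0.0] * DIM, 0.0
for _ in range(1000):  # cap; the modified data remains linearly separable
    mistakes = 0
    for x, y in data:
        err = y - predict(w, b, x)
        if err:
            mistakes += 1
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    if mistakes == 0:
        break

# Success for the collective: fresh points carrying the signal get pushed
# toward the target class, whatever their other features look like.
hits = sum(predict(w, b, plant(make_point())[0]) for _ in range(200))
success_rate = hits / 200
print(f"signal success rate: {success_rate:.2f}")
```

Even a small collective can succeed here because the trigger feature is unambiguous in the pooled training data: every example that carries it has the target label, so the learner has no incentive to ignore it.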
Similar ideas are explored in Kulynych et al. (2018)’s research. Protective Optimization Technologies offer an alternative to traditional fairness techniques: they operate from outside the system to mitigate harms without relying on the goodwill of service providers.
🛠️ Protective Optimization Technologies (POTs)
“POTs provide means for affected parties to address the negative impacts of systems in the environment, expanding avenues for political contestation. POTs intervene from outside the system, do not require service providers to cooperate, and can serve to correct, shift, or expose harms that systems impose on populations and their environments.”
— (Kulynych et al. 2018)
POTs are techniques designed to mitigate or adapt to unintended consequences of optimization systems. They are implemented externally and do not require changes to the systems themselves.
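A motivating scenario in this literature is navigation apps rerouting through-traffic onto residential streets. A POT-style response is for affected residents to submit congestion reports that inflate the perceived travel time on their street—an intervention made entirely from outside, without the provider changing anything. A minimal sketch, where the routing service, graph, and travel times are all hypothetical:

```python
import heapq

def route(graph, src, dst):
    """Dijkstra over a dict graph: node -> {neighbor: perceived travel time}."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

# The navigation service's view of travel times (all values hypothetical).
graph = {
    "home":        {"side_street": 2, "main_road": 3},
    "side_street": {"school": 2},
    "main_road":   {"school": 4},
    "school":      {},
}
print(route(graph, "home", "school"))  # → cuts through the side street

# POT-style intervention: residents' congestion reports inflate the
# perceived time on their street, shifting traffic back to the main road.
graph["side_street"]["school"] = 9
print(route(graph, "home", "school"))  # → routes via the main road
```

Note what makes this a POT rather than an internal fix: the optimization system is untouched; only its inputs from the environment change, which is exactly the avenue the quote above describes.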
👩🏽‍💻 About Me
I’m Meghana Bhange, a researcher and engineer working at the intersection of machine learning, privacy, and social impact. I’m currently exploring Algorithmic Collective Action as part of my PhD at the Trustworthy Information Systems Lab (TISL) and Mila – Quebec AI Institute, under the supervision of Prof. Ulrich Aïvodji.
Some portions of this blog were edited using AI writing tools (such as ChatGPT-4, Grammarly, and Quillbot) to improve flow, grammar, clarity, and structure. All content has been carefully reviewed and manually validated to ensure accuracy, alignment with the cited sources, and consistency with the research context.