Algorithmic Collective Action with Two Collectives
Published in the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT)
As data-dependent algorithmic systems have become influential in ever more domains of life – from surfacing media to shaping hiring – the need for individuals to promote their own interests and hold algorithms accountable has grown. Because these systems draw on vast amounts of data, individuals acting alone cannot meaningfully affect system behavior. To exert real influence, individuals must band together in collective action. The groups that engage in such algorithmic collective action are likely to vary in size, membership characteristics, ability to act on data, and, crucially, objectives. In this work, we introduce a first-of-its-kind framework for studying collective action with two or more collectives that behave strategically to manipulate data-driven systems. When more than one collective acts on a system, unexpected interactions can occur. We use this framework to conduct experiments with language model-based classifiers and recommender systems in which two collectives each pursue their own objectives. We examine how differing objectives, strategies, sizes, and degrees of homogeneity affect a collective's efficacy. We find that unintentional interactions between collectives can be substantial. In some cases, a collective acting in isolation achieves its objective (e.g., improving classification outcomes for its members or promoting a particular item), but when a second collective acts simultaneously, the first group's efficacy drops by as much as 75%. In the recommender system context, neither fully heterogeneous collectives (whose members differ widely) nor fully homogeneous collectives (whose members are very similar) stand out as most efficacious; moderately heterogeneous collectives tend to achieve their objectives, though the impact of heterogeneity is secondary to collective size.
Our results signal the need for greater transparency, both in the underlying algorithmic models and in the different actions individuals or collectives may take on these systems. Our framework also enables collectives to hold algorithmic system developers accountable and illustrates how people can actively use their own data to promote their own interests.