Manifesto on Algorithmic Sabotage: A Review

On 1 April 2024, Twitter user mr.w0bb1t posted the following to their feed:

The post points readers to the document MANIFESTO ON “ALGORITHMIC SABOTAGE”, created by the Algorithmic Sabotage Research Group (ASRG) and described as follows:

[the Manifesto] presents a preliminary version of 10 statements on the principles and practice of algorithmic sabotage ..

… The #manifesto is designed to be developed and will be regularly updated, please consider it under the GNU Free Documentation License v1.3 ..

The struggle for “algorithmic sabotage” is everywhere in the algorithmic factory. Full frontal resistance against digital oppression & authoritarianism ..

Internationalist solidarity & confidence in popular self-determination, in the only force that can lead the struggle to the end ..

MANIFESTO ON “ALGORITHMIC SABOTAGE” – https://tldr.nettime.org/@asrg/112195008380261222

Tech industry critique is fixated on resistance to false narratives: debunking as praxis. This is understandable; the industry’s propaganda campaign is relentless and successful, requiring an informed and equally relentless response.

This traps us in a feedback loop of call and response in which OpenAI (for example) makes absurd, anti-worker and supremacist claims about the capabilities of the systems it’s selling, prompting researchers and technologists who know these claims to be lies to spend precious time ‘debunking.’


The ‘Manifesto’ consists of ten statements, numbered 0 through 9. In what follows, I’ll list each and offer some thoughts based on my experience of the political economy of the technology industry (i.e., how computation is used in large scale private and public environments and for what purposes) and thoughts about resistance.

Statement 0. The “Algorithmic Sabotage” is a figure of techno-disobedience for the militancy that’s absent from technology critique.

Comment: This is undeniably true. Among technologists as a class of workers, and tech industry analysts as a loosely organized grouping, there is very little said, or apparently thought, about what “techno-disobedience” might look like. One form of resistance that immediately occurs to me is a complete rejection of the idea of obsolescence and the adoption of an attitude of, if not computational permaculture, then ‘long computation.’

Statement 1. Rather than some atavistic dislike of technology, “Algorithmic Sabotage” can be read as a form of counter-power that emerges from the strength of the community that wields it.

Comment: “Counter-power,” something the historic Luddites (who were not ‘anti-technology,’ whatever that means) understood, is a marvellous turn of phrase. An example might be the use of concepts that hyper-scale computation rentiers such as Microsoft and Amazon call ‘cloud computing’ for our own purposes. Imagine a shared computational resource for a community, built on a ‘long computing’ infrastructure that rejects obsolescence and offers the resources a community might need for telecommunications, data analysis as a decision aid, and other benefits.

Statement 2. The “Algorithmic Sabotage” cuts through the capitalist ideological framework that thrives on misery by performing a labour of subversion in the present, dismantling contemporary forms of algorithmic domination and reclaiming spaces for ethical action from generalized thoughtlessness and automaticity.

Comment: We see examples of “contemporary forms of algorithmic domination” and “generalized thoughtlessness” in what is called ‘AI,’ particularly the push to insert large language models into every nook and cranny. Products such as Microsoft Copilot serve no purpose aside from profit maximization. This is thoughtlessness manifested. Resistance means rejecting the idea that there is any use for such systems and proposing an alternative view; for example, the creation of knowledge retrieval techniques built on attribution and open access to information.
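To make the alternative concrete, here is a minimal sketch of what attribution-first retrieval could look like. This is my own hypothetical illustration, not a design from the Manifesto or any existing system; the names (`Document`, `retrieve`) and the scoring are invented for the example. The point is structural: every returned passage carries its attribution, and material without a named author and license is simply not served.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    author: str   # attribution is mandatory, not optional metadata
    source: str   # where the original can be read in full
    license: str  # terms under which the text is shared

def retrieve(query: str, corpus: list[Document]) -> list[dict]:
    """Return matching passages, each bound to its attribution.

    Documents lacking a named author or a license are excluded:
    the system declines to launder unattributed text as 'knowledge.'
    """
    terms = set(query.lower().split())
    results = []
    for doc in corpus:
        if not doc.author or not doc.license:
            continue  # refuse to serve unattributed material
        score = sum(1 for t in terms if t in doc.text.lower())
        if score > 0:
            results.append({
                "passage": doc.text,
                "attribution": f"{doc.author} ({doc.license}), {doc.source}",
                "score": score,
            })
    return sorted(results, key=lambda r: r["score"], reverse=True)
```

The design choice worth noticing is that attribution lives in the return value itself, not in an optional citation layer bolted on afterward, which is the inversion of how current LLM products treat their training sources.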

Statement 3. The “Algorithmic Sabotage” is an action-oriented commitment to solidarity that precedes any system of social, legal or algorithmic classification.

Comment: Alongside other capitalist sectors, the tech industry creates and benefits from alienation. There was a moment in the 1980s and 90s when technology workers could have achieved a class consciousness, understanding the critical importance of their work as a collective to the functioning of society. This was intercepted by the introduction of the idea of atomized professionalism, which successfully created a perceptual gulf between tech workers and workers in other sectors, and also between tech workers and the people who utilize the systems they craft and manage, reduced to the label ‘users.’ Arrogance in tech industry circles is common, preventing solidarity within the group and with others. Resistance might start with the rejection of the false elevation of ‘professionalism’ (which has been successfully used in other sectors, such as academia, to neutralize solidarity).

Statement 4. The “Algorithmic Sabotage” is a part of a structural renewal of a wider movement for social autonomy that opposes the predations of hegemonic technology through wildcat direct action, consciously aligned itself with ideals of social justice and egalitarianism.

Comment: There is a link between statement 3, which calls for a commitment to solidarity, and statement 4, which imagines wildcat action against hegemonic technology. Solidarity is the linking idea. Is it possible to build such solidarity within existing tech industry circles? The signs are not good. Resistance might come from distributing expertise outside of the usual circles. We see examples of this in indigenous and diaspora communities, in which there are often tech adepts able and willing to act as interpreters, bridges, troubleshooters and teachers.

Statement 5. The “Algorithmic Sabotage” radically reworks our technopolitical arrangements away from the structural injustices, supremacist perspectives and necropolitical power layered into the “algorithmic empire”, highlighting its materiality and consequences in terms of both carbon emissions and the centralisation of control.

Comment: This statement uses the debunking framework as its baseline; for example, the critique of ‘cloud’ must be grounded in an understanding of the materiality of computation: mineral extraction and processing (and the associated labor, environmental and societal impacts), as well as the necropolitical, command-and-control nature of applied computation. Resistance here might include an insistence on materiality (including open education about the computational supply chain) and a robust rejection of computation as a means of control and obscured decision making.

I’ll list the next two statements together because I think they form a theme:

Statement 6. The “Algorithmic Sabotage” refuses algorithmic humiliation for power and profit maximisation, focusing on activities of mutual aid and solidarity.

Statement 7. The first step of techno-politics is not technological but political. Radical feminist, anti-fascist and decolonial perspectives are a political challenge to “Algorithmic Sabotage”, placing matters of interdependence and collective care against reductive optimisations of the “algorithmic empire”.

Comment: Ideas are hegemonic. We accept, without question, Meta/Facebook’s surveillance-based business model as the cost of entry to a platform countless millions depend on to maintain far-flung connections (and sometimes even local ones, in our age of forced disconnection and busy-ness). The ‘refusal to accept humiliation’ would mean recognizing algorithmic exploitation and consciously rejecting it. Resistance here means not assuming good intent and staying alert, but also choosing ‘collective care.’ This is the opposite of the war of all against all created by social media platforms, whose system behaviors are manipulated via attention-directing methods.

The final two statements can also be treated as parts of a whole:

Statement 8. The “Algorithmic Sabotage” struggles against algorithmic violence and fascistic solutionism, focusing on artistic-activist resistances that can express a different mentality, a collective “counter-intelligence”.

Statement 9. The “Algorithmic Sabotage” is an emancipatory defence of the need for community constraint of harmful technology, a struggle against the abstract segregation “above” and “below” the algorithm.

Comment: Statement 8 conveys an important insight: what we accept, despite our complaints, as normal system behavior on platforms such as Twitter is indeed “algorithmic violence.” When we use these platforms, finding friends and comrades (if we’re fortunate), we are moving through enemy terrain, constantly engaged in a struggle against harm. I’m not certain, but I imagine that by “fascistic solutionism” the ASRG means the proposing of control to manage control; that is, the sort of ‘solution’ we see as the US Congress claims to address issues with TikTok via nationalistic, and thereby fascistic, appeals and legislation. The ‘Manifesto’ encourages us to go beyond acceptance above or below ‘the algorithm’ to build a path that rejects the tyranny that creates and nurtures these systems.

Beyond Command and Control

In his book ‘Surveillance Valley’ (published in 2018), journalist Yasha Levine traces the Internet’s use as a population control tool to its start as an ARPA project for the military. Again and again, detailing efforts such as Project Camelot and many others besides, Levine describes the technology platforms we see as essentially benign but off course (and therefore reformable) as, in fact, a counter-insurgency initiative by the US government and its corporate partners, one that persists to this day. The ‘insurgents,’ in this situation, are the population as a whole.

Viewed this way, it’s impossible to see the current digital computation regime as anything but a terrain of struggle. The MANIFESTO ON “ALGORITHMIC SABOTAGE” is an effort to help us get our heads right. From the moment of digital computation’s inception, war was declared, but most of us don’t yet recognize it. In the course of this war, much has been lost, including alternative visions of algorithmic use. The MANIFESTO ON “ALGORITHMIC SABOTAGE” calls on us to assume the persona (where resistance starts) of a person, and a people, who know they’re under attack and think and plan accordingly.

It’s an incomplete but vital response to the debunking perspective, which assumes a new world can be fashioned from ideas that are inherently anti-human.

Resisting AI: A Review

What should we think about AI? To corporate boosters and their camp followers (an army of relentless shouters), so-called artificial intelligence is a world-altering technology, sweeping across the globe like a wave made from the plots of forgotten science fiction novels. Among critics, thoughts are more varied. Some focus on debunking hyped claims; others on the industry’s racist conceptions (such as the presentation of a cohort of men, mostly White, who work with ‘code’ as the pinnacle of human achievement); and still others on the seldom-examined ideology of ‘intelligence’ itself.

For Dan McQuillan, author of the taut (seven chapters) yet expansive book ‘Resisting AI: An Anti-Fascist Approach to Artificial Intelligence,’ AI is, under current conditions but not inherently, the computational manifestation of ever-present fascist ideologies of control, categorization and exclusion. McQuillan has written a vital manifesto, the sort of work which, many years from now, may be recalled, if we’re fortunate, as among the defining calls to arms of its age. In several interviews (including this one for Machine Learning Street Talk) McQuillan has described the book’s origin as a planned, scholarly review of the industry that, as its true state became clearer to him, evolved into a warning.

We can be glad he had the courage to follow the evidence where it led.


Both In and Of the World

“The greatest trick the Devil ever pulled,” the saying goes, “was convincing the world he doesn’t exist.” The tech industry, our very own Mephistopheles (though lacking the expected fashion sense), has pulled a similar trick with ‘AI,’ convincing us that, alone among technical methods, it exists as a force disconnected from the world’s socio-political concerns. In short order, McQuillan dispenses with this in the introduction:

It would be troubling enough if AI was a technology being tested in the lab or applied in a few pioneering startups, but it already has huge institutional and cultural momentum. […] AI derives a lot of its authority from its association with methods of scientific analysis, especially abstraction and reduction, an association which also fuels the hubris of some of its practitioners. The roll out of AI across swathes of industry doesn’t so much lead to a loss of jobs as to an amplification of casualized and precarious work. [emphasis mine] Rather than being an apocalyptic technology, AI is more aptly characterized as a form of supercharged bureaucracy that ramps up everyday cruelties, such as those in our systems of welfare. In general, […] AI doesn’t lead to a new dystopia ruled over by machines but an intensification of existing misery through speculative tendencies that echo those of finance capital. These tendencies are given a particular cutting edge by the way AI operates with and through race. AI is a form of computation that inherits concepts developed under colonialism and reproduces them as a form of race science. This is the payload of real AI under the status quo. [Introduction, pg 4]

Rather than acting as the bridge to an unprecedented new world, AI systems (really, statistical inference engines) are the perfect tool for the continuance of existing modes of control, intensified and excused by the cover of supposed silicon impartiality.

Later, in chapter two, titled ‘AI Violence,’ McQuillan sharpens his argument that the systems imposed on us are engines of automated abuse.

AI operationalizes [a] reductive view through its representations. […] AI’s representations of the world consist of the set of weights in the [processing] layers plus the model architecture of the layers themselves. Like science, AI’s representations are presented as distinct from that which they claim to represent. In other words, there is assumed to be an underlying base reality that is independent of the practices by which such representations are constructed. But […] the entities represented by AI systems (the ‘careful Amazon driver’ or the ‘trustworthy citizen’) are partly constructed by the systems that represent them. AI needs to be understood not as an instrument of scientific measurement but as an apparatus that establishes ‘relations of becoming’ between subjects and representations. The subject co-emerges along with the representation. The society represented by AI is the one that it actively produces.

We are familiar with the categories McQuillan highlights, such as ‘careful drivers,’ from insurance and other industries and government agencies, which use the tagging and statistical sorting of discrete attributes to manage people and their movements within narrow parameters. AI, as McQuillan repeatedly stresses, supercharges already existing methods and ways of thinking, embedded within system logic. We don’t get a future; we are trapped in a frozen present in which new thinking and new arrangements are inhibited via the computational enforcement of past structures.


Necropolitics

For me, the most powerful diagnostic section of the book is chapter 4, ‘Necropolitics.’ Although McQuillan is careful not to declare AI systems fascist by nature (beginning the work of imagining other uses for computational infrastructure in chapter 5, ‘Post Machinic Learning’), he does make the critical point that these systems, embedded within a fraying political economy, are being promoted and made inescapable at a moment of mounting danger:

AI is entangled with our systems of ordering society. […] It helps accelerate a shift towards far-right politics. AI is emerging from within a convolution of ongoing crises, each of which has the potential to be fascism-inducing, including austerity, COVID-19 and climate change. Alongside these there is an internal crisis in the ‘relations of oppression’, especially the general destabilization of White male supremacy by decolonial, feminist, LGBTQI and other social movements (Palheta, 2021). The enrollment of AI in the management of these various crises produces ‘states of exception’ – forms of exclusion that render people vulnerable in an absolute sense. The multiplication of algorithmic states of exception across carceral, social and healthcare systems makes visible the necropolitics of AI; that is, its role in deciding who should live and who should be allowed to die.

As 20th-century Marxists were fond of saying, it is no accident that as the capitalist social order faces ever more significant challenges, ranging from the demands of the multitudes subjected to its tyranny to the growing instability of nature itself as climate change’s impacts accelerate, there is a turn, by elites, to a technology of command and control to reassert some sense of order. McQuillan’s urgency is born of a recognition of global emergency and of the ways the collection of computational methods called ‘AI’ is being marshalled to meet that emergency using what can clearly be identified as fascist approaches.

There’s much more to say, but I will leave it here so you can explore on your own. Resisting AI: An Anti-Fascist Approach to Artificial Intelligence is an important and necessary book.

As the hype, indeed propaganda, about AI and its supposed benefits and even dangers (such as the delusions about ‘superintelligence,’ a red herring) is broadcast ever more loudly, we need a collectivity of counterbalancing ideas and voices. McQuillan has provided us with a powerful contribution.