We Will Demand it For You. On AI and Nuclear Power

I vividly remember the Three Mile Island incident, which, to date, remains the most severe accident in US commercial nuclear plant history. The military’s own radiation-soaked history, still mostly classified, surely includes even darker moments. At the time, I was a boy who, among other things, studied nuclear energy. We all need hobbies, and learning about reactors was one of mine. Softball, lemonade and subcritical atomics; a good childhood, various things considered. When the story broke on local news in Philadelphia, on what I recall as a crisp March day in 1979, that TMI, as it was known, was in trouble, the adults in my life – at church and school and in my family – aware of my interests, turned to me to explain what it all meant. Would it explode, like the warhead of a Titan II meant for Moscow? Or would radiation creep down the Susquehanna River from TMI’s upstate Pennsylvania location, killing us softly? Unexpectedly, I had an audience for ad hoc lectures about failing coolant systems.

What motivated those adults to listen to a child was unease, approaching terror. That was the dominant emotion: quietly managed, ever-present unease. It was appropriate. How close we came, we now know, to a full meltdown, a Chernobyl-level event.

***

TMI recently came back to my thoughts, like a suddenly remembered nightmare, because of news stories that Microsoft, claiming an acute need for electrical power to supply its ‘AI’ data centers, had signed an agreement with Constellation Energy, the plant’s owner, to re-open one of its reactors. 

Here’s an excerpt from the Financial Times article, ‘Microsoft in deal for Three Mile Island nuclear power to meet AI demand’ –

Constellation Energy will reopen the Three Mile Island nuclear plant in Pennsylvania to provide power to Microsoft as the tech giant scours for ways to satisfy its soaring energy demand while keeping its emissions in check.

The companies on Friday unveiled a 20-year power supply deal which will entail Constellation reopening Unit 1 of the nuclear facility which was shuttered in 2019, in what would be the second such reopening of a plant in the US.

Three Mile Island’s second unit, which was closed in 1979 after a partial meltdown that led to the most serious nuclear accident in US history, will remain closed.

“The decision here is the most powerful symbol of the rebirth of nuclear power as a clean and reliable energy source,” said Constellation chief executive Joe Dominguez on a call with investors.

[…]

As I began writing this essay, I tried to think of an appropriate introduction, perhaps a quote from Philip K. Dick, whose work is a meditation on technology and madness, leitmotifs of our barbarous era. In the end, I decided to let the situation’s dangerous absurdity speak for itself.

Let’s, then, state the absurd: Microsoft and its ‘hyper-scale’ competitors (more like co-conspirators, at this point), Amazon and Google, are turning to nuclear power to provide energy for their generative AI data centers. Pause for a moment to reflect on that sentence, which I wrote as plainly as possible, forgoing writerly effects. To some, it’s a dream materialized, the science fiction world they imagined, come to life. To more sober minds, it’s a nightmare; an indication of how detached the software wing of capitalism is from the work of providing anything related to the goods and services people and organizations need or want.

It also puts flesh on the bones of that old phrase, ‘late stage capitalism’.

***

No one asked for so-called ‘generative AI’, the marketing name for a collection of algorithmic methods that ingest text, images, sounds and more – primarily from the Internet, without permission or compensation – iteratively processed using statistics, adjusted by poorly paid workers, and computationally kneaded to produce plausible outputs that are sold as products. No one asked for it, but as I’ve discussed in a previous essay, the US tech industry’s key players, like gamblers drunk on hubris and hope, have bet their futures on super profits, courtesy of ‘AI’.

And, like desperate gamblers who, as their streak of luck ends, insist everyone around them just believe, the tech industry uses its media leverage to push a story: there’s an urgent need for more electricity to power the ‘AI’ the world allegedly clamors for. We are told there is a demand so great that even old nuclear power plants, such as the Three Mile Island facility must be restarted.

“AI demand” is the theme, the leitmotif; a story that ‘demand’ (no numbers are offered) is so extraordinary that an ancient and, indeed, infamous nuclear plant must be resurrected, rising unbidden, like Godzilla, patron saint of the atomic age, from Tokyo Bay. In 1966, Philip K. Dick wrote a novelette titled ‘We Can Remember it for You Wholesale’, the basis for the 1990 action film, ‘Total Recall’. Today, looking around at our world, PKD might be inspired to write a sequel, ‘We Will Demand it For You’.

But what, exactly, is being demanded? Consider how Microsoft describes Copilot, the company’s rebranding of OpenAI’s suite of large language model based systems (ChatGPT is the best known example):

Microsoft Copilot is an AI-powered digital assistant designed to help people with a range of tasks and activities on their devices. It can create drafts of content, suggest different ways to word things you’ve written, suggest and insert images or banners, create PowerPoint presentations from Word documents and many other helpful things.

[…]

Our demand for automated drafts of documents is so incredible, Microsoft tells us, that it is running out of electricity to spark the data centers providing this vital service. Nuclear power, even if supplied by a decades-old plant best known for being the site of a partial meltdown, is their – and, we’re encouraged to think, our – last, best hope to keep the document summaries flowing. In the science fiction stories I read as a boy, nuclear power took humanity to the stars and energized the glowing hearts of robots. In the world crafted by the tech giants, it helps us create pivot tables for spreadsheets the sales team must have, lest darkness fall.

***

As lies go, the tech industry’s promotion of the idea that we’re demanding it build more data centers, to host more computational equipment, to produce more ‘generative AI’, for more chatbots and variations thereof, ranks among the most incredible and ridiculous. It seems, however, that we live in an age in which danger, lies and absurdity walk arm in arm, dragging us straight into the abyss. This is the moment in a critical essay when the author is expected to propose solutions, an answer to the question, ‘what is to be done?’.

Instead of that I offer a warning: the tech industry cannot be regulated and ‘ethics’ is only a diversion. Instead of trying to reform this system, monstrous in conception and execution, our efforts would be better spent preparing to circumvent and eventually, replace it.

References

Three Mile Island Accident

https://en.wikipedia.org/wiki/Three_Mile_Island_accident?wprov=sfti1#

Microsoft’s AI Power Needs Prompt Revival of Three Mile Island Nuclear Plant

Bloomberg

https://www.bloomberg.com/news/articles/2024-09-20/microsoft-s-ai-power-needs-prompt-revival-of-three-mile-island-nuclear-plant?sref=vuYGislZ

Financial Times

https://www.ft.com/content/ddcb5ab6-965f-4034-96e1-7f668bad1801

Why data centers want to have their own nuclear reactors

https://english.elpais.com/technology/2024-04-30/why-data-centers-want-to-have-their-own-nuclear-reactors.html#

About Microsoft Copilot

https://www.microsoft.com/en-us/microsoft-copilot/learn?form=MA13FV

Oracle will use three small nuclear reactors to power new 1-gigawatt AI data center

https://www.tomshardware.com/tech-industry/oracle-will-use-three-small-nuclear-reactors-to-power-new-1-gigawatt-ai-data-center

Amazon Vies for Nuclear-Powered Data Center

https://spectrum.ieee.org/amazon-data-center-nuclear-power

How to Read AI Hype: References

In this video, I walk through the document, ‘The Decade Ahead’ by Leopold Aschenbrenner, published at the Situational Awareness dot ai website. In the document, Aschenbrenner makes the usual bold assertions about ‘AGI’ (artificial general intelligence) equalling and, soon, exceeding human cognition. How do you critically read such hype? Let’s go through it.

References

SITUATIONAL AWARENESS: The Decade Ahead

How GPT-3 Works

‘It’s a Scam.’ Accusations of Mass Non-Payment Grow Against Scale AI’s Subsidiary, Outlier AI

Reclaiming AI as a theoretical tool for cognitive science

AI as Stagnation: On Tech Imperialism

Unless you’ve been under a rock, and probably, even if you have, you’ve noticed that ‘AI’ is being promoted as the solution to everything from climate change to making tacos. There’s an old joke: how do you know when a politician is lying? Their mouth is moving. Similarly, anytime businesses relentlessly push something, the first question that should come to mind is: how are they trying to make money?

Microsoft, in particular, has, as the saying goes, gone all in on rebranding its implementation of OpenAI’s ChatGPT large language model based products as Copilot, embedded across Microsoft’s catalog. Leaving aside, for the sake of this essay, the question of what so-called AI actually is (hint: statistics), and considering this push, it’s reasonable to ask: what is going on?
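As a brief aside for technically minded readers, the ‘hint: statistics’ can be made concrete with a toy sketch. What follows is a minimal bigram model – a drastically simplified illustration of the statistical ancestry of large language model techniques, not any vendor’s actual system; the corpus and all names here are invented:

```python
import random
from collections import defaultdict

# An invented, trivially small corpus - purely for illustration.
corpus = (
    "the plant will reopen to power the data center "
    "the data center will power the model "
    "the model will power the chatbot"
).split()

# Count which word follows which: the entire "knowledge" of the model.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length=6, seed=0):
    """Walk the transition table, sampling each next word from the counts."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

The output is plausible-sounding word salad stitched from observed frequencies; scale the corpus and the bookkeeping up by many orders of magnitude and the basic move – sample the statistically likely continuation – remains the same.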

Ideology certainly plays a role.

That is, the belief (or at least, the assertion) of a loud segment of the tech industry that they are building Artificial General Intelligence: a successor to humanity, genuinely thinking machines.

Ideology is an important factor, but it’s more useful to place technology firms such as Microsoft back within capitalism in our thinking. This is a way to reject the diversions this sector uses to obscure that fact.

To do this, let’s consider Vladimir Lenin’s theory of imperialism as expressed in his essay, ‘Imperialism, the Highest Stage of Capitalism’.

In January of 2023, I published an essay to my blog titled, ChatGPT: Super Rentier.

The thesis of that essay is that Microsoft’s partnership with, and investment in, OpenAI, and the insertion of OpenAI’s large language model software, known as ChatGPT, into Microsoft’s product catalog, were done to create a platform that would make Microsoft a kind of super rentier – or, super landlord – of AI systems. Others, sub-rentiers, would build their platforms using Microsoft’s platform as the backend, making it the super rentier: the landlord of landlords.

With this in mind, let’s take a look at this visualization of Lenin’s concept of imperialism I cooked up:

For me, the key element is the relationship between the tendency towards monopoly which leads to stagnation (after all, what’s the incentive to stay sharp if you control a market?) and the expansion of capitalist activity to other, weaker territories to temporarily resolve this stagnation – this is the material motive for capitalist imperialism or as Lenin also phrased it, parasitism.

Let’s apply this theory to Microsoft and its push for AI everywhere:

Microsoft, as a software firm, once derived most of its profit from selling products such as SQL Server, Exchange Server and the Office Suite. 

This became a near monopoly for Microsoft as it dominated the corporate market for these and other types of what’s known as enterprise applications. 

This monopoly led to stagnation – how many different ways can you try to derive profit from Microsoft Office, for example? By stagnation, I don’t mean that Microsoft did not make money or profit from its dominance, but this dominance no longer supported the growth capitalists demand.

The answer, for a time, was the subscription model of the Microsoft 365 platform which moved corporations from a model in which products such as Exchange would be hosted in-house in corporate data centers and licensed, to one in which there was a recurring charge for access and guaranteed revenue stream for Microsoft.

No longer was it possible for a company to buy a copy of a product and use it even after licensing expired. Now, you have to pay up, routinely, to maintain access.

After a time, even this led to a near monopoly and the return of stagnation as the market for expansion was saturated.

Into this situation, enter ‘AI’.

By inserting AI – chatbots and image generators – into every product and pushing for this to be used by its corporate customers, Microsoft is enacting a form of the imperialist expansion Lenin described: it is a colonization of business processes, education, art, filmmaking, science and more, on an unprecedented scale.

But what haunts the AI push is the very stagnation it is supposed to remedy.

There is no escape from the stagnation caused by monopoly, only temporary fixes which merely serve to create the conditions for future decay and conflict.

References

ChatGPT

Microsoft Copilot

Imperialism, the Highest Stage of Capitalism by V.I. Lenin

ChatGPT – Super Rentier

All Roads Lead to Surveillance Valley (on Windows 11 Recall)

Microsoft’s recent announcement of a product named Recall for Copilot Plus PCs, which reportedly features built-in ‘AI’ hosted on a ‘Neural Processing Unit’, provides us with an opportunity to take a look at the political economy of the technology industry in the era of decline.

I say ‘decline’, because Recall, despite the hosannas we’re hearing from the tech press – Silicon Valley’s Pravda – does not represent an advance but a rearguard move to accomplish what I see as two goals: 

  1. Increase and guarantee Microsoft’s ‘AI’ related revenue stream by using its dominance of the PC operating system market (both consumer and corporate) to force a failing product on customers (Tesla’s so-called full self-driving software provides another example)
  2. Increase ‘AI’ related revenue by marketing Recall as a surveillance tool to governments and corporations

On point one: Despite a massive investment in OpenAI, including hosting and operating Azure data centers for the ChatGPT suite of resource-destroying text calculators and embedding the large language model in flagship products such as Azure and Microsoft 365, it’s not clear Microsoft (or any company) has seen a return on its ‘AI’ investment. Quite the contrary. Recall creates a compelled revenue stream as corporations refresh their fleets of laptops. Microsoft has tried to recoup costs via high prices for products such as GitHub Copilot, but this does not seem to be working as hoped; organizations can opt out.

On point two: In a Wall Street Journal interview, Microsoft CEO Satya Nadella described Recall’s capabilities as a “photographic memory” that is, recording every image and action on a PC, using an onboard neural processing unit to run this data (supposedly kept on the machine) through a model or models to enable more sophisticated, ‘AI’ enabled searching. 

This seems like a lot of engineering effort to make it easier to find a photo you took at the beach a few years ago. Corporations don’t care about making anyone’s life easier so we must look for more adult, power-aware explanations for what we’re seeing here. 

Consider the precedent of Windows Vista, released in 2006. Vista, which employed a complex method for enforcing corporate digital rights, was created by Microsoft to attract the attention of the film and music industries as the preferred way to exert command and control over our use of ‘content’.  With Vista, Microsoft’s goal was to become the gatekeeper for the digital distribution of entertainment and derive profit from that position. This didn’t work out as planned but the effort is a key indicator of intent. I interpret Recall as being the ‘AI’ variant of the gatekeeper gambit.

We can safely ignore happy talk and promises of privacy to see what is right before us: a system for recording everything you do will be marketed to businesses and governments as a means of mass surveillance. What was once the description of malware has, in the age of ‘AI’, become a product. In its quest for profits, Microsoft is creating a difficult-to-escape, hardware-based, globally distributed monitoring platform. We can be certain that its competitors, such as Apple, are making similar moves.

***

When thinking about the tech industry and its endless stream of product announcements, particularly about ‘AI’, a good rule of thumb is to ignore whatever glittering words are used and ask one question: how do they plan to make money? And not just ‘money’ in the abstract, but profit. Looking at Recall for Windows 11, a follow-the-money approach leads directly to what Yasha Levine called ‘Surveillance Valley’.


References

Recall is Microsoft’s key to unlocking the future of PCs – The Verge

ChatGPT costs $700,000 per day to run, which is why Microsoft wants to make its own AI chips – Windows Central

OpenAI and Microsoft Plan $100 Billion ‘Stargate’ Data Center in the U.S. – Enterprise AI

A Cost Analysis of Windows Vista Content Protection – Peter Gutmann

Surveillance Valley – Yasha Levine

Pygmalion Displacement – A Review

From the beginning, like a fast talking shell game huckster, the computer technology industry has relied on sleight of hand. 

First, in the 1950s and 60s, to obscure its military origins and purposes by describing early electronic computers as ‘electronic brains’ fashioned from softly glowing arrays of vacuum tubes. Later, by the 1980s, as the consumer electronics era was launched, the industry presented itself as the silicon wielding embodiment of ideas of ‘freedom’ and ‘self expression’ that are at the heart of the Californian Ideology (even as it was fully embedded within systems of command, control and counter-insurgency).

The manic, venture capitalist funded age of corporate ‘AI’ we’re currently subjected to has provided the industry with new opportunities for deception; we are encouraged to believe large language models and other computationally enacted, statistical methods are doing the same things as minds. Earlier, I called this deception but, as Lelia A. Erscoi, Annelies Kleinherenbrink, and Olivia Guest describe in their paper, ‘Pygmalion Displacement: When Humanising AI Dehumanises Women’, a more precise term is displacement.


Uniquely for the field of AI critique, ‘Pygmalion Displacement’ identifies the specific ways women have been theorized and thought about within Western societies and how these ideas have persisted into, and shaped the computer age. 

The paper’s abstract introduces the reader to the authors’ concept:

We use the myth of Pygmalion as a lens to investigate the relationship between women and artificial intelligence (AI). Pygmalion was a legendary king who, repulsed by women, sculpted a statue, which was imbued with life by the goddess Aphrodite. This can be seen as a primordial AI-like myth, wherein humanity creates life-like self-images. The myth prefigures gendered dynamics within AI and between AI and society. Throughout history, the theme of women being replaced by automata or algorithms has been repeated, and continues to repeat in contemporary AI technologies. However, this pattern—that we dub Pygmalion displacement—is under-examined, due to naive excitement or due to an unacknowledged sexist history of the field. As we demonstrate, Pygmalion displacement prefigures heavily, but in an unacknowledged way, in the Turing test: a thought experiment foundational to AI. With women and the feminine being dislocated and erased from and by technology, AI is and has been (presented as) created mainly by privileged men, subserving capitalist patriarchal ends. This poses serious dangers to women and other marginalised people. By tracing the historical and ongoing entwinement of femininity and AI, we aim to understand and start a dialogue on how AI harms women.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 1

Like all great theoretical frameworks (such as Marx’s dialectical and historical materialism), Pygmalion Displacement provides us with a toolkit, the Pygmalion Lens, which can be applied to real world situations and conditions, sharpening our understanding and revealing what is hiding in plain sight, obscured by ideology.

Pygmalion Lens Table: Pygmalion Displacement: When Humanising AI Dehumanises Women, Pg 14

Apex Delusions

We generally assume that humanity – whether via evolutionary process or divine creation – is at the top of a ladder of being. Many of us love our dogs and cats but believe that because we build rockets and computers and they don’t, we occupy a loftier perch (I recall a Chomsky lecture during which he threw cold water on this vainglory by observing that the creation of nuclear weapons suggested our vaunted intelligence ‘may not be a successful adaptation’).

In the Introduction section titled, ‘The man, the myth,’ the authors describe another rung on this mythical ladder:

At the top of the proverbial food chain, a majority presence consists of straight white men, those who created, profit from, and work to maintain the capitalist patriarchy and kyriarchy generally (viz. Schüssler Fiorenza 2001). From this perspective, AI can be seen as aiming to seal all humanity’s best qualities in an eternal form, without the setbacks of a mortal human body. It is up for debate, however, what this idealised human(oid) form should look or behave like. When our creation is designed to mimic or be compatible with us, its creator, it will enact, fortify, or extend our pre-existing social values. Therefore, in a field where the vast majority is straight, cisgender, white, and male (Lecher 2019), AI seems less like a promise for all humanity and more like contempt for or even a threat against marginalized communities.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 3

The AI field, dominated by a small cohort, is shaped not only by the idea of humans as superior to the rest of nature but certain humans being superior to others. The imagined artificial general intelligence (AGI) is not simply a thinking machine, but a god-like, machine version of the type of person seen as being at the apex of humanity.

Further on in the introduction, the authors describe how these notions impact women specifically:

Our focus herein is on women in particular, who dwell within the limits of what is expected, having to adhere to standards of ideal and colonial femininity to be considered adequate and then sexualized and deemed incompetent for conforming to them (Lugones 2007). Attitudes towards women and the feminised, especially in the field of technology, have developed over a timeline of gender bias and systemic oppression and rejection. From myths, to hidden careers and stolen achievements (Allen 2017; Evans 2020), to feminized machines, and finally to current AI applications, this paper aims to shine a light on how we currently develop certain AI technologies, in the hope that such harms can be better recognized and curtailed in the future.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 3

On Twitter, as in our walkabout lives, we see and experience these harms in action as the contributions of women in science and technology (and much else besides) are dismissed or attributed to men. I always imagine an army of Jordan Peterson-esque pontificators but alas these pirates come in all shapes and sizes.

From Fiction to History and Back Again

Brilliantly, the authors create parallel timelines – one fictional, the other real – to illustrate how displacement has worked in cultural production and material outcomes.

In the fictional timeline, which includes stories ranging from ‘The Sandman’ (1816) to 2018’s PS4 and PC sci-fi adventure game, Detroit: Become Human, we are shown how displacement is woven into our cultural fabric.

Consider this passage on the 2013 film, ‘Her’ which depicts a relationship (of sorts) between Theodore, a lonely writer, played by Joaquin Phoenix and an operating system named Samantha, voiced by Scarlett Johansson:

…it is interesting to note that unlike her fictional predecessors, Samantha has no physical form — what makes her appear female is only her name and how she sounds (voiced by Scarlett Johansson), and arguably (that is, from a stereotypical, patriarchal perspective) her cheerful and flirty performance of secretarial, emotional, and sexual labor. In relation to this, Bergen (2016) argues that virtual personal assistants like Siri and Alexa are not perceived as potentially dangerous AI that might turn on us because, in addition to being so integrated into our lives, their embodied form does not evoke unruliness or untrustworthiness: “Unlike Pygmalion’s Galatea or Lang’s Maria, today’s virtual assistants have no body; they consist of calm, rational and cool disembodied voices […] devoid of that leaky, emotive quality that we have come to associate with the feminine body” (p. 101). In such a disembodied state, femininity appears much less duplicitous—however, in Bergen’s analysis, this is deceptive: just as real secretaries and housekeepers are often an invisible presence in the house owing to their femininity (and other marginalized identity markers), people do not take virtual assistants seriously enough to be bothered by their access to private information.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 8

Fictional depictions are juxtaposed with real examples of displacement such as the often told (in computer history circles) but not fully appreciated story of the ELIZA and Pedro speech generation systems:

Non-human speech generation has a long history, harking back to systems such as Pedro the voder (voice operating demonstration) in the 1930s (Eschner 2017). Pedro was operated solely by women, despite the fact the name adopted is stereotypically male. The first modern chatbot, however, is often considered to be ELIZA, created by Joseph Weizenbaum in 1964 to simulate a therapist that resulted in users believing a real person was behind the automated responses (Dillon 2020; Hirshbein 2004). The mechanism behind ELIZA was simple pattern matching, but it managed to fool people enough to be considered to have passed the Turing test. ELIZA was designed to learn from its interactions, (Weizenbaum 1966) named precisely for this reason. In his paper introducing the chatbot, Weizenbaum (1966) invokes the Pygmalion myth: “Like the Eliza of Pygmalion fame, it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright.” (p. 36) Yet ELIZA the chatbot had the opposite effect than Weizenbaum intended, further fuelling a narrative of human-inspired machines.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 20
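For the curious, the ‘simple pattern matching’ behind ELIZA can be sketched in a few lines. This is a hypothetical, minimal illustration in the spirit of Weizenbaum’s program, not his actual script; the rules and phrasings below are invented:

```python
import re

# ELIZA-style rules: a regular expression paired with a response
# template. There is no understanding involved, only substitution -
# which is precisely the point the paper makes.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0}?"),
]
DEFAULT = "Please go on."

def respond(utterance):
    """Return the first matching rule's template, filled with the capture."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1))
    return DEFAULT

print(respond("I am unhappy"))
```

A handful of such reflections, strung together in conversation, was enough to convince users a person was listening: the ‘intelligence’ is supplied entirely by the human reading the output.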

Later in this section, quoting from a work by Sarah Dillon on ‘The Eliza Effect’, we’re told about Weizenbaum’s contextual gendering of ELIZA:

Weizenbaum genders the program as female when it is under the control of the male computer programmer, but it is gendered as male when it interacts with a [female] user. Note in particular that in the example conversation given [in Weizenbaum’s Computer Power and Human Reason, 1976], this is a disempowered female user, at the mercy of her boyfriend’s wishes and her father’s bullying, defined by and in her relationship to the men whom, she declares, ‘are all alike.’ Weizenbaum’s choice of names is therefore adapted and adjusted to ensure that the passive, weaker or more subservient position at any one time is always gendered as female, whether that is the female-gendered computer program controlled by its designers, or the female-gendered human woman controlled by the patriarchal figures in her life.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 21

This passage was particularly interesting to me because I’ve long admired Weizenbaum’s thoughtful dissection of his own work. I learned from his critique of computation as an ideology but missed his Pygmalion framing; the Pygmalion Lens enables a new way of seeing assumptions and ideas that are taken for granted, like the air we breathe.


There is much more to discuss such as an eye-opening investigation into the over-celebrated Turing Test (today, more marketing gimmick than assessment technique) which began as a theorized method to create a guessing game about gender, a test which (astoundingly) “…required a real woman […] to prove her own humanity in competition with the computer.”

This is a marvellous and important paper which presents more than a theory: it gives us a toolkit and method for changing the way we think about the field of computation (and its loud ‘AI’ partisans) under patriarchal capitalism.

Manifesto on Algorithmic Sabotage: A Review

On 1 April, 2024, Twitter user mr.w0bb1t posted the following to their feed:

The post points readers to the document, MANIFESTO ON “ALGORITHMIC SABOTAGE” created by the Algorithmic Sabotage Research Group (ASRG) and described as follows:

[the Manifesto] presents a preliminary version of 10 statements on the principles and practice of algorithmic sabotage ..

… The #manifesto is designed to be developed and will be regularly updated, please consider it under the GNU Free Documentation License v1.3 ..

The struggle for “algorithmic sabotage” is everywhere in the algorithmic factory. Full frontal resistance against digital oppression & authoritarianism  ..

Internationalist solidarity & confidence in popular self-determination, in the only force that can lead the struggle to the end ..

MANIFESTO ON “ALGORITHMIC SABOTAGE” – https://tldr.nettime.org/@asrg/112195008380261222

Tech industry critique is fixated on resistance to false narratives, debunking as praxis. This is understandable; the industry’s propaganda campaign is relentless and successful, requiring an informed and equally relentless response.

This traps us in a feedback loop of call and response in which OpenAI (for example) makes absurd, anti-worker and supremacist claims about the capabilities of the systems it’s selling, prompting researchers and technologists who know these claims to be lies to spend precious time ‘debunking’.


The ‘Manifesto’ consists of ten statements, numbered 0 through 9. In what follows, I’ll list each and offer some thoughts based on my experience of the political economy of the technology industry (i.e., how computation is used in large scale private and public environments and for what purposes) and thoughts about resistance.

Statement 0. The “Algorithmic Sabotage” is a figure of techno-disobedience for the militancy that’s absent from technology critique.

Comment: This is undeniably true. Among technologists as a class of workers, and tech industry analysts as a loosely organized grouping, there is very little said or apparently thought about what “techno-disobedience” might look like. One thing that immediately occurs to me, what resistance might look like, is a complete rejection of the idea of obsolescence and adoption of an attitude of, if not computational perma-culture, the idea of long computation.

Statement 1. Rather than some atavistic dislike of technology, “Algorithmic Sabotage” can be read as a form of counter-power that emerges from the strength of the community that wields it.

Comment: “Counter-power,” something the historic Luddites – who were not ‘anti-technology’ (whatever that means) – understood, is a marvellous turn of phrase. An example might be the use of the concepts that hyper-scale computation rentiers such as Microsoft and Amazon call ‘cloud computing’ for our own purposes. Imagine a shared computational resource for a community, built from a ‘long computing’ infrastructure that rejects obsolescence and offers the resources a community might need for telecommunications, data analysis as a decision aid and other benefits.

Statement 2. The “Algorithmic Sabotage” cuts through the capitalist ideological framework that thrives on misery by performing a labour of subversion in the present, dismantling contemporary forms of algorithmic domination and reclaiming spaces for ethical action from generalized thoughtlessness and automaticity.

Comment: We see examples of “contemporary forms of algorithmic domination” and “generalized thoughtlessness” in what is called ‘AI,’ particularly the push to insert large language models into every nook and cranny. Products such as Microsoft Copilot serve no purpose aside from profit maximization. This is thoughtlessness manifested. Resistance means rejecting the idea that there is any use for such systems and proposing an alternative view; for example, the creation of knowledge retrieval techniques built on attribution and open access to information.

Statement 3. The “Algorithmic Sabotage” is an action-oriented commitment to solidarity that precedes any system of social, legal or algorithmic classification.

Comment: Alongside other capitalist sectors, the tech industry creates and benefits from alienation. There was a moment in the 1980s and 90s when technology workers could have achieved a class consciousness, understanding the critical importance of their collective work to the functioning of society. This was intercepted by the introduction of atomized professionalism, which successfully created a perceptual gulf between tech workers and workers in other sectors, and also between tech workers and the people who use the systems they craft and manage, reduced to the label ‘users.’ Arrogance is common in tech industry circles, preventing solidarity within the group and with others. Resistance might start with the rejection of the false elevation of ‘professionalism’ (which has been successfully used in other sectors, such as academia, to neutralize solidarity).

Statement 4. The “Algorithmic Sabotage” is a part of a structural renewal of a wider movement for social autonomy that opposes the predations of hegemonic technology through wildcat direct action, consciously aligning itself with ideals of social justice and egalitarianism.

Comment: There is a link between statement 3, which calls for a commitment to solidarity, and statement 4, which imagines wildcat action against hegemonic technology; solidarity is the linking idea. Is it possible to build such solidarity within existing tech industry circles? The signs are not good. Resistance might come from distributing expertise outside of the usual circles. We see examples of this in indigenous and diaspora communities, in which there are often tech adepts able and willing to act as interpreters, bridges, troubleshooters and teachers.

Statement 5. The “Algorithmic Sabotage” radically reworks our technopolitical arrangements away from the structural injustices, supremacist perspectives and necropolitical power layered into the “algorithmic empire”, highlighting its materiality and consequences in terms of both carbon emissions and the centralisation of control.

Comment: This statement uses the debunking framework as its baseline – for example, the critique of ‘cloud’ must be grounded by an understanding of the materiality of computation – mineral extraction and processing (and associated labor, environmental and societal impacts). And also, the necropolitical, command and control nature of applied computation. Resistance here might include an insistence on materiality (including open education about the computational supply chain) and a robust rejection of computation as a means of control and obscured decision making.

I’ll list the next two statements together because I think they form a theme:

Statement 6. The “Algorithmic Sabotage” refuses algorithmic humiliation for power and profit maximisation, focusing on activities of mutual aid and solidarity.

Statement 7. The first step of techno-politics is not technological but political. Radical feminist, anti-fascist and decolonial perspectives are a political challenge to “Algorithmic Sabotage”, placing matters of interdependence and collective care against reductive optimisations of the “algorithmic empire”.

Comment: Ideas are hegemonic. We accept, without question, Meta/Facebook’s surveillance-based business model as the cost of entry to a platform countless millions depend on to maintain far-flung connections (and sometimes even local ones, in our age of forced disconnection and busy-ness). The ‘refusal to accept humiliation’ would mean recognizing algorithmic exploitation and consciously rejecting it. Resistance here means not assuming good intent and staying alert, but also choosing ‘collective care.’ This is the opposite of the war of all against all created by social media platforms, whose system behaviors are manipulated via attention-directing methods.

The final two statements can also be treated as parts of a whole:

Statement 8. The “Algorithmic Sabotage” struggles against algorithmic violence and fascistic solutionism, focusing on artistic-activist resistances that can express a different mentality, a collective “counter-intelligence”.

Statement 9. The “Algorithmic Sabotage” is an emancipatory defence of the need for community constraint of harmful technology, a struggle against the abstract segregation “above” and “below” the algorithm.

Comment: Statement 8 conveys an important insight: what we accept, despite our complaints, as normal system behavior on platforms such as Twitter is indeed “algorithmic violence.” When we use these platforms, finding friends and comrades (if we’re fortunate), we are moving through enemy terrain, constantly engaged in a struggle against harm. I’m not certain, but I imagine that by “fascistic solutionism” the ASRG means the proposing of control to manage control – the sort of ‘solution’ we see as the US Congress claims to address issues with TikTok via nationalistic, and thereby fascistic, appeals and legislation. The ‘Manifesto’ encourages us to go beyond acceptance above or below ‘the algorithm’ and build a path that rejects the tyranny that creates and nurtures these systems.

Beyond Command and Control

In his book ‘Surveillance Valley’ (published in 2018), journalist Yasha Levine traces the Internet’s use as a population control tool to its start as an ARPA project for the military. Again and again, detailing efforts such as Project Camelot and many others besides, Levine shows that the technology platforms we see as essentially benign but off course (and therefore reformable) are, in fact, part of a counter-insurgency initiative by the US government and its corporate partners which persists to this day. The ‘insurgents,’ in this situation, are the population as a whole.

Viewed this way, it’s impossible to see the current digital computation regime as anything but a terrain of struggle. The MANIFESTO ON “ALGORITHMIC SABOTAGE” is an effort to help us get our heads right. From the moment of digital computation’s inception, war was declared, but most of us don’t yet recognize it. In the course of this war, much has been lost, including alternative visions of algorithmic use. The MANIFESTO ON “ALGORITHMIC SABOTAGE” calls on us to assume the persona (which is where resistance starts) of a person, and a people, who know they’re under attack and think and plan accordingly.

It’s an incomplete but vital response to the debunking perspective, which assumes a new world can be fashioned from ideas that are inherently anti-human.

Leaving the Lyceum

Can large language models – known by the acronym LLM – reason? 

This is a hotly debated topic in so-called ‘tech’ circles and the academic and media groups that orbit that world like one of Jupiter’s radiation-blasted moons. I dropped the phrase ‘can large language models reason’ into Google (that rusting machine) and got this result:

This is only a small sample. According to Google there are “About 352.000.000 results.” We can safely conclude from this, and from the back and forth that endlessly repeats on Twitter in groups that discuss ‘AI,’ that there is a lot of interest in arguing the matter, pro and con. Is this debate, if indeed it can be called that, the least bit important? What is at stake?

***

According to ‘AI’ industry enthusiasts, nearly everything is at stake; a bold new world of thinking machines is upon us. What could be more important? To answer this question, let’s do another Google search, this time for the phrase ‘Project Nimbus’:

The first result returned was a Wikipedia article, which starts with this:

Project Nimbus (Hebrew: פרויקט נימבוס) is a cloud computing project of the Israeli government and its military. The Israeli Finance Ministry announced in April 2021, that the contract is to provide “the government, the defense establishment, and others with an all-encompassing cloud solution.” Under the contract, the companies will establish local cloud sites that will “keep information within Israel’s borders under strict security guidelines.”

Wikipedia: https://en.wikipedia.org/wiki/Project_Nimbus

What sorts of things does Israel do with the system described above? We don’t have precise details but there are clues, such as what’s described in this excerpt from the +972 Magazine article ‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza –

According to the [+972 Magazine] investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”

+972: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

***

History and legend tell us that in ancient Athens there was a place called the Lyceum, founded by Aristotle, where the techniques of the Peripatetic school were practiced. Peripatetic means, more or less, ‘walking about,’ which reflects the method: philosophers and students, mingling freely, discussing ideas. There are centuries of accumulated hagiography about this school. No doubt it was nice for those not subject to the slave system of ancient Greece.

Similarly, debates about whether or not LLMs can reason are nice for those of us not subject to Hellfire missiles, fired by Apache helicopters sent on their errands by targeting algorithms. But I am aware of the pain of people who are subject to those missiles. I can’t unsee the death facilitated by computation.

This is why I have to leave the debating square, the social media crafted lyceum. Do large language models reason? No. But even spending time debating the question offends me now. A more pressing question is what the people building the systems killing our fellow human beings are thinking. What is their reasoning?

For My Sins, The Gods Made Me A Technology Consultant

Cutting to the chase: if your activist organization needs technical advice, I’m offering my expertise, built over decades and still in play. The Internet is enemy territory, so I won’t post an email address in the wild, so to speak, for every poorly adjusted fool to use, but if you follow me on Twitter, Bluesky or Mastodon, reach out or direct your friends and colleagues to this post.

What’s being offered?

In a previous essay, I thought aloud – worked through, perhaps we could say – how an activist organization which lacks the deep pockets of NGOs (and certainly of a multinational) and which wants to minimize the vulnerabilities and ethical issues that arise from using the usual corporate platforms (hyperscalers such as AWS and Azure and ‘productivity’ platforms like Microsoft 365) might navigate available options and create a method for the effective use of computation.

This received some notice but I think the plot was lost; the point wasn’t Yet Another Debate but an offer to contribute.

This is a variation, I’m imagining, of what I’ve done for massive corporations for many years to pay the bills but tailored to the needs and requirements of activist organizations. 

That’s enough preamble, let’s discuss specifics.

Consultation

To corporate technology departments, consultation is marketed as a way to achieve a goal (let’s say ‘cloud modernization,’ a popular buzz term before ‘AI’ was ushered onstage half-dressed and without a script) using the skills of people who are specialists. There are other forms of consulting, such as the management advisory work of McKinsey, a firm so sinister Lucifer himself might think twice about hiring it. Technical consultation, though as full of politics and prejudices as any other aspect of this life, is usually centered on getting something done.

The consultation I’m offering (I think of it as an open statement of work, to use another term of art from the field) is to help your organization sort through options and, hopefully, make the best possible technology choices in a world of artificially constrained possibilities (certainly fewer than existed a decade or so ago). Do you have questions about email systems, collaboration tools, databases, storage, the ins and outs of so-called ‘cloud’ and how to coherently knit all this together? I’m your guy; maybe. Let’s get into the maybe part next.

Who Will I Help?

Sure, I moved to Europe, drink scotch, wear cool boots and smoke the occasional cigar like a Bond villain, but I’m from Philadelphia and, like most of my city kin, believe in speaking directly and plainly; this is why the language and point of view of film noir appeal to me. I’m not interested in helping left media types who bloviate on YouTube (a plague of opinions) or groups of leftoids who argue about obscure aspects of the 18th Brumaire. Dante, were he resurrected, would include all this in a level of Hades.

I’m making myself available to publishers and organizations who are focused on and peopled by marginalized and indigenous folk. We are at war and you need a tech savvy wartime consigliere.

Closer

Well, that’s it. I’m here, the door is open. Reach out via the means I mentioned above if you have the need and fit the profile. Of course, I’ll share email and Discord server details with any serious takers. Ciao.

Kinetic Harm

I write about the information technology industry.

I’ve written about other topics, such as the copaganda of Young Turks’ host Ana Kasparian and Zizek, whose work, to quote John Bellamy Foster, has become “a carnival of irrationalism.” In the main, however, the technology industry generally, and its so-called ‘AI’ sub-category specifically, are my topics. This isn’t random; I’ve worked in this industry for decades and know its dark heart. Honest tech journalism (rather than the boosterism we mostly get) and scholarly examinations are important, but who better to tell a war story than someone in the trenches?

Because I focus on harm and not the fantasy of progress, this isn’t a pursuit that brings wealth or notoriety. There have been a few podcast appearances (a type of sub-micro celebrity, as fleeting as a lightning flash) and opportunities to be published in respected magazines. That’s nice, as far as it goes. It’s important, however, to see clearly and be honest with yourself; this is a Sisyphean task with few rewards, and motivation must be found within and from a community of like-minded people.

Originally, my motivation was to pierce the curtain. If you’ve seen the 1939 MGM film ‘The Wizard of Oz’ you know my meaning: there’s a moment when the supposed wizard, granter of dreams, is revealed to be a sweaty, nervous man, hidden behind a curtain, frantically pulling levers and spinning dials to keep the machinery of delusion functioning. This was my guiding metaphor for the tech industry, which claims its products defy the limits of material reality and surpass human thought.

As you learn more, your understanding should change. Parting the curtain – or debunking – was an acceptable way to start, but it’s insufficient; the promotion of so-called ‘AI’ is producing real-world harms, from automated recidivism decision systems to facial recognition-based arrests and innumerable other intrusions. A technology sold as bringing about a bright future is being deployed to limit possibilities. Digital computation began as a means of enacting a command and control methodology on the world for various purposes (military applications being among the first) and is, in our age, reaching its apotheosis.

Kinetic Harm

Reporting on these harms, as deadly as they often are, fails to tell the entire story of computation in this era of growing instability. The same technologies and methods used to, for example, automate actuarial decision making in the insurance industry can also be used for other, more directly violent aims. The US military, which is known for applying euphemisms to terrible things like a thin coat of paint over rust, calls warfare – that is, killing – kinetic military action. We can call forms of applied computation deliberately intended to produce death and destruction kinetic harm.

Consider the IDF’s Habsora system, described in the +972 Magazine article ‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza –

In one case discussed by the sources, the Israeli military command knowingly approved the killing of hundreds of Palestinian civilians in an attempt to assassinate a single top Hamas military commander. “The numbers increased from dozens of civilian deaths [permitted] as collateral damage as part of an attack on a senior official in previous operations, to hundreds of civilian deaths as collateral damage,” said one source.

“Nothing happens by accident,” said another source. “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed — that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every home.”

According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”

+972 Magazine – https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

The popular phrase ‘artificial intelligence’ – a marketing term, really, since no such thing exists – is used to describe the Habsora system. This creates an exotic distance, as if a glowing black cube floats in space, deciding who dies and how many deaths will occur.

The reality is more mundane, more familiar, even banal; the components of this machine are constantly in use around us. Here is a graphic that shows some of the likely elements:

As we use our phones, register our locations, fill in online forms for business and government services, interact on social media and do so many other things, we unknowingly create threads and weave patterns, stored in databases. The same type of system that enables a credit card fraud detection algorithm to block your card when in-person store transactions are registered in two geographically distant locations on the same day can be used to build a map of your activities and relations to find and kill you and those you know and love. This is what the IDF has done with Habsora. The distance separating the intrusive methods of Meta, Google and fellow travelers from this killing machine is not as great as it seems.
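To make the banality concrete: the fraud-detection pattern described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the names, thresholds and events are mine, not drawn from any real system): join a person’s timestamped location events, compute the travel speed each consecutive pair implies, and flag the implausible ones. Point the same join at phone pings instead of card swipes and you have the skeleton of a movement map.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Event:
    timestamp_h: float  # hours since an arbitrary epoch
    lat: float
    lon: float

def km_between(a: Event, b: Event) -> float:
    """Great-circle (haversine) distance between two events, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def implied_speed_kmh(a: Event, b: Event) -> float:
    dt = abs(b.timestamp_h - a.timestamp_h) or 1e-9
    return km_between(a, b) / dt

def flag_anomalies(events: list[Event], max_kmh: float = 900.0) -> list[tuple[Event, Event]]:
    """Flag consecutive event pairs whose implied travel speed is implausible."""
    ordered = sorted(events, key=lambda e: e.timestamp_h)
    return [(p, q) for p, q in zip(ordered, ordered[1:])
            if implied_speed_kmh(p, q) > max_kmh]

# Two in-person card swipes an ocean apart, one hour apart: flagged.
swipes = [Event(0.0, 39.95, -75.16),   # Philadelphia
          Event(1.0, 48.85, 2.35)]     # Paris
print(len(flag_anomalies(swipes)))  # prints 1
```

Nothing here is exotic: a sort, a distance formula, a threshold. The politics lie entirely in what the events are and who is being joined to whom.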

Before being driven from their homes by the IDF – homes destroyed under the most intensive bombing campaign of this and perhaps even the previous, hyper-violent century – Palestinians in Gaza were subject to a program of surveillance and control which put them completely at the mercy of the Israeli government. All data about their movements and activities passed through electronic infrastructure owned and controlled by Israeli entities. This infrastructure, and the data processing and analysis built upon it, have been assembled into a factory whose product is death – whether targeted or en masse.

The Thin Curtain

Surveillance. Control. Punishment. This is what the age of digital computation has brought on an unprecedented scale. For those of us who live in places where the bombs don’t yet fall, there are things like the following, excerpted from the Forbes article (Feb 23, 2024) ‘Dozens Of KFC, Taco Bell And Dairy Queen Franchises Are Using AI To Track Workers’ –

Like many restaurant owners, Andrew Valkanoff hands out bonuses to employees who’ve done a good job. But at five of his Dairy Queen franchises across North Carolina, those bonuses are determined by AI.

The AI system, called Riley, collects streams of video and audio data to assess workers’ performance, and then assigns bonuses to those who are able to sell more. Valkanoff installed the system, which is developed by Rochester-based surveillance company Hoptix, less than a year ago with the hopes that it would help increase sales at a time when margins were shrinking and food and labor costs were skyrocketing.

Forbes – https://www.forbes.com/sites/rashishrivastava/2024/02/23/dozens-of-kfc-taco-bell-and-dairy-queen-franchises-are-using-ai-to-track-workers/

Inside the zone of comparative safety – but of deprivation for many and control imposed on all – there are systems in service like the IDF’s Habsora, employing the same computational techniques, which, instead of directing sniper rifle-armed quadcopters and F-16s on deadly errands, deprive people of jobs, medical care and freedom. Just as a rocket’s payload can be changed from peaceful to fatal ends, the intended outcomes of such systems can be altered to fit the goals of the states that employ them.

The Shadow

As I write this, approximately 1.4 million Palestinians have been violently pushed to Rafah, a city in the southern Gaza Strip. There, they face starvation and incomprehensible cruelty. Meanwhile, southwest of the ruins of Gaza City, in what has come to be known as the Al Nabulsi massacre, over one hundred Palestinians were killed by IDF fire while desperately trying to get flour. These horrors were accelerated by the use of computationally driven killing systems. In the wake of Habsora’s use in what journalist Antony Loewenstein calls the Palestine Laboratory, we should expect similar techniques to be used elsewhere and to become a standard part of the arsenal of states (yes, even those we call democratic) in their efforts to impose their will on an ever more restless world that struggles for freedom.


References

Artificial intelligence and insurance, part 1: AI’s impact on the insurance value chain

https://www.milliman.com/en/insight/critical-point-50-artificial-intelligence-insurance-value-chain

Kinetic Military Action

https://en.wikipedia.org/wiki/Kinetic_military_action

A mass assassination factory’: Inside Israel’s calculated bombing of Gaza

https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza

Report: Israel’s Gaza Bombing Campaign is the Most Destructive of this Century

https://english.aawsat.com/features/4760791-report-israels-gaza-bombing-campaign-most-destructive-century

‘Massacre’: Dozens killed by Israeli fire in Gaza while collecting food aid

https://www.aljazeera.com/news/2024/2/29/dozens-killed-injured-by-israeli-fire-in-gaza-while-collecting-food-aid

Dozens Of KFC, Taco Bell And Dairy Queen Franchises Are Using AI To Track Workers

https://www.forbes.com/sites/rashishrivastava/2024/02/23/dozens-of-kfc-taco-bell-and-dairy-queen-franchises-are-using-ai-to-track-workers

The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World

Examples of Other Algorithm Directed Targeting Systems

Project Maven

https://www.engadget.com/the-pentagon-used-project-maven-developed-ai-to-identify-air-strike-targets-103940709.html

Generative AI for Defence (marketing material from C3)

https://c3.ai/generative-ai-for-defense

Command, Control, Kill

The IDF assault on Nasser Hospital in southern Gaza joined a long and growing list of bloody infamies committed by Israel since Oct 7, 2023. During a Democracy Now interview, broadcast on Feb 15, 2024, Dr. Khaled Al Serr, who was later kidnapped by the IDF, described what he saw:

Actually, the situation here in the hospital at this moment is in chaos. All of the patients, all the relatives, refugees and also the medical staff are afraid because of what happened. We could not imagine that at any time the Israeli army will bomb the hospital directly, and they will kill patients and medical personnel directly by bombing the hospital building. Yesterday also, Israeli snipers and Israeli quadcopters, which is a drone, carry on it an AR, and with a sniper, they shot all over the building. And they shot my colleague, Dr. Karam. He has a shrapnel inside his head. I can upload for you a CT for him. You can see, alhamdulillah, it was superficial, nothing serious. But a lot of bullets inside their bedroom and the restroom.

The Israeli military is using quadcopters, armed with sniper rifles, as part of its assassination arsenal. These remote-operated drones, which possess limited but still important automatic capabilities (flight stability, targeting persistence), are being used in the genocidal war in Gaza and in the war between Russia and Ukraine, to name two prominent examples. They are likely to make an appearance near you, in some form, soon enough.


I haven’t seen reporting on the type of quadcopter used, but it’s probably the Smash Dragon, a model produced by the Israeli firm Smart Shooter, which, on its website, describes its mission:

SMARTSHOOTER develops state-of-the-art Fire Control Systems for small arms that significantly increase weapon accuracy and lethality when engaging static and moving targets, on the ground and in the air, day and night.

Here is a promotional video for the Smash Dragon:

Smart Shooter’s product, and profit source, is the application of computation to the tasks of increasing accuracy and automating weapon firing. One of its ‘solutions’ (solving, apparently, the ‘problem’ of people being alive) is a fixed-position ‘weapon station’ called the Smash Hopper that enables a distant operator to target-lock the weapon on a person, initiating the firing of a constant stream of bullets. For some reason, the cartoonish word ‘smash’ is popular with the Smart Shooter marketing team.


‘AI’, as used under the current global order, serves three primary purposes: control via sorting, anti-labor propaganda and obscuring culpability. Whenever a hospital deploys an algorithmic system, rather than healthcare worker judgment, to decide how long patients stay, sorting is being used as a means of control, for profit. Whenever a tech CEO tells you that ‘AI’ can replace artists, drivers, filmmakers, etc. the idea of artificial intelligence is employed as an anti-labor propaganda tool. And whenever someone tells you that the ‘AI’ has decided, well, anything, they are trying to hide the responsibility of the people behind the scenes, pushing algorithmic systems on the world.

The armed quadcopter brings all of these purposes together, wrapped in a blood-stained ribbon. Who lives and who dies is decided via remote control while the fingers pulling the trigger, and the people directing them, are hidden from view. These systems are marketed as using ‘AI,’ implying machines are making life and death decisions rather than people.


In the introduction to his 2023 book, The Palestine Laboratory, which details Israel’s role in the global arms trade and its use of the Palestinians as lethal examples, journalist Antony Loewenstein describes a weapons demonstration video attended by Andrew Feinstein in 2009:

“Israel is admired as a nation that stands on its own and is unashamed in using extreme force to maintain it. [Andrew Feinstein is] a former South African politician, journalist, and author. He told me about attending the Paris Air Show in 2009, the world’s largest aerospace industry and air show exhibition. [The Israel-based defense firm Elbit Systems] was showing a promotional video about killer drones, which have been used in Israel’s war against Gaza and over the West Bank.

The footage had been filmed a few months before and showed the reconnaissance of Palestinians in the occupied territories. A target was assassinated. […] Months later, Feinstein investigated the drone strike and discovered that the incident featured in the video had killed a number of innocent Palestinians, including children.  This salient fact wasn’t featured at the Paris Air Show. “This was my introduction to the Israeli arms industry and the way it markets itself.”

The armed quadcopter drone, one of the fruits of an industry built on occupation and death, can be added to the long list of the harms of computation. ‘Keep watching the skies!’ someone said at the end of a 1950s science fiction film whose name escapes me. Never mind though, the advice stands.

References

Democracy Now Interview with Dr. Khaled Al Serr

https://www.democracynow.org/2024/2/15/nasser_hospital_stormed_gaza

Dr. Al Serr kidnapped

The Palestine Laboratory