Pygmalion Displacement – A Review

From the beginning, like a fast-talking shell game huckster, the computer technology industry has relied on sleight of hand.

First, in the 1950s and ’60s, it obscured its military origins and purposes by describing early electronic computers as ‘electronic brains’ fashioned from softly glowing arrays of vacuum tubes. Later, by the 1980s, as the consumer electronics era was launched, the industry presented itself as the silicon-wielding embodiment of the ideas of ‘freedom’ and ‘self-expression’ at the heart of the Californian Ideology (even as it remained fully embedded within systems of command, control and counter-insurgency).

The manic, venture-capital-funded age of corporate ‘AI’ we’re currently subjected to has provided the industry with new opportunities for deception; we are encouraged to believe large language models and other computationally enacted statistical methods are doing the same things as minds. Earlier, I called this deception but, as Lelia A. Erscoi, Annelies Kleinherenbrink, and Olivia Guest describe in their paper, “Pygmalion Displacement: When Humanising AI Dehumanises Women”, a more precise term is displacement.


Uniquely for the field of AI critique, ‘Pygmalion Displacement’ identifies the specific ways women have been theorized and thought about within Western societies and how these ideas have persisted into, and shaped, the computer age.

The paper’s abstract introduces the reader to the authors’ concept:

We use the myth of Pygmalion as a lens to investigate the relationship between women and artificial intelligence (AI). Pygmalion was a legendary king who, repulsed by women, sculpted a statue, which was imbued with life by the goddess Aphrodite. This can be seen as a primordial AI-like myth, wherein humanity creates life-like self-images. The myth prefigures gendered dynamics within AI and between AI and society. Throughout history, the theme of women being replaced by automata or algorithms has been repeated, and continues to repeat in contemporary AI technologies. However, this pattern—that we dub Pygmalion displacement—is under-examined, due to naive excitement or due to an unacknowledged sexist history of the field. As we demonstrate, Pygmalion displacement prefigures heavily, but in an unacknowledged way, in the Turing test: a thought experiment foundational to AI. With women and the feminine being dislocated and erased from and by technology, AI is and has been (presented as) created mainly by privileged men, subserving capitalist patriarchal ends. This poses serious dangers to women and other marginalised people. By tracing the historical and ongoing entwinement of femininity and AI, we aim to understand and start a dialogue on how AI harms women.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 1

Like all great theoretical frameworks (such as Marx’s dialectical and historical materialism), Pygmalion Displacement provides us with a toolkit, the Pygmalion Lens, which can be applied to real-world situations and conditions, sharpening our understanding and revealing what hides in plain sight, obscured by ideology.

Pygmalion Lens Table: Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 14

Apex Delusions

We generally assume that humanity – whether via evolutionary process or divine creation – is at the top of a ladder of being. Many of us love our dogs and cats but believe that because we build rockets and computers and they don’t, we occupy a loftier perch (I recall a Chomsky lecture during which he threw cold water on this vainglory by observing that the creation of nuclear weapons suggested our vaunted intelligence ‘may not be a successful adaptation’).

In the introduction section titled ‘The man, the myth,’ the authors describe another rung on this mythical ladder:

At the top of the proverbial food chain, a majority presence consists of straight white men, those who created, profit from, and work to maintain the capitalist patriarchy and kyriarchy generally (viz. Schüssler Fiorenza 2001). From this perspective, AI can be seen as aiming to seal all humanity’s best qualities in an eternal form, without the setbacks of a mortal human body. It is up for debate, however, what this idealised human(oid) form should look or behave like. When our creation is designed to mimic or be compatible with us, its creator, it will enact, fortify, or extend our pre-existing social values. Therefore, in a field where the vast majority is straight, cisgender, white, and male (Lecher 2019), AI seems less like a promise for all humanity and more like contempt for or even a threat against marginalized communities.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 3

The AI field, dominated by a small cohort, is shaped not only by the idea that humans are superior to the rest of nature but also by the idea that certain humans are superior to others. The imagined artificial general intelligence (AGI) is not simply a thinking machine but a god-like, machine version of the type of person seen as occupying the apex of humanity.

Further on in the introduction, the authors describe how these notions impact women specifically:

Our focus herein is on women in particular, who dwell within the limits of what is expected, having to adhere to standards of ideal and colonial femininity to be considered adequate and then sexualized and deemed incompetent for conforming to them (Lugones 2007). Attitudes towards women and the feminised, especially in the field of technology, have developed over a timeline of gender bias and systemic oppression and rejection. From myths, to hidden careers and stolen achievements (Allen 2017; Evans 2020), to feminized machines, and finally to current AI applications, this paper aims to shine a light on how we currently develop certain AI technologies, in the hope that such harms can be better recognized and curtailed in the future.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 3

On Twitter, as in our walkabout lives, we see and experience these harms in action as the contributions of women in science and technology (and much else besides) are dismissed or attributed to men. I always imagine an army of Jordan Peterson-esque pontificators but, alas, these pirates come in all shapes and sizes.

From Fiction to History and Back Again

Brilliantly, the authors create parallel timelines – one fictional, the other real – to illustrate how displacement has worked in cultural production and material outcomes.

In the fictional timeline, which includes stories ranging from ‘The Sandman’ (1816) to 2018’s PS4 and PC sci-fi adventure game, Detroit: Become Human, we are shown how displacement is woven into our cultural fabric.

Consider this passage on the 2013 film ‘Her’, which depicts a relationship (of sorts) between Theodore, a lonely writer played by Joaquin Phoenix, and an operating system named Samantha, voiced by Scarlett Johansson:

…it is interesting to note that unlike her fictional predecessors, Samantha has no physical form — what makes her appear female is only her name and how she sounds (voiced by Scarlett Johansson), and arguably (that is, from a stereotypical, patriarchal perspective) her cheerful and flirty performance of secretarial, emotional, and sexual labor. In relation to this, Bergen (2016) argues that virtual personal assistants like Siri and Alexa are not perceived as potentially dangerous AI that might turn on us because, in addition to being so integrated into our lives, their embodied form does not evoke unruliness or untrustworthiness: “Unlike Pygmalion’s Galatea or Lang’s Maria, today’s virtual assistants have no body; they consist of calm, rational and cool disembodied voices […] devoid of that leaky, emotive quality that we have come to associate with the feminine body” (p. 101). In such a disembodied state, femininity appears much less duplicitous—however, in Bergen’s analysis, this is deceptive: just as real secretaries and housekeepers are often an invisible presence in the house owing to their femininity (and other marginalized identity markers), people do not take virtual assistants seriously enough to be bothered by their access to private information.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 8

Fictional depictions are juxtaposed with real examples of displacement, such as the often-told (in computer history circles) but not fully appreciated story of the Pedro speech synthesis system and the ELIZA chatbot:

Non-human speech generation has a long history, harking back to systems such as Pedro the voder (voice operating demonstration) in the 1930s (Eschner 2017). Pedro was operated solely by women, despite the fact the name adopted is stereotypically male. The first modern chatbot, however, is often considered to be ELIZA, created by Joseph Weizenbaum in 1964 to simulate a therapist that resulted in users believing a real person was behind the automated responses (Dillon 2020; Hirshbein 2004). The mechanism behind ELIZA was simple pattern matching, but it managed to fool people enough to be considered to have passed the Turing test. ELIZA was designed to learn from its interactions (Weizenbaum 1966), named precisely for this reason. In his paper introducing the chatbot, Weizenbaum (1966) invokes the Pygmalion myth: “Like the Eliza of Pygmalion fame, it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright.” (p. 36) Yet ELIZA the chatbot had the opposite effect than Weizenbaum intended, further fuelling a narrative of human-inspired machines.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 20

Later in this section, quoting from a work by Sarah Dillon on ‘The Eliza Effect’, we’re told about Weizenbaum’s contextual gendering of ELIZA:

Weizenbaum genders the program as female when it is under the control of the male computer programmer, but it is gendered as male when it interacts with a [female] user. Note in particular that in the example conversation given [in Weizenbaum’s Computer Power and Human Reason, 1976], this is a disempowered female user, at the mercy of her boyfriend’s wishes and her father’s bullying, defined by and in her relationship to the men whom, she declares, ‘are all alike.’ Weizenbaum’s choice of names is therefore adapted and adjusted to ensure that the passive, weaker or more subservient position at any one time is always gendered as female, whether that is the female-gendered computer program controlled by its designers, or the female-gendered human woman controlled by the patriarchal figures in her life.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 21

This passage was particularly interesting to me because I’ve long admired Weizenbaum’s thoughtful dissection of his own work. I learned from his critique of computation as an ideology but missed his Pygmalion framing; the Pygmalion Lens enables a new way of seeing assumptions and ideas that are taken for granted like the air we breathe.


There is much more to discuss, such as an eye-opening investigation into the over-celebrated Turing Test (today, more marketing gimmick than assessment technique), which began as a thought experiment built around a guessing game about gender, a test which (astoundingly) “…required a real woman […] to prove her own humanity in competition with the computer.”

This is a marvellous and important paper. It presents more than a theory; it gives us a toolkit and method for changing the way we think about the field of computation (and its loud ‘AI’ partisans) under patriarchal capitalism.

Manifesto on Algorithmic Sabotage: A Review

On 1 April 2024, Twitter user mr.w0bb1t made a post to their feed.

The post points readers to the document, MANIFESTO ON “ALGORITHMIC SABOTAGE” created by the Algorithmic Sabotage Research Group (ASRG) and described as follows:

[the Manifesto] presents a preliminary version of 10 statements on the principles and practice of algorithmic sabotage ..

… The #manifesto is designed to be developed and will be regularly updated, please consider it under the GNU Free Documentation License v1.3 ..

The struggle for “algorithmic sabotage” is everywhere in the algorithmic factory. Full frontal resistance against digital oppression & authoritarianism ..

Internationalist solidarity & confidence in popular self-determination, in the only force that can lead the struggle to the end ..

MANIFESTO ON “ALGORITHMIC SABOTAGE” – https://tldr.nettime.org/@asrg/112195008380261222

Tech industry critique is fixated on resistance to false narratives: debunking as praxis. This is understandable; the industry’s propaganda campaign is relentless and successful, requiring an informed and equally relentless response.

This traps us in a feedback loop of call and response in which OpenAI (for example) makes absurd, anti-worker and supremacist claims about the capabilities of the systems it’s selling, prompting researchers and technologists who know these claims to be lies to spend precious time ‘debunking.’


The ‘Manifesto’ consists of ten statements, numbered 0 through 9. In what follows, I’ll list each and offer some thoughts based on my experience of the political economy of the technology industry (i.e., how computation is used in large scale private and public environments and for what purposes) and thoughts about resistance.

Statement 0. The “Algorithmic Sabotage” is a figure of techno-disobedience for the militancy that’s absent from technology critique.

Comment: This is undeniably true. Among technologists as a class of workers, and tech industry analysts as a loosely organized grouping, there is very little said, or apparently thought, about what “techno-disobedience” might look like. One thing that immediately occurs to me, as an example of what resistance might look like, is a complete rejection of the idea of obsolescence and the adoption of an attitude of, if not computational permaculture, then long computation.

Statement 1. Rather than some atavistic dislike of technology, “Algorithmic Sabotage” can be read as a form of counter-power that emerges from the strength of the community that wields it.

Comment: “Counter-power,” something the historic Luddites – who were not ‘anti-technology’ (whatever that means) – understood, is a marvellous turn of phrase. An example might be the use of the concepts that hyperscale computation rentiers such as Microsoft and Amazon call ‘cloud computing’ for our own purposes. Imagine a shared computational resource for a community, built on a ‘long computing’ infrastructure that rejects obsolescence and offers the resources a community might need for telecommunications, data analysis as a decision aid and other benefits.

Statement 2. The “Algorithmic Sabotage” cuts through the capitalist ideological framework that thrives on misery by performing a labour of subversion in the present, dismantling contemporary forms of algorithmic domination and reclaiming spaces for ethical action from generalized thoughtlessness and automaticity.

Comment: We see examples of “contemporary forms of algorithmic domination” and “generalized thoughtlessness” in what is called ‘AI,’ particularly the push to insert large language models into every nook and cranny. Products such as Microsoft Copilot serve no purpose aside from profit maximization. This is thoughtlessness manifested. Resistance to this means rejecting the idea that there is any use for such systems and proposing an alternative view; for example, the creation of knowledge retrieval techniques built on attribution and open access to information.

Statement 3. The “Algorithmic Sabotage” is an action-oriented commitment to solidarity that precedes any system of social, legal or algorithmic classification.

Comment: Alongside other capitalist sectors, the tech industry creates and benefits from alienation. There was a moment in the 1980s and 90s when technology workers could have achieved a class consciousness, understanding the critical importance of their collective work to the functioning of society. This was intercepted by the introduction of the idea of atomized professionalism, which successfully created a perceptual gulf between tech workers and workers in other sectors, and also between tech workers and the people who use the systems they craft and manage, reduced to the label ‘users.’ Arrogance in tech industry circles is common, preventing solidarity within the group and with others. Resistance to this might start with the rejection of the false elevation of ‘professionalism’ (which has been successfully used in other sectors, such as academia, to neutralize solidarity).

Statement 4. The “Algorithmic Sabotage” is a part of a structural renewal of a wider movement for social autonomy that opposes the predations of hegemonic technology through wildcat direct action, consciously aligned itself with ideals of social justice and egalitarianism.

Comment: There is a link between statement 3, which calls for a commitment to solidarity, and statement 4, which imagines wildcat action against hegemonic technology; solidarity is the connecting idea. Is it possible to build such solidarity within existing tech industry circles? The signs are not good. Resistance might come from distributing expertise outside of the usual circles. We see examples of this in indigenous and diaspora communities, in which there are often tech adepts able and willing to act as interpreters, bridges, troubleshooters and teachers.

Statement 5. The “Algorithmic Sabotage” radically reworks our technopolitical arrangements away from the structural injustices, supremacist perspectives and necropolitical power layered into the “algorithmic empire”, highlighting its materiality and consequences in terms of both carbon emissions and the centralisation of control.

Comment: This statement uses the debunking framework as its baseline – for example, the critique of ‘cloud’ must be grounded in an understanding of the materiality of computation (mineral extraction and processing, with their associated labor, environmental and societal impacts) and of the necropolitical, command-and-control nature of applied computation. Resistance here might include an insistence on materiality (including open education about the computational supply chain) and a robust rejection of computation as a means of control and obscured decision making.

I’ll list the next two statements together because I think they form a theme:

Statement 6. The “Algorithmic Sabotage” refuses algorithmic humiliation for power and profit maximisation, focusing on activities of mutual aid and solidarity.

Statement 7. The first step of techno-politics is not technological but political. Radical feminist, anti-fascist and decolonial perspectives are a political challenge to “Algorithmic Sabotage”, placing matters of interdependence and collective care against reductive optimisations of the “algorithmic empire”.

Comment: Ideas are hegemonic. We accept, without question, Meta/Facebook’s surveillance-based business model as the cost of entry to a platform countless millions depend on to maintain far-flung connections (and sometimes even local ones, in our age of forced disconnection and busy-ness). The refusal to accept humiliation would mean recognizing algorithmic exploitation and consciously rejecting it. Resistance here means not assuming good intent and staying alert, but also choosing ‘collective care.’ This is the opposite of the war of all against all created by social media platforms whose system behaviors are manipulated via attention-directing methods.

The final two statements can also be treated as parts of a whole:

Statement 8. The “Algorithmic Sabotage” struggles against algorithmic violence and fascistic solutionism, focusing on artistic-activist resistances that can express a different mentality, a collective “counter-intelligence”.

Statement 9. The “Algorithmic Sabotage” is an emancipatory defence of the need for community constraint of harmful technology, a struggle against the abstract segregation “above” and “below” the algorithm.

Comment: Statement 8 conveys an important insight: what we accept, despite our complaints, as normal system behavior on platforms such as Twitter is indeed “algorithmic violence.” When we use these platforms, finding friends and comrades (if we’re fortunate), we are moving through enemy terrain, constantly engaged in a struggle against harm. I’m not certain, but I imagine that by “fascistic solutionism” the ASRG mean the proposing of control to manage control – that is, the sort of ‘solution’ we see as the US Congress claims to address issues with TikTok via nationalistic, and thereby fascistic, appeals and legislation. The ‘Manifesto’ encourages us to go beyond acceptance, above or below ‘the algorithm,’ and to build a path that rejects the tyranny that creates and nurtures these systems.

Beyond Command and Control

In his book ‘Surveillance Valley’ (published in 2018), journalist Yasha Levine traces the Internet’s use as a population control tool to its start as an ARPA project for the military. Again and again, detailing efforts such as Project Camelot and many others besides, Levine describes the technology platforms we regard as essentially benign, merely off course (and therefore reformable), as a counter-insurgency initiative by the US government and its corporate partners which persists to this day. The ‘insurgents’, in this situation, are the population as a whole.

Viewed this way, it’s impossible to see the current digital computation regime as anything but a terrain of struggle. The MANIFESTO ON “ALGORITHMIC SABOTAGE” is an effort to help us get our heads right. From the moment of digital computation’s inception, war was declared, but most of us don’t yet recognize it. In the course of this war, much has been lost, including alternative visions of algorithmic use. The MANIFESTO ON “ALGORITHMIC SABOTAGE” calls on us to assume the persona (where resistance starts) of people who know they’re under attack and who think and plan accordingly.

It’s an incomplete but vital response to the debunking perspective, which assumes a new world can be fashioned from ideas that are inherently anti-human.

Leaving the Lyceum

Can large language models – known by the acronym LLM – reason? 

This is a hotly debated topic in so-called ‘tech’ circles and the academic and media groups that orbit that world like one of Jupiter’s radiation-blasted moons. I dropped the phrase ‘can large language models reason’ into Google (that rusting machine) and got this result:

This is only a small sample. According to Google, there are “About 352.000.000 results.” We can safely conclude from this, and from the back and forth that endlessly repeats in Twitter groups that discuss ‘AI’, that there is a lot of interest in arguing the matter, pro and con. Is this debate, if indeed it can be called that, the least bit important? What is at stake?

***

According to ‘AI’ industry enthusiasts, nearly everything is at stake; a bold new world of thinking machines is upon us. What could be more important? To answer this question, let’s do another Google search, this time for the phrase ‘Project Nimbus’:

The first result returned was a Wikipedia article, which starts with this:

Project Nimbus (Hebrew: פרויקט נימבוס) is a cloud computing project of the Israeli government and its military. The Israeli Finance Ministry announced in April 2021, that the contract is to provide “the government, the defense establishment, and others with an all-encompassing cloud solution.” Under the contract, the companies will establish local cloud sites that will “keep information within Israel’s borders under strict security guidelines.”

Wikipedia: https://en.wikipedia.org/wiki/Project_Nimbus

What sorts of things does Israel do with the system described above? We don’t have precise details, but there are clues, such as this excerpt from the +972 Magazine article ‘“A mass assassination factory”: Inside Israel’s calculated bombing of Gaza’ –

According to the [+972 Magazine] investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”

+972: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

***

History and legend tell us that in ancient Athens there was a place called the Lyceum, founded by Aristotle, where the techniques of the Peripatetic school were practiced. Peripatetic means, more or less, ‘walking about’, which reflects the method: philosophers and students, mingling freely, discussing ideas. There are centuries of accumulated hagiography about this school. No doubt it was nice for those not subject to the slave system of ancient Greece.

Similarly, debates about whether or not LLMs can reason are nice for those of us not subject to Hellfire missiles fired by Apache helicopters sent on their errands by targeting algorithms. But I am aware of the pain of people who are subject to those missiles. I can’t unsee the death facilitated by computation.

This is why I have to leave the debating square, the social media crafted lyceum. Do large language models reason? No. But even spending time debating the question offends me now. A more pressing question is what the people building the systems killing our fellow human beings are thinking. What is their reasoning?

For My Sins, The Gods Made Me A Technology Consultant

Cutting to the chase: if your activist organization needs technical advisory services, I’m offering my expertise, built over decades and still in play. The Internet is enemy territory, so I won’t post an email address in the wild, so to speak, for every poorly adjusted fool to use, but if you follow me on Twitter, Bluesky or Mastodon, reach out or direct your friends and colleagues to this post.

What’s being offered?

In a previous essay, I thought aloud – worked through, perhaps we could say – how an activist organization which lacks the deep pockets of NGOs (and certainly of a multinational) and which wants to minimize the vulnerabilities and ethical issues that arise from using the usual corporate platforms (hyperscalers such as AWS and Azure and ‘productivity’ platforms like Microsoft 365) might navigate available options and create a method for the effective use of computation.

This received some notice but I think the plot was lost; the point wasn’t Yet Another Debate but an offer to contribute.

This is a variation, I’m imagining, of what I’ve done for massive corporations for many years to pay the bills but tailored to the needs and requirements of activist organizations. 

That’s enough preamble, let’s discuss specifics.

Consultation

To corporate technology departments, consultation is marketed as a way to achieve a goal (let’s say ‘cloud modernization’, a popular buzz term before ‘AI’ was ushered onstage half-dressed and without a script) using the skills of people who are specialists. There are other forms of consulting, such as the management advisory work of McKinsey, a firm so sinister Lucifer himself might think twice about hiring it. Technical consultation, though as full of politics and prejudices as any other aspect of this life, is usually centered on getting something done.

The consultation I’m offering (I think of it as an open statement of work, to use another term of art from the field) is to help your organization sort through options and, hopefully, make the best possible technology choices in a world of artificially constrained possibilities (certainly fewer than existed a decade or so ago). Do you have questions about email systems, collaboration tools, databases, storage, the ins and outs of so-called ‘cloud’ and how to coherently knit all this and more together? I’m your guy; maybe. Let’s get into the maybe part next.

Who Will I Help?

Sure, I moved to Europe, drink scotch, wear cool boots and smoke the occasional cigar like a Bond villain, but I’m from Philadelphia and, like most of my city kin, believe in speaking directly and plainly; this is why the language and point of view of film noir appeal to me. I’m not interested in helping left media types who bloviate on Youtube (a plague of opinions) or groups of leftoids who argue about obscure aspects of the 18th Brumaire. Dante, were he resurrected, would include all this in a level of Hades.

I’m making myself available to publishers and organizations who are focused on and peopled by marginalized and indigenous folk. We are at war and you need a tech-savvy wartime consigliere.

Closer

Well, that’s it. I’m here, the door is open. Reach out via the means I mentioned above if you have the need and fit the profile. Of course, I’ll share email and Discord server details with any serious takers. Ciao.

Kinetic Harm

I write about the information technology industry.

I’ve written about other topics, such as the copaganda of Young Turks host Ana Kasparian and the work of Žižek, which, to quote John Bellamy Foster, has become “a carnival of irrationalism.” In the main, however, the technology industry generally, and its so-called ‘AI’ sub-category specifically, are my topics. This isn’t random; I’ve worked in this industry for decades and know its dark heart. Honest tech journalism (rather than the boosterism we mostly get) and scholarly examinations are important but who better to tell a war story than someone in the trenches?

Because I focus on harm and not the fantasy of progress, this isn’t a pursuit that brings wealth or notoriety. There have been a few podcast appearances (a type of sub-micro celebrity, as fleeting as a lightning flash) and opportunities to be published in respected magazines. That’s nice, as far as it goes. It’s important, however, to see clearly and be honest with yourself; it’s a sisyphean task with few rewards, and motivation must be found within and from a community of like-minded people.

Originally, my motivation was to pierce the curtain. If you’ve seen the 1939 MGM film ‘The Wizard of Oz’ you know my meaning: there’s a moment when the supposed wizard, granter of dreams, is revealed to be a sweaty, nervous man, hidden behind a curtain, frantically pulling levers and spinning dials to keep the machinery of delusion functioning. This was my guiding metaphor for the tech industry, which claims its products defy the limits of material reality and surpass human thought.

As you learn more, your understanding should change. Parting the curtain, or debunking, was an acceptable way to start, but it’s insufficient; the promotion of so-called ‘AI’ is producing real-world harms, from automated recidivism decision systems to facial-recognition-based arrests and innumerable other intrusions. A technology sold as bringing about a bright future is being deployed to limit possibilities. Digital computation began as a means of enacting a command and control methodology on the world for various purposes (military applications being among the first) and is, in our age, reaching its apotheosis.

Kinetic Harm

Reporting on these harms, as deadly as they often are, fails to tell the entire story of computation in this era of growing instability. The same technologies and methods used to, for example, automate actuarial decision making in the insurance industry can also be used for other, more directly violent aims. The US military, which is known for applying euphemisms to terrible things like a thin coat of paint over rust, calls warfare – that is, killing – kinetic military action. We can call forms of applied computation deliberately intended to produce death and destruction kinetic harm.

Consider the IDF’s Habsora system, described in the +972 Magazine article ‘“A mass assassination factory”: Inside Israel’s calculated bombing of Gaza’ –

In one case discussed by the sources, the Israeli military command knowingly approved the killing of hundreds of Palestinian civilians in an attempt to assassinate a single top Hamas military commander. “The numbers increased from dozens of civilian deaths [permitted] as collateral damage as part of an attack on a senior official in previous operations, to hundreds of civilian deaths as collateral damage,” said one source.

“Nothing happens by accident,” said another source. “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed — that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every home.”

According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”

+972 Magazine – https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

The popular phrase ‘artificial intelligence’ – a marketing term, really, since no such thing exists – is used to describe the Habsora system. This creates an exotic distance, as if a glowing black cube floats in space deciding who dies and how many deaths will occur.

The reality is more mundane, more familiar, even banal; the components of this machine are constantly in use around us. Here is a graphic that shows some of the likely elements:

As we use our phones, register our locations, fill in online forms for business and government services, interact on social media and do so many other things, we unknowingly create threads and weave patterns, stored in databases. The same type of system that enables a credit card fraud detection algorithm to block your card if in-person store transactions are registered in two geographically distant locations on the same day can be used to build a map of your activities and relations to find and kill you and those you know and love. This is what the IDF has done with Habsora. The distance separating the intrusive methods of Meta, Google and fellow travelers from this killing machine is not as great as it seems.

Before being driven from their homes by the IDF – homes destroyed under the most intensive bombing campaign of this and perhaps even the previous, hyper-violent century – Palestinians in Gaza were subject to a program of surveillance and control which put them completely at the mercy of the Israeli government. All data about their movements and activities passed through electronic infrastructure owned and controlled by Israeli entities. This infrastructure, and the data processing and analysis built upon it, have been assembled into a factory whose product is death – whether targeted or en masse.

The Thin Curtain

Surveillance. Control. Punishment. This is what the age of digital computation has brought, on an unprecedented scale. For those of us who live in places where the bombs don’t yet fall, there are things like the following, excerpted from the Forbes article (Feb 23, 2024) ‘Dozens Of KFC, Taco Bell And Dairy Queen Franchises Are Using AI To Track Workers’ –

Like many restaurant owners, Andrew Valkanoff hands out bonuses to employees who’ve done a good job. But at five of his Dairy Queen franchises across North Carolina, those bonuses are determined by AI.

The AI system, called Riley, collects streams of video and audio data to assess workers’ performance, and then assigns bonuses to those who are able to sell more. Valkanoff installed the system, which is developed by Rochester-based surveillance company Hoptix, less than a year ago with the hopes that it would help increase sales at a time when margins were shrinking and food and labor costs were skyrocketing.

Forbes – https://www.forbes.com/sites/rashishrivastava/2024/02/23/dozens-of-kfc-taco-bell-and-dairy-queen-franchises-are-using-ai-to-track-workers/

Inside this zone of comparative safety – but of deprivation for many and control imposed on all – there are systems in service which employ the same computational techniques as the IDF’s Habsora but which, instead of directing sniper-rifle-armed quadcopters and F-16s on deadly errands, deprive people of jobs, medical care and freedom. Just as a rocket’s payload can be changed from peaceful to fatal ends, the intended outcomes of such systems can be altered to fit the goals of the states that employ them.

The Shadow

As I write this, approximately 1.4 million Palestinians have been violently pushed to Rafah, a city in the southern Gaza Strip. There, they are facing starvation and incomprehensible cruelty. Meanwhile, southwest of the ruins of Gaza City, in what has come to be known as the Al Nabulsi massacre, over one hundred Palestinians were killed by IDF fire while desperately trying to get flour. These horrors were accelerated by the use of computationally driven killing systems. In the wake of Habsora’s use in what journalist Antony Loewenstein calls the Palestine Laboratory, we should expect similar techniques to be used elsewhere and to become a standard part of the arsenal of states (yes, even those we call democratic) in their efforts to impose their will on an ever more restless world that struggles for freedom.


References

Artificial intelligence and insurance, part 1: AI’s impact on the insurance value chain

https://www.milliman.com/en/insight/critical-point-50-artificial-intelligence-insurance-value-chain

Kinetic Military Action

https://en.wikipedia.org/wiki/Kinetic_military_action

‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza

https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza

Report: Israel’s Gaza Bombing Campaign is the Most Destructive of this Century

https://english.aawsat.com/features/4760791-report-israels-gaza-bombing-campaign-most-destructive-century

‘Massacre’: Dozens killed by Israeli fire in Gaza while collecting food aid

https://www.aljazeera.com/news/2024/2/29/dozens-killed-injured-by-israeli-fire-in-gaza-while-collecting-food-aid

Dozens Of KFC, Taco Bell And Dairy Queen Franchises Are Using AI To Track Workers

https://www.forbes.com/sites/rashishrivastava/2024/02/23/dozens-of-kfc-taco-bell-and-dairy-queen-franchises-are-using-ai-to-track-workers

The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World

Examples of Other Algorithm Directed Targeting Systems

Project Maven

https://www.engadget.com/the-pentagon-used-project-maven-developed-ai-to-identify-air-strike-targets-103940709.html

Generative AI for Defence (marketing material from C3)

https://c3.ai/generative-ai-for-defense

Command, Control, Kill

The IDF assault on Nasser Hospital in southern Gaza joined a long and growing list of bloody infamies committed by Israel since Oct 7, 2023. During a Democracy Now interview, broadcast on Feb 15, 2024, Dr. Khaled Al Serr, who was later kidnapped by the IDF, described what he saw:

Actually, the situation here in the hospital at this moment is in chaos. All of the patients, all the relatives, refugees and also the medical staff are afraid because of what happened. We could not imagine that at any time the Israeli army will bomb the hospital directly, and they will kill patients and medical personnel directly by bombing the hospital building. Yesterday also, Israeli snipers and Israeli quadcopters, which is a drone, carry on it an AR, and with a sniper, they shot all over the building. And they shot my colleague, Dr. Karam. He has a shrapnel inside his head. I can upload for you a CT for him. You can see, alhamdulillah, it was superficial, nothing serious. But a lot of bullets inside their bedroom and the restroom.”

The Israeli military is using quadcopters, armed with sniper rifles, as part of its assassination arsenal. These remote-operated drones, which possess limited but still important automatic capabilities (flight stability, targeting persistence), are being used in the genocidal war in Gaza and the war between Russia and Ukraine, to name two prominent examples. They are likely to make an appearance near you, in some form, soon enough.


I haven’t seen reporting on the type of quadcopter used, but it’s probably the Smash Dragon, a model produced by the Israeli firm Smart Shooter, which, on its website, describes its mission:

SMARTSHOOTER develops state-of-the-art Fire Control Systems for small arms that significantly increase weapon accuracy and lethality when engaging static and moving targets, on the ground and in the air, day and night.

Here is a promotional video for the Smash Dragon:

Smart Shooter’s product, and profit source, is the application of computation to the tasks of increasing accuracy and automating weapon firing. One of their ‘solutions’ (solving, apparently, the ‘problem’ of people being alive) is a fixed-position ‘weapon station’ called the Smash Hopper that enables a distant operator to target-lock the weapon on a person, initiating the firing of a constant stream of bullets. For some reason, the cartoonish word ‘smash’ is popular with the Smart Shooter marketing team.


‘AI’, as used under the current global order, serves three primary purposes: control via sorting, anti-labor propaganda and obscuring culpability. Whenever a hospital deploys an algorithmic system, rather than healthcare worker judgment, to decide how long patients stay, sorting is being used as a means of control, for profit. Whenever a tech CEO tells you that ‘AI’ can replace artists, drivers, filmmakers, etc. the idea of artificial intelligence is employed as an anti-labor propaganda tool. And whenever someone tells you that the ‘AI’ has decided, well, anything, they are trying to hide the responsibility of the people behind the scenes, pushing algorithmic systems on the world.

The armed quadcopter brings all of these purposes together, wrapped in a blood-stained ribbon. Who lives and who dies is decided via remote control while the fingers pulling the trigger, and the people directing them, are hidden from view. These systems are marketed as using ‘AI’, implying that machines, rather than people, are making life and death decisions.


In the introduction to his 2023 book, The Palestine Laboratory, which details Israel’s role in the global arms trade and its use of the Palestinians as lethal examples, journalist Antony Loewenstein describes a weapons demonstration video attended by Andrew Feinstein in 2009:

“Israel is admired as a nation that stands on its own and is unashamed in using extreme force to maintain it. [Andrew Feinstein is] a former South African politician, journalist, and author. He told me about attending the Paris Air Show in 2009, the world’s largest aerospace industry and air show exhibitions. [The Israel-based defense firm Elbit Systems] was showing a promotional video about killer drones, which have been used in Israel’s war against Gaza and over the West Bank.

The footage had been filmed a few months before and showed the reconnaissance of Palestinians in the occupied territories. A target was assassinated. […] Months later, Feinstein investigated the drone strike and discovered that the incident featured in the video had killed a number of innocent Palestinians, including children. This salient fact wasn’t featured at the Paris Air Show. “This was my introduction to the Israeli arms industry and the way it markets itself.”

The armed quadcopter drone, one of the fruits of an industry built on occupation and death, can be added to the long list of the harms of computation. ‘Keep watching the skies!’ someone said at the end of a 1950s science fiction film whose name escapes me. Never mind; the advice stands.

References

Democracy Now Interview with Dr. Khaled Al Serr

https://www.democracynow.org/2024/2/15/nasser_hospital_stormed_gaza

Dr. Al Serr kidnapped

The Palestine Laboratory

Information Technology for Activists – What is To Be Done?

Introduction

This is written in the spirit of the Request for Comments memoranda that shaped the early Internet. RFCs, as they are known, are submitted to propose a technology or methodology and to gather comments and corrections from relevant and knowledgeable community members, in the hope of becoming a widely accepted standard.

Purpose

This is a consideration of the information technology options available to politically and socially active organizations, and a high-level overview of the technical landscape. The target audience is technical decision makers in groups whose political commitments challenge the prevailing order and are focused on liberation. In this document, I will provide a brief history of past patterns and compare these to current choices, identifying the problems of various models and potential opportunities.

Alongside this blog post there is a living document posted for collaboration here. I invite discussion of ideas, methods and technologies I may have missed or might be unaware of, to improve accuracy and usefulness.

Being Intentional About Technology Choices

It is a truism that modern organizations require technology services. Less commonly discussed are the political, operational, cost and security implications of this dependence from the perspective of activists. It’s important to be intentional about technological choices and deployments with these and other factors in mind. The path of least resistance, such as choosing Microsoft 365 for collaboration rather than building on-premises systems, may be the best, or least terrible, choice for an organization, but the decision to use it should come after weighing the pros and cons of other options. What follows is not an exhaustive history; I am purposefully leaving out many granular details to get to the point as efficiently as possible.

A Brief History of Organizational Computing

By ‘organizational computing’ I’m referring to the use of digital computers, arranged into service platforms, by non-governmental and non-military organizations. This section is a high-level walkthrough of the patterns that have been used in this sector.

Mainframes

IBM 360 in Computer Room – mid 1960s

The first use of digital computing at scale was the deployment of mainframe systems as centrally hosted resources. User access, limited to specialists, was provided via a time-sharing method in which ‘dumb’ terminals displayed the results of programs and enabled input (punch cards were also used for entering program instructions). One of the most successful systems was the IBM 360 (operational from 1965 to 1978). Due to the expense, typical customers were large banks, universities and other organizations with deep pockets.

Client Server

Classic Client Server Architecture (Microsoft)

The introduction of personal computers in the 1980s created the raw material for the development of networked, smaller-scale systems that could supplement mainframes and provide organizations with the ability to host relatively modest computing platforms suited to their requirements. By the 1990s, this became the dominant model used by organizations at all scales (mainframes remain in service, but their usage profile narrowed – for example, to running applications requiring greater processing capability than is possible using PC servers).

The client-server era spawned a variety of software applications to meet organizational needs, such as email servers (for example, Sendmail and Microsoft Exchange), database servers (for example, Postgres and SQL Server), web servers such as Apache, and so on. Companies such as Novell, Cisco, Dell and Microsoft rose to prominence during this time.

As the client-server era matured and the need for computing power grew, companies like VMware sold platforms that enabled the creation of virtual machines (software mimics of physical servers). Organizations that could not afford to own or rent large data centers could deploy the equivalent of hundreds or thousands of servers within a smaller number of more powerful (in terms of processing capacity and memory) computing systems running VMware’s ESX software platform. Of course, the irony of this return to something like a mainframe was not lost on information technology workers whose careers spanned the mainframe and client-server eras.

Cloud computing

Cloud Pattern (Amazon Web Services)

Virtualization, combined with the improved Internet access of the early 2000s, gave rise to what is now called ‘cloud.’ Among information technology workers, it was popular to say ‘there is no cloud, it’s just someone else’s computer.’ Overconfident cloud enthusiasts considered this the complaint of a fading old guard, but it is undeniably true.

The Cloud Model

There are four modes of cloud computing (a minimal code sketch follows the list):

  • Infrastructure as a Service – IaaS (for example, building virtual machines on platforms such as Microsoft Azure, Amazon Web Services or Google Cloud Platform)
  • Platform as a Service – PaaS (for example, databases offered as a managed service, eliminating the need to create a server as host)
  • Software as a Service – SaaS (platforms like Microsoft 365 fall into this category)
  • Function as a Service – FaaS (deployment using software development – ‘code’ – alone, with no infrastructure management responsibilities)
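
To make the division of responsibility concrete, here is a minimal Python sketch of the FaaS mode. It is an illustration only: the handler signature and event shape are assumptions loosely modeled on common FaaS platforms, not any specific provider’s API.

```python
# Hypothetical FaaS-style handler: the provider owns the servers, OS,
# runtime and scaling; the organization supplies only this function.
# The event/context shapes are illustrative assumptions.
def handler(event, context=None):
    name = event.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}"}

if __name__ == "__main__":
    # Local test call; in production the platform invokes handler()
    # in response to an HTTP request, queue message, timer, etc.
    print(handler({"name": "activist-org"}))
```

Under IaaS, by contrast, the organization would also build and patch the virtual machine this function runs on; under SaaS, it would write no code at all, consuming a finished application.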

A combination of perceived (but rarely realized) convenience, marketing hype and mostly unfulfilled promises of lower running costs has made the cloud model the dominant mode of the 2020s. In the 1990s and early 2000s, an organization requiring an email system was compelled to acquire hardware and software to configure and host its own platform (the Microsoft Exchange email system running on Dell server hardware, physical or virtualized via VMware, was a common pattern). The availability of Office 365 (later, Microsoft 365) and Google’s G Suite provided another attractive option that eliminated the need to manage systems while providing the email function.

A Review of Current Options for Organizations

Although tech industry marketing presents new developments as replacing old, all of the pre-cloud patterns mentioned above still exist. The question is, what makes sense for your organization from the perspectives of:

  • Cost
  • Operational complexity
  • Maintenance complexity
  • Security and exposure to vulnerabilities
  • Availability of skilled workers (related to the ability to effectively manage all of the above)

We needn’t include mainframes in this section since they are cost-prohibitive and, today, intended for specialized, high-performance applications.

Client Server (on-premises)

By ‘on-premises’ we are referring to systems that are not cloud-based. Before the cloud era, the client-server model was the dominant pattern for organizations of all sizes. Servers can be hosted within a data center the organization owns or within rented space in a colocation facility (a business that provides rented space for the servers of various clients).

Using a client-server model requires employing staff who can install, configure and maintain systems. These skills were once common, indeed standard, and salaries were within the reach of many mid-size organizations. The cloud era has made these skills harder to come by (although there are still many skilled and enthusiastic practitioners). A key question: how much investment does your organization want to make in the time and effort required to build and manage its own systems? Additional considerations arise from software licensing and from software and hardware maintenance cycles.

Sub-categories of client server to consider

Virtualization and Hyper-converged hardware

As mentioned above, the use of virtualization systems, offered by companies such as VMware, was one method that arose during the heyday of client-server to address the need for more concentrated computing power in a smaller data center footprint.

Hyper-converged infrastructure (HCI) systems, which combine compute, storage and networking into a single hardware chassis, are a further development of this method. HCI systems and virtualization reduce the required operational overhead. More about this later.

Hybrid architectures

A hybrid architecture uses a mixture of on-premises and off-site, typically ‘cloud’-based, systems. For example, an organization’s data might be stored on-site while the applications using that data are hosted by a cloud provider.
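
As an illustration of this pattern, here is a minimal Python sketch, assuming a cloud-hosted application that reads from a PostgreSQL database kept on the organization’s own hardware and reached over a site-to-site VPN; every name, address and credential below is an invented placeholder.

```python
# A cloud-hosted app reads data that never leaves the organization's
# premises at rest; the database server sits in the office/colo rack
# and is reachable only through a VPN tunnel (address is hypothetical).
import psycopg2  # PostgreSQL client library (pip install psycopg2-binary)

def fetch_members():
    conn = psycopg2.connect(
        host="10.8.0.12",      # on-premises server, via the VPN tunnel
        dbname="org_data",
        user="app_reader",
        password="change-me",  # in practice, read from a secrets store
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT name, email FROM members LIMIT 10;")
        return cur.fetchall()
```

The design choice being illustrated: the cloud provider hosts compute, which is comparatively disposable, while the data, the asset that matters, stays under the organization’s physical control (see the First Doctrine, below).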

Cloud

Software as a Service

Software as a Service platforms such as Microsoft 365 are the most popular cloud services used by firms of all types and sizes, including activist groups. The reasons are easy to understand:

  • Email services without the need to host an email server
  • Collaboration tools (SharePoint and MS Teams for example) built into the standard licensing schemes
  • Lower (but not zero) operational responsibility
  • Hardware maintenance and uptime are handled by the service provider

The convenience comes at a price, both financial, as licensing costs increase, and operational, inasmuch as organizations tend to place all of their data and workflows within these platforms, creating deep dependencies.

Build Platforms

The use of ‘build platforms’ like Azure and AWS is more complex than the consumption model of services such as Microsoft 365. Originally, these were designed to meet the needs of organizations that have development and infrastructure teams and host complex applications. More recently, the ‘AI’ hype push has made these platforms Trojan horses for hyperscale algorithmic systems (note, as an example, Microsoft’s investment in and use of OpenAI’s large language model kit). The most common pattern is a replication of large-scale on-premises architectures using virtual machines on a cloud platform.

Although marketed as superior to, and simpler than, on-premises options, cloud platforms require as much, and often more, technical expertise. Cost overruns are common; cloud platforms make it easy to deploy new things, but each item generates a cost, and even small organizations can create very large bills (a toy illustration follows). Security is another factor; configuration mistakes are common and there are many examples of data breaches produced by error.
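
To see how quickly small per-item charges accumulate, consider this toy cost model; every price in it is an invented placeholder, not any provider’s actual rate.

```python
# Toy cloud bill estimator. All rates are illustrative assumptions.
VM_MONTHLY = 35.00         # one modest virtual machine, per month
STORAGE_GB_MONTHLY = 0.02  # object storage, per GB per month
EGRESS_PER_GB = 0.09       # data transferred out, per GB

def monthly_bill(vms: int, storage_gb: float, egress_gb: float) -> float:
    return (vms * VM_MONTHLY
            + storage_gb * STORAGE_GB_MONTHLY
            + egress_gb * EGRESS_PER_GB)

# A "small" footprint that quietly grew: a dozen VMs, a few TB of
# storage, and routine data transfer already exceed $600 per month.
print(monthly_bill(vms=12, storage_gb=4000, egress_gb=1500))  # -> 635.0
```

Egress charges in particular are easy to overlook at design time and are a common source of the surprise bills mentioned above.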

Private Cloud

The key potential advantage of the cloud model is its ability to abstract technical complexity. Ideally, programmers are able to create applications that run on hardware without the requirement to manage operating systems (a topic outside the scope of this document). Private cloud enables the staging of the necessary hardware on-premises. A well-known example is OpenStack, which is technically very challenging. Commercial options include Microsoft’s Azure Stack, which extends the Azure technology method to hyper-converged infrastructure (HCI) hosted within an organization’s data center.


Information Technology for Activists – What is To Be Done?

In the recent past, the answer was simple: purchase hardware and software and install and configure it with the help of technically adept staff, volunteers or a mix of the two. In the 1990s and early 2000s it was typical for small to midsize organizations to have a collection of networked personal computers connected to a shared printer within an office. Through the network (known as a local area network, or LAN) these computers were connected to more powerful computers called servers that provided centralized storage and the means through which each individual computer could communicate in a coordinated manner and share resources. Organizations often hosted their own websites, made available to the Internet via connections from telecommunications providers.

Changes in the technology market since the mid-2000s, pushed to increase the market dominance and profits of a small group of firms (primarily Amazon, Microsoft and Google), have limited options even as these changes appear to offer greater convenience. How can these constraints be navigated?

Proposed Methodology and Doctrines

Earlier in this document, I mentioned the importance of being intentional about technology usage. In this section, more detail is provided.

Let’s divide this into high level operational doctrines and build a proposed architecture from that.

First Doctrine: Data Sovereignty

Organizational data should be stored on-premises using dedicated storage systems rather than in a SaaS platform such as Microsoft 365 or Google Workspace.

Second Doctrine: Bias Towards Hybrid

By ‘hybrid’ I am referring to system architectures that utilize a combination of on-premises and ‘cloud’ assets, as sketched below.
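As a small illustration of the hybrid bias (and of the data sovereignty doctrine above), here is a sketch in which the authoritative copy of organizational data stays on-premises while pre-encrypted copies are pushed to rented cloud object storage as off-site backup; the paths, bucket name, and the assumption that files are already encrypted (with a tool such as GPG) are all hypothetical:

```python
# Sketch: primary data lives on-premises; only encrypted copies leave
# the building, landing in rented object storage as off-site backup.
# Paths and bucket name are hypothetical; assumes boto3 and credentials.
import pathlib

import boto3

LOCAL_ROOT = pathlib.Path("/srv/org-data")  # on-premises primary storage
BUCKET = "org-offsite-backup"               # hypothetical bucket

s3 = boto3.client("s3")

for path in LOCAL_ROOT.rglob("*.gpg"):      # only pre-encrypted files
    key = str(path.relative_to(LOCAL_ROOT))
    s3.upload_file(str(path), BUCKET, key)
    print(f"backed up {key}")
```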

Third Doctrine: Bias Towards System Diversity

This might also be called the ‘right tool for the right job’ doctrine. After consideration of relevant factors (cost, technical ability, etc.), an organization may decide to use Microsoft 365 (for example) to provide some services, but other options should be explored in the areas of:

  • Document management and related real time collaboration tooling
  • Online Meeting Platforms
  • Database platforms
  • Email platforms

Commercial platforms offer integration methods that make it possible to create an aggregated solution from disparate tools.
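A sketch of one such integration method: a self-hosted document management system announcing an event to a chat platform through an incoming-webhook URL. The URL and payload shape are hypothetical; each real platform documents its own.

```python
# Sketch: gluing disparate tools together with a webhook. The URL and
# JSON payload are hypothetical; real platforms define their own shapes.
import requests

WEBHOOK_URL = "https://chat.example.org/hooks/HYPOTHETICAL_TOKEN"

def notify(event: str, document: str) -> None:
    """Post a short notification message to the chat platform."""
    requests.post(
        WEBHOOK_URL,
        json={"text": f"{event}: {document}"},
        timeout=10,
    )

notify("Document updated", "2024-budget.odt")
```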

These doctrines can be applied as guidelines for designing an organizational system architecture.

Any resulting design is only one option among many; which is appropriate depends on the aforementioned factors of:

  • Cost
  • Operational complexity
  • Maintenance complexity
  • Security and exposure to vulnerabilities
  • Availability of skilled workers (related to the ability to effectively manage all of the above)

I invite others to add to this document to improve its content and sharpen the argument.


Activist Documents and Resources Regarding Alternative Methods

Counter Cloud Action Plan – The Institute for Technology In the Public Interest

https://titipi.org/pub/Counter_Cloud_Action_Plan.pdf

Measurement Network

“measurement.network provides non-profit network measurement support to academic researchers”

https://measurement.network

Crisis, Ethics, Reliability & a measurement.network – Tobias Fiebig, Max-Planck-Institut für Informatik, Saarbrücken, Germany

https://dl.acm.org/doi/pdf/10.1145/3606464.3606483

Tobias Fiebig (Max-Planck-Institut für Informatik) and Doris Aschenbrenner (Aalen University)

https://dl.acm.org/doi/pdf/10.1145/3538395.3545312

Decentralized Internet Infrastructure Research Group Session Video

“Oh yes! over-preparing for meetings is my jam :)”: The Gendered Experiences of System Administrators

https://dl.acm.org/doi/pdf/10.1145/3579617

Revolutionary Technology: The Political Economy of Left-Wing Digital Infrastructure by Michael Nolan

https://osf.io/hva2y/


References in the Post

RFC

https://en.wikipedia.org/wiki/Request_for_Comments

OpenStack

https://en.wikipedia.org/wiki/OpenStack

Self Hosted Document Management Systems

https://noted.lol/self-hosted-dms-applications/


Teedy

https://teedy.io/?ref=noted.lol#!/

OnlyOffice

https://www.onlyoffice.com/desktop.aspx

Digital Ocean

https://www.digitalocean.com/

IBM 360 Architecture

https://www.researchgate.net/figure/BM-System-360-architectural-layers_fig2_228974972

Client Server Model

https://en.wikipedia.org/wiki/Client–server_model

Mainframe

https://en.wikipedia.org/wiki/Mainframe_computer

Virtual Machine

https://en.wikipedia.org/wiki/Virtual_machine

Server Colocation

https://www.techopedia.com/definition/29868/server-colocation

What is server virtualization

https://www.techtarget.com/searchitoperations/definition/What-is-server-virtualization-The-ultimate-guide

The Interpretation of Tech Dreams – On the EU Commission Post

On September 14, 2023, while touring Twitter the way you might survey the ruins of Pompeii, I came across a series of posts responding to this statement from the EU Commission account:

Mitigating the risk of extinction from AI should be a global priority…

What attracted critical attention was the use of the phrase ‘risk of extinction’, fear of which, as Dr. Timnit Gebru alerts us (among others, mostly women researchers, I can’t help but notice), lies at the heart of what Gebru calls the ‘TESCREAL bundle.’ The acronym, TESCREAL, which brings together the terms Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism, describes an interlocked and related group of ideologies that have one idea in common: techno-utopianism (with a generous helping of eugenics and racialized ideas of what ‘intelligence’ means mixed in to make everything old new again).

Risk of extinction. It sounds dramatic, doesn’t it? The sort of phrase you hear in a Marvel movie: Robert Downey Jr., as Iron Man, stands in front of a green screen and turns to one of his costumed comrades as some yet-to-be-added animated threat approaches, screaming about the risk of extinction if the animated thing isn’t stopped. There are, of course, actual existential risks; asteroids come to mind, and although climate change is certainly a risk to the lives of billions and to the mode of life of the industrial capitalist age upon which we depend, it might not be ‘existential’ strictly speaking (though that’s most likely a distinction without a difference as the seas consume the most celebrated cities and uncelebrated communities).

The idea that what is called ‘AI’ – which, when all the tech industry’s glittering makeup is removed, is revealed plainly to be software, running on computers, warehoused in data centers – poses a risk of extinction requires a special kind of gullibility, self-interest, and, as Dr. Gebru reminds us, supremacist delusions about human intelligence to promote, let alone believe.

***

In the picture posted to X, Ursula von der Leyen, President of the European Commission, is standing at a podium before the assembled group of commissioners, presumably in the EU Commission building (the Berlaymont) in Brussels, a city I’ve visited quite a few times, regretfully. The building itself and the main hall for commissioners, are large and imposing, conveying, in glass, steel and stone, seriousness. Of course, between the idea and the act there usually falls a long shadow. How serious can this group be, I wondered, about a ‘risk of extinction’ from ‘AI’?

***

To find out, I decided to look at the document referenced and trumpeted in the post, the EU Artificial Intelligence Act (there’s a link to the act in the reference section below). My question was simple: is there a reference to ‘risk of extinction’ in this document? The word ‘risk’ appears 71 times. It’s used in passages such as the following, from the overview:

The Commission proposes to establish a technology-neutral definition of AI systems in EU law and to lay down a classification for AI systems with different requirements and obligations tailored on a ‘risk-based approach’. Some AI systems presenting ‘unacceptable’ risks would be prohibited. A wide range of ‘high-risk’ AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market.

The emphasis is on a ‘risk-based approach’, which seems sensible at first look, but there are inevitable problems and objections. Some of the objections come from the corporate sector, claiming, with mind-deadening predictability, that any and all regulation hinders ‘innovation’, a word invoked like an incantation, only not as intriguing or lyrical. More interesting critiques come from those who see risk (though, notably, not existential risk) and who agree something must be done, but who view the EU’s act as not going far enough or going in the wrong direction.

Here is the listing of high-risk activities and areas for algorithmic systems in the EU Artificial Intelligence Act:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

Missing from this list is the risk of extinction, which, putting aside the Act’s flaws, makes sense. Including it would have been as out of place in a consideration of real-world harms as adding a concern about time-traveling bandits. And so we must wonder: why include the phrase ‘risk of extinction’ in a social media post?

***

On March 22, 2023, the modestly named Future of Life Institute – an organization initially funded by the bathroom fixture toting Lord of X himself, Musk (a 10 million USD investment in 2015), whose board is as alabaster as the snows of Antarctica once were, and which is kept afloat by donations from other tech-besotted wealthies – published an open letter titled ‘Pause Giant AI Experiments: An Open Letter.’ This letter was joined by similarly themed statements from OpenAI (‘Planning for AGI and beyond’) and Microsoft (‘Sparks of Artificial General Intelligence: Early experiments with GPT-4’).

Each of these documents has received strong criticism from people such as yours truly and others with more notoriety, and for good reason: they promote the idea that the imprecisely defined Artificial General Intelligence (AGI) is not only possible, but inevitable. Critiques of this idea – whether based on a detailed analysis of mathematics (‘Reclaiming AI as a theoretical tool for cognitive science’) or of computational limits (‘The Computational Limits of Deep Learning’) – have the benefit of being firmly grounded in material reality.

But as Freud might have warned us, we live in a society shaped not only by our understanding of the world as it is but also, in no small part, by dreams and fantasies. White supremacists harbor the self-congratulating fantasy that any random white person (well, man) is an astounding genius when compared to those not in that club. This notion endures, despite innumerable and daily examples to the contrary, because it serves the interests of certain individuals and groups to persist in delusion and impose this delusion on the world. The ‘risk of extinction’ fantasy has caught on because it builds on decades of fiction, like the idea of an American Dream, and adds spice to an otherwise deadly serious and grounded business: controlling the tech industry’s scope of action. Journalists who ignore the actual harms of algorithmic systems rush to write stories about a ‘risk of extinction’, which is far sexier than talking about the software now called ‘AI’ that is used to deny insurance benefits or determine criminal activity.

The European Union’s Artificial Intelligence Act does not explicitly reference ‘existential risk’, but the social media post using this idea is noteworthy. It shows that, lurking in the background, the ideas promoted by the tech industry – by OpenAI and its paymaster Microsoft and innumerable camp followers – have seeped into the thinking of decision makers at the highest levels.

And how could it be otherwise? How flattering to think you’re rescuing the world from Skynet, the fictional, nuclear-missile-tossing system featured in the ‘Terminator’ franchise, rather than trying, at long last, to actually regulate Google.

***

References

European Union

A European approach to artificial intelligence

EU Artificial Intelligence Act

EU Post on X

Critique

Timnit Gebru on Tescreal (YouTube)

The Acronym Behind Our Wildest AI Dreams and Nightmares (on TESCREAL)

The EU still needs to get its AI Act together

Reclaiming AI as a theoretical tool for cognitive science

The Computational Limits of Deep Learning

Boosterism

Pause Giant AI Experiments: An Open Letter

Planning for AGI and beyond

Sparks of Artificial General Intelligence: Early experiments with GPT-4

The Future Circles the Drain

There’s a story we tell ourselves, a lullaby, really, which is that science fiction is a predictor of the terrain of that magical land, always just over the horizon, ‘the future.’ This story is deeply embedded in the consciousness of US’ians (no, I’m not calling people from the US alone ‘Americans’ as if the rest of the Americas is in another hemisphere), even among people who don’t care for stories about spacecraft, robots and malevolent AI (always malevolent, for some reason – a sign of some aspect of US thinking requiring psychoanalytic investigation).

The evidence for this tendency is all around us; every ‘Black Mirror’ episode, for example, is treated as if it’s a prognostication from Nostradamus; the same tired tales of out-of-control AI, murderous machines and derelict space colonies are cycled again and again, each time treated like a bold revelation of Things to Come.

Of course, there is real technological change; we have mobile computer-radio phones with glass screens and ICBMs, things our great-grandparents would have found miraculous for a little while, before the phone bills came due and the nuclear missiles, patiently waiting in their silos, were forgotten to aid sleep. It’s undeniable that we live in a world shaped by applied scientific inquiry and technological modification. These things have a social impact and fashion our political economy, driven by profit motivations. That’s the reality; the idea that there’s a feedback loop between science fiction and what someone will breathlessly shout to be ‘science fact!’ is not entirely bankrupt, but there’s a mustiness to it; it smells like mouldy bread, slathered in butter and presented as still fresh.

All of which brings me to an essay published in The Atlantic, “When Sci-Fi Anticipates Reality.” There’s a laziness to this piece which may not be the fault of its author, Lora Kelley – after all, the topic itself is weary.

Here’s an excerpt:

Reading about this news, [Meta adding legs to avatars] I told my editor—mostly as a joke—that the metaverse users interested in accessing alternative realities and stepping into other lives should consider simply reading a novel. I stand by that cranky opinion, but it also got me thinking about the fact that the metaverse actually owes a lot to the novel. The term metaverse was coined in a 1992 science-fiction novel titled Snow Crash. (The book also helped popularize the term avatar, to refer to digital selves.) And when you start to look for them, you can find links between science fiction and real-world tech all over.

https://www.theatlantic.com/newsletters/archive/2023/08/science-fiction-technology/675206/

The word “cranky” is used, and I admit to feeling a bit cranky myself after reading this attempt to link a product Meta is struggling to make viable (using actual computers requiring power and labor) with a term from a novel as old as someone with credit problems. There’s about as much of a connection between the ‘metaverse’ nightmarishly imagined in Snow Crash and what Meta is capable of as between a piece of paper upon which someone has written the word ‘laser’ and an actual laser.

A bit later in the piece, another favorite of the science-fiction-to-fact genre gets its time in the sun: ‘anticipation’ –

Ross Andersen, an Atlantic writer who covers science and technology, also told me he suspects that “a messy feedback loop” operates between sci-fi and real-world tech. Both technologists and writers who have come up with fresh ideas, he said, “might have simply been responding to the same preexisting human desires: to explore the deep ocean and outer space, or to connect with anyone on Earth instantaneously.” Citing examples such as Jules Verne’s novels and Isaac Asimov’s stories, Ross added that “whether or not science fiction influenced technology, it certainly anticipated a lot of it.”

https://www.theatlantic.com/newsletters/archive/2023/08/science-fiction-technology/675206/

Leaving aside the question of whether there is indeed a “preexisting human desire” to explore outer space (thus far, almost all of our examples of ‘exploration’ have been for exploitation, so one wonders if other desires were being met), there’s an ironic assertion that ‘fresh ideas’ are what’s on offer. Fresh ideas, like a warmed-over Second Life platform based, in name if not experienced reality, on a decades-old novel.

2023 is not the year of bold new visions, brought to life by intrepid scientists and technologists inspired by science fiction (it’s always warmed-over cyberpunk and Asimov, never Stanislaw Lem, I note). It’s the year in which the industry runs, like a rat in flames, from one thing to another – crypto, web3, metaverse, AI, generative AI and chatbots for every task. This isn’t evidence of a ‘messy feedback loop’ but of an emptiness, a void. The bag of tricks is almost empty. Where will the new profits come from?

Perhaps there is a feedback loop after all: from stale idea to stale implementation, all wrapped in a marketing bow and sold as new when it’s as old as a Jules Verne novel.