Pygmalion Displacement – A Review

From the beginning, like a fast-talking shell-game huckster, the computer technology industry has relied on sleight of hand.

First, in the 1950s and 60s, it obscured its military origins and purposes by describing early electronic computers as ‘electronic brains’ fashioned from softly glowing arrays of vacuum tubes. Later, by the 1980s, as the consumer electronics era was launched, the industry presented itself as the silicon-wielding embodiment of the ideas of ‘freedom’ and ‘self-expression’ at the heart of the Californian Ideology (even as it remained fully embedded within systems of command, control, and counter-insurgency).

The manic, venture-capital-funded age of corporate ‘AI’ we’re currently subjected to has provided the industry with new opportunities for deception; we are encouraged to believe that large language models and other computationally enacted statistical methods are doing the same things as minds. Earlier, I called this deception, but as Lelia A. Erscoi, Annelies Kleinherenbrink, and Olivia Guest describe in their paper, “Pygmalion Displacement: When Humanising AI Dehumanises Women”, a more precise term is displacement.


Uniquely for the field of AI critique, ‘Pygmalion Displacement’ identifies the specific ways women have been theorized and thought about within Western societies, and how these ideas have persisted into, and shaped, the computer age.

The paper’s abstract introduces the reader to the authors’ concept:

We use the myth of Pygmalion as a lens to investigate the relationship between women and artificial intelligence (AI). Pygmalion was a legendary king who, repulsed by women, sculpted a statue, which was imbued with life by the goddess Aphrodite. This can be seen as a primordial AI-like myth, wherein humanity creates life-like self-images. The myth prefigures gendered dynamics within AI and between AI and society. Throughout history, the theme of women being replaced by automata or algorithms has been repeated, and continues to repeat in contemporary AI technologies. However, this pattern—that we dub Pygmalion displacement—is under-examined, due to naive excitement or due to an unacknowledged sexist history of the field. As we demonstrate, Pygmalion displacement prefigures heavily, but in an unacknowledged way, in the Turing test: a thought experiment foundational to AI. With women and the feminine being dislocated and erased from and by technology, AI is and has been (presented as) created mainly by privileged men, subserving capitalist patriarchal ends. This poses serious dangers to women and other marginalised people. By tracing the historical and ongoing entwinement of femininity and AI, we aim to understand and start a dialogue on how AI harms women.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 1

Like all great theoretical frameworks (such as Marx’s dialectical and historical materialism), Pygmalion Displacement provides us with a toolkit, the Pygmalion Lens, which can be applied to real world situations and conditions, sharpening our understanding and revealing what is hiding in plain sight, obscured by ideology.

Pygmalion Lens Table: Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 14

Apex Delusions

We generally assume that humanity – whether via evolutionary process or divine creation – is at the top of a ladder of being. Many of us love our dogs and cats but believe that because we build rockets and computers and they don’t, we occupy a loftier perch (I recall a Chomsky lecture during which he threw cold water on this vainglory by observing that the creation of nuclear weapons suggested our vaunted intelligence ‘may not be a successful adaptation’).

In the Introduction section titled ‘The man, the myth,’ the authors describe another rung on this mythical ladder:

At the top of the proverbial food chain, a majority presence consists of straight white men, those who created, profit from, and work to maintain the capitalist patriarchy and kyriarchy generally (viz. Schüssler Fiorenza 2001). From this perspective, AI can be seen as aiming to seal all humanity’s best qualities in an eternal form, without the setbacks of a mortal human body. It is up for debate, however, what this idealised human(oid) form should look or behave like. When our creation is designed to mimic or be compatible with us, its creator, it will enact, fortify, or extend our pre-existing social values. Therefore, in a field where the vast majority is straight, cisgender, white, and male (Lecher 2019), AI seems less like a promise for all humanity and more like contempt for or even a threat against marginalized communities.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 3

The AI field, dominated by a small cohort, is shaped not only by the idea that humans are superior to the rest of nature but also that certain humans are superior to others. The imagined artificial general intelligence (AGI) is not simply a thinking machine, but a god-like, machine version of the type of person seen as occupying the apex of humanity.

Further on in the introduction, the authors describe how these notions impact women specifically:

Our focus herein is on women in particular, who dwell within the limits of what is expected, having to adhere to standards of ideal and colonial femininity to be considered adequate and then sexualized and deemed incompetent for conforming to them (Lugones 2007). Attitudes towards women and the feminised, especially in the field of technology, have developed over a timeline of gender bias and systemic oppression and rejection. From myths, to hidden careers and stolen achievements (Allen 2017; Evans 2020), to feminized machines, and finally to current AI applications, this paper aims to shine a light on how we currently develop certain AI technologies, in the hope that such harms can be better recognized and curtailed in the future.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 3

On Twitter, as in our walkabout lives, we see and experience these harms in action as the contributions of women in science and technology (and much else besides) are dismissed or attributed to men. I always imagine an army of Jordan Peterson-esque pontificators, but alas, these pirates come in all shapes and sizes.

From Fiction to History and Back Again

Brilliantly, the authors create parallel timelines – one fictional, the other real – to illustrate how displacement has worked in cultural production and material outcomes.

In the fictional timeline, which includes stories ranging from ‘The Sandman’ (1816) to the 2018 PS4 and PC sci-fi adventure game Detroit: Become Human, we are shown how displacement is woven into our cultural fabric.

Consider this passage on the 2013 film ‘Her’, which depicts a relationship (of sorts) between Theodore, a lonely writer played by Joaquin Phoenix, and an operating system named Samantha, voiced by Scarlett Johansson:

…it is interesting to note that unlike her fictional predecessors, Samantha has no physical form — what makes her appear female is only her name and how she sounds (voiced by Scarlett Johansson), and arguably (that is, from a stereotypical, patriarchal perspective) her cheerful and flirty performance of secretarial, emotional, and sexual labor. In relation to this, Bergen (2016) argues that virtual personal assistants like Siri and Alexa are not perceived as potentially dangerous AI that might turn on us because, in addition to being so integrated into our lives, their embodied form does not evoke unruliness or untrustworthiness: “Unlike Pygmalion’s Galatea or Lang’s Maria, today’s virtual assistants have no body; they consist of calm, rational and cool disembodied voices […] devoid of that leaky, emotive quality that we have come to associate with the feminine body” (p. 101). In such a disembodied state, femininity appears much less duplicitous—however, in Bergen’s analysis, this is deceptive: just as real secretaries and housekeepers are often an invisible presence in the house owing to their femininity (and other marginalized identity markers), people do not take virtual assistants seriously enough to be bothered by their access to private information.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 8

Fictional depictions are juxtaposed with real examples of displacement, such as the often-told (in computer history circles) but not fully appreciated story of Pedro the voder and the ELIZA chatbot:

Non-human speech generation has a long history, harking back to systems such as Pedro the voder (voice operating demonstration) in the 1930s (Eschner 2017). Pedro was operated solely by women, despite the fact the name adopted is stereotypically male. The first modern chatbot, however, is often considered to be ELIZA, created by Joseph Weizenbaum in 1964 to simulate a therapist that resulted in users believing a real person was behind the automated responses (Dillon 2020; Hirshbein 2004). The mechanism behind ELIZA was simple pattern matching, but it managed to fool people enough to be considered to have passed the Turing test. ELIZA was designed to learn from its interactions (Weizenbaum 1966), named precisely for this reason. In his paper introducing the chatbot, Weizenbaum (1966) invokes the Pygmalion myth: “Like the Eliza of Pygmalion fame, it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright.” (p. 36) Yet ELIZA the chatbot had the opposite effect than Weizenbaum intended, further fuelling a narrative of human-inspired machines.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 20
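
To make the quoted description concrete, here is a minimal, hypothetical Python sketch of the kind of keyword-and-template pattern matching ELIZA used. The rules below are invented for illustration and omit features of Weizenbaum’s actual script, such as keyword ranking and pronoun reflection:

```python
import re

# Invented ELIZA-style rules: a regex keyword pattern paired with a
# response template that turns the captured fragment into a question.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT_RESPONSE = "Please go on."  # neutral fallback when nothing matches

def respond(utterance: str) -> str:
    """Return the first rule whose pattern matches, filled in with the
    user's own words; otherwise fall back to a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT_RESPONSE

print(respond("I am unhappy"))        # -> How long have you been unhappy?
print(respond("my boyfriend made me come here"))  # -> Tell me more about your boyfriend.
print(respond("Hello"))               # -> Please go on.
```

Even rules this crude can produce responses that feel attentive, which helps explain how users came to believe a real person was behind ELIZA’s replies.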

Later in this section, quoting from a work by Sarah Dillon on ‘The Eliza Effect,’ we’re told about Weizenbaum’s contextual gendering of ELIZA:

Weizenbaum genders the program as female when it is under the control of the male computer programmer, but it is gendered as male when it interacts with a [female] user. Note in particular that in the example conversation given [in Weizenbaum’s Computer Power and Human Reason, 1976], this is a disempowered female user, at the mercy of her boyfriend’s wishes and her father’s bullying, defined by and in her relationship to the men whom, she declares, ‘are all alike.’ Weizenbaum’s choice of names is therefore adapted and adjusted to ensure that the passive, weaker or more subservient position at any one time is always gendered as female, whether that is the female-gendered computer program controlled by its designers, or the female-gendered human woman controlled by the patriarchal figures in her life.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 21

This passage was particularly interesting to me because I’ve long admired Weizenbaum’s thoughtful dissection of his own work. I learned from his critique of computation as an ideology but missed his Pygmalion framing; the Pygmalion Lens enables a new way of seeing assumptions and ideas we take for granted, like the air we breathe.


There is much more to discuss, such as an eye-opening investigation into the over-celebrated Turing test (today more marketing gimmick than assessment technique), which began as a thought experiment built around a gender guessing game, a test which (astoundingly) “…required a real woman […] to prove her own humanity in competition with the computer.”

This is a marvellous and important paper which presents more than a theory: it gives us a toolkit and method for changing the way we think about the field of computation (and its loud ‘AI’ partisans) under patriarchal capitalism.

Manifesto on Algorithmic Sabotage: A Review

On 1 April 2024, Twitter user mr.w0bb1t posted the following to their feed:

The post points readers to the document, MANIFESTO ON “ALGORITHMIC SABOTAGE” created by the Algorithmic Sabotage Research Group (ASRG) and described as follows:

[the Manifesto] presents a preliminary version of 10 statements on the principles and practice of algorithmic sabotage ..

… The #manifesto is designed to be developed and will be regularly updated, please consider it under the GNU Free Documentation License v1.3 ..

The struggle for “algorithmic sabotage” is everywhere in the algorithmic factory. Full frontal resistance against digital oppression & authoritarianism ..

Internationalist solidarity & confidence in popular self-determination, in the only force that can lead the struggle to the end ..

MANIFESTO ON “ALGORITHMIC SABOTAGE” – https://tldr.nettime.org/@asrg/112195008380261222

Tech industry critique is fixated on resistance to false narratives: debunking as praxis. This is understandable; the industry’s propaganda campaign is relentless and successful, requiring an informed and equally relentless response.

This traps us in a feedback loop of call and response in which OpenAI (for example) makes absurd, anti-worker, and supremacist claims about the capabilities of the systems it’s selling, prompting researchers and technologists who know these claims to be lies to spend precious time ‘debunking.’


The ‘Manifesto’ consists of ten statements, numbered 0 through 9. In what follows, I’ll list each and offer some thoughts based on my experience of the political economy of the technology industry (i.e., how computation is used in large-scale private and public environments, and for what purposes), along with thoughts about resistance.

Statement 0. The “Algorithmic Sabotage” is a figure of techno-disobedience for the militancy that’s absent from technology critique.

Comment: This is undeniably true. Among technologists as a class of workers, and tech industry analysts as a loosely organized grouping, there is very little said or apparently thought about what “techno-disobedience” might look like. One form resistance might take immediately occurs to me: a complete rejection of the idea of obsolescence, and the adoption of, if not a computational permaculture, an attitude of long computation.

Statement 1. Rather than some atavistic dislike of technology, “Algorithmic Sabotage” can be read as a form of counter-power that emerges from the strength of the community that wields it.

Comment: “Counter-power,” something the historic Luddites – who were not ‘anti-technology’ (whatever that means) – understood, is a marvellous turn of phrase. An example might be the use of the concepts that hyper-scale computation rentiers such as Microsoft and Amazon call ‘cloud computing’ for our own purposes. Imagine a shared computational resource for a community, built on a ‘long computing’ infrastructure that rejects obsolescence and offers the resources a community might need for telecommunications, data analysis as a decision aid, and other benefits.

Statement 2. The “Algorithmic Sabotage” cuts through the capitalist ideological framework that thrives on misery by performing a labour of subversion in the present, dismantling contemporary forms of algorithmic domination and reclaiming spaces for ethical action from generalized thoughtlessness and automaticity.

Comment: We see examples of “contemporary forms of algorithmic domination” and “generalized thoughtlessness” in what is called ‘AI,’ particularly the push to insert large language models into every nook and cranny. Products such as Microsoft Copilot serve no purpose aside from profit maximization. This is thoughtlessness manifested. Resistance means rejecting the idea that there is any use for such systems and proposing an alternative view; for example, the creation of knowledge retrieval techniques built on attribution and open access to information, as in the sketch below.
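
To make that alternative slightly more concrete, here is a minimal, hypothetical Python sketch of attribution-first retrieval; the Document structure, the toy corpus, and the search function are all invented for this example and do not come from the Manifesto or any existing system:

```python
from dataclasses import dataclass

@dataclass
class Document:
    source: str   # provenance: URL, citation, or archive identifier
    license: str  # the terms under which the text may be shared
    text: str

# A toy, openly licensed corpus; a real system would index far more.
CORPUS = [
    Document("https://example.org/essay", "CC-BY-4.0",
             "Computation is a terrain of struggle."),
    Document("https://example.org/notes", "GFDL-1.3",
             "Obsolescence is a choice, not a law of nature."),
]

def search(query: str) -> list[Document]:
    """Return matching passages with their sources still attached;
    attribution is never stripped from a result."""
    terms = query.lower().split()
    return [d for d in CORPUS if any(t in d.text.lower() for t in terms)]

for doc in search("struggle"):
    # Every result carries its provenance rather than posing as the
    # system's own knowledge.
    print(f"{doc.text} [source: {doc.source}, license: {doc.license}]")
```

The design choice worth noticing is that provenance and license travel with every result; an answer that cannot cite its source is simply not returned.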

Statement 3. The “Algorithmic Sabotage” is an action-oriented commitment to solidarity that precedes any system of social, legal or algorithmic classification.

Comment: Alongside other capitalist sectors, the tech industry creates and benefits from alienation. There was a moment in the 1980s and 90s when technology workers could have achieved a class consciousness, understanding the critical importance of their collective work to the functioning of society. This was intercepted by the introduction of an idea of atomized professionalism that successfully created a perceptual gulf between tech workers and workers in other sectors, and also between tech workers and the people who use the systems they craft and manage, a group reduced to the label ‘users.’ Arrogance in tech industry circles is common, preventing solidarity within the group and with others. Resistance might start with the rejection of the false elevation of ‘professionalism’ (which has been successfully used in other sectors, such as academia, to neutralize solidarity).

Statement 4. The “Algorithmic Sabotage” is a part of a structural renewal of a wider movement for social autonomy that opposes the predations of hegemonic technology through wildcat direct action, consciously aligned itself with ideals of social justice and egalitarianism.

Comment: There is a link between statement 3, which calls for a commitment to solidarity, and statement 4, which imagines wildcat action against hegemonic technology. Solidarity is the linking idea. Is it possible to build such solidarity within existing tech industry circles? The signs are not good. Resistance might come from distributing expertise outside of the usual circles. We see examples of this in indigenous and diaspora communities, in which there are often tech adepts able and willing to act as interpreters, bridges, troubleshooters, and teachers.

Statement 5. The “Algorithmic Sabotage” radically reworks our technopolitical arrangements away from the structural injustices, supremacist perspectives and necropolitical power layered into the “algorithmic empire”, highlighting its materiality and consequences in terms of both carbon emissions and the centralisation of control.

Comment: This statement uses the debunking framework as its baseline – for example, the critique of ‘cloud’ must be grounded in an understanding of the materiality of computation: mineral extraction and processing (and the associated labor, environmental, and societal impacts), as well as the necropolitical, command-and-control nature of applied computation. Resistance here might include an insistence on materiality (including open education about the computational supply chain) and a robust rejection of computation as a means of control and obscured decision making.

I’ll list the next two statements together because I think they form a theme:

Statement 6. The “Algorithmic Sabotage” refuses algorithmic humiliation for power and profit maximisation, focusing on activities of mutual aid and solidarity.

Statement 7. The first step of techno-politics is not technological but political. Radical feminist, anti-fascist and decolonial perspectives are a political challenge to “Algorithmic Sabotage”, placing matters of interdependence and collective care against reductive optimisations of the “algorithmic empire”.

Comment: Ideas are hegemonic. We accept, without question, Meta/Facebook’s surveillance-based business model as the cost of entry to a platform countless millions depend on to maintain far-flung connections (and sometimes even local ones, in our age of forced disconnection and busy-ness). The ‘refusal to accept humiliation’ would mean recognizing algorithmic exploitation and consciously rejecting it. Resistance here means not assuming good intent and staying alert, but also choosing ‘collective care.’ This is the opposite of the war of all against all created by social media platforms, whose system behaviors are manipulated via attention-directing methods.

The final two statements can also be treated as parts of a whole:

Statement 8. The “Algorithmic Sabotage” struggles against algorithmic violence and fascistic solutionism, focusing on artistic-activist resistances that can express a different mentality, a collective “counter-intelligence”.

Statement 9. The “Algorithmic Sabotage” is an emancipatory defence of the need for community constraint of harmful technology, a struggle against the abstract segregation “above” and “below” the algorithm.

Comment: Statement 8 conveys an important insight: what we accept, despite our complaints, as normal system behavior on platforms such as Twitter is indeed “algorithmic violence.” When we use these platforms, finding friends and comrades (if we’re fortunate), we are moving through enemy terrain, constantly engaged in a struggle against harm. I’m not certain, but I imagine that by “fascistic solutionism” the ASRG mean the proposing of control to manage control – that is, the sort of ‘solution’ we see as the US Congress claims to address issues with TikTok via nationalistic, and thereby fascistic, appeals and legislation. The ‘Manifesto’ encourages us to go beyond acceptance, above or below ‘the algorithm,’ and to build a path that rejects the tyranny that creates and nurtures these systems.

Beyond Command and Control

In his book ‘Surveillance Valley’ (published in 2018), journalist Yasha Levine traces the Internet’s use as a population control tool back to its start as an ARPA project for the military. Again and again, detailing efforts such as Project Camelot and many others besides, Levine describes the technology platforms we tend to see as essentially benign but off course (and therefore reformable) as, in fact, a counter-insurgency initiative by the US government and its corporate partners that persists to this day. The ‘insurgents’, in this situation, are the population as a whole.

Viewed this way, it’s impossible to see the current digital computation regime as anything but a terrain of struggle. The MANIFESTO ON “ALGORITHMIC SABOTAGE” is an effort to help us get our heads right. From the moment of digital computation’s inception, war was declared, but most of us don’t yet recognize it. In the course of this war, much has been lost, including alternative visions of algorithmic use. The MANIFESTO ON “ALGORITHMIC SABOTAGE” calls on us to assume the persona (where resistance starts) of people who know they’re under attack and think and plan accordingly.

It’s an incomplete but vital response to the debunking perspective, which assumes a new world can be fashioned from ideas that are inherently anti-human.