Not Mutual, But Assured

I came of age – emerging into young adulthood, liberated, it seemed, from teenaged concerns by entering my 20s – during what we were told was the Cold War’s end. A year before the Soviet Union fell in 1991, President George H.W. Bush, in a speech delivered to a joint session of the US Congress on September 11, 1990 (once infamous but rarely discussed today), declared that a ‘new world order’ was being born (Bush went on to repeat this phrase, a leitmotif of his foreign policy, during a speech at the UN in 1991). For those of us who grew up under the shadow of nuclear annihilation, what macabre war planners called Mutual Assured Destruction or MAD, this provided a form of comfort or, at least, the prospect of release from modernity’s prime terror.

Fear inspires a variety of reactions, among them real or pretended ignorance of danger, or its opposite: a desire to know more, to feel some sense not of control (always an illusion) but of awareness. If I was destined to die, vaporized by a luminous ball of atomic fire, at least there’d be a millisecond of knowing the infernal mechanism’s workings. Growing up in the latter stages of the MAD era, I sought that awareness by studying nuclear weapons and nuclear war doctrines (at least, what was made public). If, on a given Sunday, during lunch, you wanted to know about hydrogen bombs and turned to me for an answer, I could take a sip of vodka and give a solid, well-studied non-specialist’s reply.

In the collective imagination, there was a fixation on the scale of devastation. Whether in fictional depictions such as the Terminator films or grimly matter-of-fact Pentagon strategy documents, the total destruction of major cities – millions dead from blast, heat, radiation and fallout – was a common theme. When anyone said ‘nuclear war’, it meant the end of the world. What most of us did not know was that just as rust never sleeps, war planners do not cease working to sharpen their blades. New types of nuclear weapons were in the minds of designers, expressed via mathematics and simulated using computation. There is evidence these abstractions have recently taken solid form to be unleashed on Syria’s tortured soil.


On December 23, 2024, Swiss physicist Hans-Benjamin Braun posted the following to his Twitter account:

Nuclear attack in Tartus (Syria):

Radioactive fingerprint of nuke (Tartus) measured in Cyprus within ~16 hours after the attack.

[Note that the dose rate peak cannot be ascribed to precipitation as higher precipitation occurred on Dec 5 with no discernible radiation increase]

The post, the first in a series that read like urgent dispatches, was based on an analysis of several data points (seismic, radiation and blast effects) and presented a dark conclusion: a new class of nuclear weapon, called Fourth Generation Fusion Nuclear Weapons (FGNW) by US Air Force researcher James Denton in a report titled ‘The Third Nuclear Age: How I Learned to Start Worrying About the Clean Bomb’, had been deployed in Tartus, Syria.

As the days wore on, more evidence appeared. On December 26, Dr. Braun posted this update:

Tartus nuke:

DoD data yield for a 99.9% clean weapon (e.g. “Housatonic”) with 0.3kt yield at 110 miles a (max) fallout of 0.035 mR/h. With the obs. time decay at 15 to 20h this yields a dose rate of 9-12nSv/h.

This agrees with observation of 11.55 +/-1.27 nSv/h (>8 sigma signal). 

A great deal of technical terrain is covered in this brief post, so let’s walk through it.

By “DoD data yield” Dr. Braun is referring to the calculations of nuclear weapon effects derived from a US Department of Defense document, ‘The Effects of Nuclear Weapons’ (the third edition of which was published in 1977). Using these calculations, Braun determined that the Tartus detonation’s characteristics were in line with what was calculated for the last of the 31 test explosions in the 1962 Operation Dominic series, the “Housatonic” detonation of October 30, 1962. That explosion was declared 99.9% ‘clean’, that is, it produced far less radioactive fallout than nuclear explosions usually do. The reduction of radiation, while retaining the other effects of a nuclear explosion, was the result of a design approach called Ripple.
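To get a feel for the arithmetic, here is a minimal back-of-envelope sketch in Python. It assumes the standard t^-1.2 fallout-decay approximation found in ‘The Effects of Nuclear Weapons’, a rough 1 mR/h ≈ 10,000 nSv/h exposure-to-dose conversion, and treats the quoted 0.035 mR/h as an H+1 reference value; none of these assumptions are spelled out in Dr. Braun’s posts, so this is an illustration of the kind of scaling involved, not a reproduction of his calculation.

```python
# Back-of-envelope check of the dose-rate figures quoted above.
# Assumptions (mine, not Dr. Braun's): the t**-1.2 fallout decay
# approximation and a rough 1 mR/h ~= 10,000 nSv/h conversion,
# with the quoted 0.035 mR/h treated as the H+1 reference value.

H1_DOSE_RATE_MR_PER_H = 0.035      # quoted (max) fallout at 110 miles, mR/h
MR_PER_H_TO_NSV_PER_H = 10_000     # approximate exposure-to-dose conversion

def decayed_dose_rate(reference_nsv_h: float, hours: float) -> float:
    """Scale an H+1 reference dose rate to a later time using t**-1.2 decay."""
    return reference_nsv_h * hours ** -1.2

reference = H1_DOSE_RATE_MR_PER_H * MR_PER_H_TO_NSV_PER_H   # ~350 nSv/h

for t in (15.0, 20.0):
    print(f"t = {t:4.0f} h -> {decayed_dose_rate(reference, t):5.1f} nSv/h")

# Significance of the observed peak: mean divided by its quoted uncertainty.
observed, sigma = 11.55, 1.27
print(f"observed / sigma = {observed / sigma:.1f}")   # ~9.1, i.e. a >8 sigma signal
```

With these rough conversions, the reference value of about 350 nSv/h decays to roughly 10 to 14 nSv/h over 15 to 20 hours, the same ballpark as the 9 to 12 nSv/h quoted in the post and the observed 11.55 +/- 1.27 nSv/h.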

The impetus for the Ripple program is described in the document ‘Ripple: An Investigation of the World’s Most Advanced High-Yield Thermonuclear Weapon’. Here is an excerpt:

Operation Redwing and “Clean” Weapons

To help explain the significance of the Ripple concept and the context in which it was devised, we begin with this 1955 letter from then Secretary of Defense Charles E. Wilson:

Until the CASTLE (1954) tests confirmed the feasibility of megaton yields at comparatively small cost, military economy in the atomic weapons field had been largely dominated by blast effects and means of maximizing these (effects) in relation to design and delivery costs. As important as these blast considerations still are, we are now confronted with perhaps even more important considerations in the radioactive by-products field. Stated broadly, the problem appears to be that of maximizing the military effect at the desired time and place, and minimizing such effects where they are not desired. While blast effects are essentially instantaneous and local, the radioactive effects may cover very large areas and may persist for very long periods ranging, in fact, from days in the local fallout effects to many years in atmospheric contamination effects. In other words, radioactive effects force us to bring time in as an additional dimension in dealing with this problem. Moreover, the areas subject to lethal radiation are so large, that in planning the use of these weapons we must carefully weigh the damage to friendly as well as enemy installations.

[…]

“Stated broadly, the problem appears to be that of maximizing the military effect at the desired time and place, and minimizing such effects where they are not desired.”

Unsurprisingly and appropriately, Dr. Braun has faced objections, which, when offered in a spirit of scientific inquiry, he seems to welcome. Social media is an arena where attempts at conversation or debate are as likely to reach people who are uninformed yet confident in their ignorance as a peer who knows what you’re talking about. Among the informed challenges (as opposed to random objections and, potentially, IDF bots) were questions about the radiation levels: shouldn’t they be much higher? Dr. Braun’s answer, based on his understanding of the effects of more advanced designs – the ‘Housatonic’ class – is no: the Tartus detonation represents the first use of a new type of weapon. If he is right, we have entered a more dangerous phase of the nuclear era, one in which the use of nuclear weapons becomes more attractive because the goal of ‘maximizing the military effects while minimizing undesired effects’ has been achieved.


The March 6, 2022 edition of the BBC’s ‘A Point of View’ radio program featured the British novelist and essayist Will Self reading his essay ‘The Return of the Bomb’. Self used the Russian invasion of Ukraine, then only a month old, and statements President Putin made at the time about Russia’s readiness to use nuclear weapons to discuss what Self called the ‘60th year of the Arkhipov age’. Arkhipov, as in Vasily Arkhipov, the Soviet naval officer who, at a crucial moment in the Cuban Missile Crisis of 1962, prevented the firing of an atomic torpedo at US naval vessels, a launch that surely would have led to a full nuclear war and an end to all things. That decision, Self accurately tells us, should have earned Arkhipov a special place of honor (a place he has not been given, certainly not in ‘the West’).

Discussing the contradictions of the MAD doctrine, which we were told maintained a sort of nervous equilibrium, Self stated:

“One of the curious things about the doctrine [of mutually assured destruction] is that it assumes nation states, and even empires, behave as rational, self interested individuals, while the Arkhipov incident tells us that in fact, armageddon is often only averted by actual individuals who will rebel against groupthink. Another paradox of MAD besides its worrying acronym, is that it relies on hostile powers’ motivations and dispensations being transparent to one another. However, what we know from the record, is that both the possibility of nuclear war and its avoidance during the Cuban crisis were a function of ignorance and misreading of intelligence.”

If Dr. Braun is correct and a precision type of nuclear weapon was used in Tartus, Syria (a productionized refinement of what was tested in the ‘Housatonic’ shot 62 years earlier), we have exited the MAD era (perhaps, as Self notes, we were never in it) and entered an age in which nuclear weapons become a regular part of military action.

New world order indeed.


References

George H.W. Bush New World Order Speech

https://bush41library.tamu.edu/archives/public-papers/2217

The Third Nuclear Age

The Effects of Nuclear Weapons

https://www.atomicarchive.com/resources/documents/effects/glasstone-dolan/index.html

Operation Dominic

https://en.wikipedia.org/wiki/Operation_Dominic

Ripple: An Investigation of the World’s Most Advanced High-Yield Thermonuclear Weapon

Will Self: The Return of the Bomb

https://www.bbc.co.uk/programmes/m0014xyd

Video of Tartus Explosion:

https://twitter.com/i/status/1872739489858371867

Dr. Braun’s Bio

https://www.geophysical-forensics.ch/about.html

Vasily Arkhipov

https://en.wikipedia.org/wiki/Vasily_Arkhipov

Cuban Missile Crisis

https://en.wikipedia.org/wiki/Cuban_Missile_Crisis

Pygmalion Displacement – A Review

From the beginning, like a fast-talking shell-game huckster, the computer technology industry has relied on sleight of hand.

First, in the 1950s and 60s, to obscure its military origins and purposes by describing early electronic computers as ‘electronic brains’ fashioned from softly glowing arrays of vacuum tubes. Later, by the 1980s, as the consumer electronics era was launched, the industry presented itself as the silicon-wielding embodiment of ideas of ‘freedom’ and ‘self-expression’ that are at the heart of the Californian Ideology (even as it was fully embedded within systems of command, control and counter-insurgency).

The manic, venture-capital-funded age of corporate ‘AI’ we’re currently subjected to has provided the industry with new opportunities for deception; we are encouraged to believe large language models and other computationally enacted statistical methods are doing the same things as minds. Earlier, I called this deception, but as Lelia A. Erscoi, Annelies Kleinherenbrink, and Olivia Guest describe in their paper, “Pygmalion Displacement: When Humanising AI Dehumanises Women“, a more precise term is displacement.


Uniquely for the field of AI critique, ‘Pygmalion Displacement’ identifies the specific ways women have been theorized and thought about within Western societies and how these ideas have persisted into, and shaped, the computer age.

The paper’s abstract introduces the reader to the authors’ concept:

We use the myth of Pygmalion as a lens to investigate the relationship between women and artificial intelligence (AI). Pygmalion was a legendary king who, repulsed by women, sculpted a statue, which was imbued with life by the goddess Aphrodite. This can be seen as a primordial AI-like myth, wherein humanity creates life-like self-images. The myth prefigures gendered dynamics within AI and between AI and society. Throughout history, the theme of women being replaced by automata or algorithms has been repeated, and continues to repeat in contemporary AI technologies. However, this pattern—that we dub Pygmalion displacement—is under-examined, due to naive excitement or due to an unacknowledged sexist history of the field. As we demonstrate, Pygmalion displacement prefigures heavily, but in an unacknowledged way, in the Turing test: a thought experiment foundational to AI. With women and the feminine being dislocated and erased from and by technology, AI is and has been (presented as) created mainly by privileged men, subserving capitalist patriarchal ends. This poses serious dangers to women and other marginalised people. By tracing the historical and ongoing entwinement of femininity and AI, we aim to understand and start a dialogue on how AI harms women.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 1

Like all great theoretical frameworks (such as Marx’s dialectical and historical materialism), Pygmalion Displacement provides us with a toolkit, the Pygmalion Lens, which can be applied to real-world situations and conditions, sharpening our understanding and revealing what is hiding in plain sight, obscured by ideology.

Pygmalion Lens Table: Pygmalion Displacement: When Humanising AI Dehumanises Women, Pg 14

Apex Delusions

We generally assume that humanity – whether via evolutionary process or divine creation – is at the top of a ladder of being. Many of us love our dogs and cats but believe that because we build rockets and computers and they don’t, we occupy a loftier perch (I recall a Chomsky lecture during which he threw cold water on this vainglory by observing that the creation of nuclear weapons suggested our vaunted intelligence ‘may not be a successful adaptation’).

In the Introduction section titled ‘The man, the myth’, the authors describe another rung on this mythical ladder:

At the top of the proverbial food chain, a majority presence consists of straight white men, those who created, profit from, and work to maintain the capitalist patriarchy and kyriarchy generally (viz. Schüssler Fiorenza 2001). From this perspective, AI can be seen as aiming to seal all humanity’s best qualities in an eternal form, without the setbacks of a mortal human body. It is up for debate, however, what this idealised human(oid) form should look or behave like. When our creation is designed to mimic or be compatible with us, its creator, it will enact, fortify, or extend our pre-existing social values. Therefore, in a field where the vast majority is straight, cisgender, white, and male (Lecher 2019), AI seems less like a promise for all humanity and more like contempt for or even a threat against marginalized communities.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 3

The AI field, dominated by a small cohort, is shaped not only by the idea that humans are superior to the rest of nature but also by the idea that certain humans are superior to others. The imagined artificial general intelligence (AGI) is not simply a thinking machine but a god-like, machine version of the type of person seen as sitting at the apex of humanity.

Further on in the introduction, the authors describe how these notions impact women specifically:

Our focus herein is on women in particular, who dwell within the limits of what is expected, having to adhere to standards of ideal and colonial femininity to be considered adequate and then sexualized and deemed incompetent for conforming to them (Lugones 2007). Attitudes towards women and the feminised, especially in the field of technology, have developed over a timeline of gender bias and systemic oppression and rejection. From myths, to hidden careers and stolen achievements (Allen 2017; Evans 2020), to feminized machines, and finally to current AI applications, this paper aims to shine a light on how we currently develop certain AI technologies, in the hope that such harms can be better recognized and curtailed in the future.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 3

On Twitter, as in our walkabout lives, we see and experience these harms in action as the contributions of women in science and technology (and much else besides) are dismissed or attributed to men. I always imagine an army of Jordan Peterson-esque pontificators, but alas, these pirates come in all shapes and sizes.

From Fiction to History and Back Again

Brilliantly, the authors create parallel timelines – one fictional, the other real – to illustrate how displacement has worked in cultural production and material outcomes.

In the fictional timeline, which includes stories ranging from ‘The Sandman’ (1816) to 2018’s PS4 and PC sci-fi adventure game, Detroit: Become Human, we are shown how displacement is woven into our cultural fabric.

Consider this passage on the 2013 film ‘Her’, which depicts a relationship (of sorts) between Theodore, a lonely writer played by Joaquin Phoenix, and an operating system named Samantha, voiced by Scarlett Johansson:

…it is interesting to note that unlike her fictional predecessors, Samantha has no physical form — what makes her appear female is only her name and how she sounds (voiced by Scarlett Johansson), and arguably (that is, from a stereotypical, patriarchal perspective) her cheerful and flirty performance of secretarial, emotional, and sexual labor. In relation to this, Bergen (2016) argues that virtual personal assistants like Siri and Alexa are not perceived as potentially dangerous AI that might turn on us because, in addition to being so integrated into our lives, their embodied form does not evoke unruliness or untrustworthiness: “Unlike Pygmalion’s Galatea or Lang’s Maria, today’s virtual assistants have no body; they consist of calm, rational and cool disembodied voices […] devoid of that leaky, emotive quality that we have come to associate with the feminine body” (p. 101). In such a disembodied state, femininity appears much less duplicitous—however, in Bergen’s analysis, this is deceptive: just as real secretaries and housekeepers are often an invisible presence in the house owing to their femininity (and other marginalized identity markers), people do not take virtual assistants seriously enough to be bothered by their access to private information.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 8

Fictional depictions are juxtaposed with real examples of displacement, such as the often-told (in computer history circles) but not fully appreciated story of the ELIZA and Pedro speech-generation systems:

Non-human speech generation has a long history, harking back to systems such as Pedro the voder (voice operating demonstration) in the 1930s (Eschner 2017). Pedro was operated solely by women, despite the fact the name adopted is stereotypically male. The first modern chatbot, however, is often considered to be ELIZA, created by Joseph Weizenbaum in 1964 to simulate a therapist that resulted in users believing a real person was behind the automated responses (Dillon 2020; Hirshbein 2004). The mechanism behind ELIZA was simple pattern matching, but it managed to fool people enough to be considered to have passed the Turing test. ELIZA was designed to learn from its interactions (Weizenbaum 1966), named precisely for this reason. In his paper introducing the chatbot, Weizenbaum (1966) invokes the Pygmalion myth: “Like the Eliza of Pygmalion fame, it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright.” (p. 36) Yet ELIZA the chatbot had the opposite effect than Weizenbaum intended, further fuelling a narrative of human-inspired machines.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 20
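The ‘simple pattern matching’ mentioned in that passage is worth making concrete. Below is a toy sketch in Python; it is not Weizenbaum’s original program (which was written in MAD-SLIP and driven by a script of ranked decomposition and reassembly rules), only an illustration of the keyword-spotting and pronoun-reflection trick that produced the illusion of a listening therapist.

```python
import re

# A toy illustration of ELIZA-style keyword matching and pronoun reflection.
# This is NOT Weizenbaum's MAD-SLIP program, only a sketch of the trick:
# spot a keyword pattern, flip the pronouns, echo the text back as a question.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"(.*)", re.I), "Please go on."),   # fallback, like ELIZA's NONE rule
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads back naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return a canned response built from the first matching keyword rule."""
    for pattern, template in RULES:
        match = pattern.match(utterance.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I am unhappy about my boyfriend"))
# -> How long have you been unhappy about your boyfriend?
print(respond("Men are all alike"))
# -> Please go on.
```

The shallowness of the mechanism is the point: a handful of pattern rules and a pronoun swap were enough to convince users that a person was responding.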

Later in this section, quoting from a work by Sarah Dillon on ‘The Eliza Effect’, we’re told about Weizenbaum’s contextual gendering of ELIZA:

Weizenbaum genders the program as female when it is under the control of the male computer programmer, but it is gendered as male when it interacts with a [female] user. Note in particular that in the example conversation given [in Weizenbaum’s Computer Power and Human Reason, 1976], this is a disempowered female user, at the mercy of her boyfriend’s wishes and her father’s bullying, defined by and in her relationship to the men whom, she declares, ‘are all alike.’ Weizenbaum’s choice of names is therefore adapted and adjusted to ensure that the passive, weaker or more subservient position at any one time is always gendered as female, whether that is the female-gendered computer program controlled by its designers, or the female-gendered human woman controlled by the patriarchal figures in her life.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 21

This passage was particularly interesting to me because I’ve long admired Weizenbaum’s thoughtful dissection of his own work. I learned from his critique of computation as an ideology but missed his Pygmalion framing; the Pygmalion Lens enables a new way of seeing assumptions and ideas that are taken for granted like the air we breathe.


There is much more to discuss, such as an eye-opening investigation into the over-celebrated Turing Test (today more marketing gimmick than assessment technique), which began as a thought experiment built around a guessing game about gender, a test which (astoundingly) “…required a real woman […] to prove her own humanity in competition with the computer.”

This is a marvellous and important paper which presents more than a theory: it gives us a toolkit and a method for changing the way we think about the field of computation (and its loud ‘AI’ partisans) under patriarchal capitalism.