Pygmalion Displacement – A Review

From the beginning, like a fast-talking shell game huckster, the computer technology industry has relied on sleight of hand.

First, in the 1950s and 60s, it obscured its military origins and purposes by describing early electronic computers as ‘electronic brains’ fashioned from softly glowing arrays of vacuum tubes. Later, by the 1980s, as the consumer electronics era was launched, the industry presented itself as the silicon-wielding embodiment of the ideas of ‘freedom’ and ‘self-expression’ at the heart of the Californian Ideology (even as it was fully embedded within systems of command, control and counter-insurgency).

The manic, venture-capital-funded age of corporate ‘AI’ we’re currently subjected to has provided the industry with new opportunities for deception; we are encouraged to believe that large language models and other computationally enacted statistical methods are doing the same things as minds. Earlier, I called this deception but, as Lelia A. Erscoi, Annelies Kleinherenbrink, and Olivia Guest describe in their paper, “Pygmalion Displacement: When Humanising AI Dehumanises Women”, a more precise term is displacement.


Uniquely for the field of AI critique, ‘Pygmalion Displacement’ identifies the specific ways women have been theorized and thought about within Western societies, and how these ideas have persisted into, and shaped, the computer age.

The paper’s abstract introduces the reader to the authors’ concept:

We use the myth of Pygmalion as a lens to investigate the relationship between women and artificial intelligence (AI). Pygmalion was a legendary king who, repulsed by women, sculpted a statue, which was imbued with life by the goddess Aphrodite. This can be seen as a primordial AI-like myth, wherein humanity creates life-like self-images. The myth prefigures gendered dynamics within AI and between AI and society. Throughout history, the theme of women being replaced by automata or algorithms has been repeated, and continues to repeat in contemporary AI technologies. However, this pattern—that we dub Pygmalion displacement—is under-examined, due to naive excitement or due to an unacknowledged sexist history of the field. As we demonstrate, Pygmalion displacement prefigures heavily, but in an unacknowledged way, in the Turing test: a thought experiment foundational to AI. With women and the feminine being dislocated and erased from and by technology, AI is and has been (presented as) created mainly by privileged men, subserving capitalist patriarchal ends. This poses serious dangers to women and other marginalised people. By tracing the historical and ongoing entwinement of femininity and AI, we aim to understand and start a dialogue on how AI harms women.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 1

Like all great theoretical frameworks (such as Marx’s dialectical and historical materialism), Pygmalion Displacement provides us with a toolkit, the Pygmalion Lens, which can be applied to real world situations and conditions, sharpening our understanding and revealing what is hiding in plain sight, obscured by ideology.

Pygmalion Lens Table: Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 14

Apex Delusions

We generally assume that humanity – whether via evolutionary process or divine creation – is at the top of a ladder of being. Many of us love our dogs and cats but believe that because we build rockets and computers and they don’t, we occupy a loftier perch (I recall a Chomsky lecture during which he threw cold water on this vainglory by observing that the creation of nuclear weapons suggested our vaunted intelligence ‘may not be a successful adaptation’).

In the Introduction section titled ‘The man, the myth,’ the authors describe another rung on this mythical ladder:

At the top of the proverbial food chain, a majority presence consists of straight white men, those who created, profit from, and work to maintain the capitalist patriarchy and kyriarchy generally (viz. Schüssler Fiorenza 2001). From this perspective, AI can be seen as aiming to seal all humanity’s best qualities in an eternal form, without the setbacks of a mortal human body. It is up for debate, however, what this idealised human(oid) form should look or behave like. When our creation is designed to mimic or be compatible with us, its creator, it will enact, fortify, or extend our pre-existing social values. Therefore, in a field where the vast majority is straight, cisgender, white, and male (Lecher 2019), AI seems less like a promise for all humanity and more like contempt for or even a threat against marginalized communities.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 3

The AI field, dominated by a small cohort, is shaped not only by the idea that humans are superior to the rest of nature but also by the idea that certain humans are superior to others. The imagined artificial general intelligence (AGI) is not simply a thinking machine but a god-like, machine version of the type of person seen as occupying the apex of humanity.

Further on in the introduction, the authors describe how these notions impact women specifically:

Our focus herein is on women in particular, who dwell within the limits of what is expected, having to adhere to standards of ideal and colonial femininity to be considered adequate and then sexualized and deemed incompetent for conforming to them (Lugones 2007). Attitudes towards women and the feminised, especially in the field of technology, have developed over a timeline of gender bias and systemic oppression and rejection. From myths, to hidden careers and stolen achievements (Allen 2017; Evans 2020), to feminized machines, and finally to current AI applications, this paper aims to shine a light on how we currently develop certain AI technologies, in the hope that such harms can be better recognized and curtailed in the future.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 3

On Twitter, as in our walkabout lives, we see and experience these harms in action as the contributions of women in science and technology (and much else besides) are dismissed or attributed to men. I always imagine an army of Jordan Peterson-esque pontificators but, alas, these pirates come in all shapes and sizes.

From Fiction to History and Back Again

Brilliantly, the authors create parallel timelines – one fictional, the other real – to illustrate how displacement has worked in cultural production and material outcomes.

In the fictional timeline, which includes stories ranging from ‘The Sandman’ (1816) to 2018’s PS4 and PC sci-fi adventure game, Detroit: Become Human, we are shown how displacement is woven into our cultural fabric.

Consider this passage on the 2013 film ‘Her’, which depicts a relationship (of sorts) between Theodore, a lonely writer played by Joaquin Phoenix, and an operating system named Samantha, voiced by Scarlett Johansson:

…it is interesting to note that unlike her fictional predecessors, Samantha has no physical form — what makes her appear female is only her name and how she sounds (voiced by Scarlett Johansson), and arguably (that is, from a stereotypical, patriarchal perspective) her cheerful and flirty performance of secretarial, emotional, and sexual labor. In relation to this, Bergen (2016) argues that virtual personal assistants like Siri and Alexa are not perceived as potentially dangerous AI that might turn on us because, in addition to being so integrated into our lives, their embodied form does not evoke unruliness or untrustworthiness: “Unlike Pygmalion’s Galatea or Lang’s Maria, today’s virtual assistants have no body; they consist of calm, rational and cool disembodied voices […] devoid of that leaky, emotive quality that we have come to associate with the feminine body” (p. 101). In such a disembodied state, femininity appears much less duplicitous—however, in Bergen’s analysis, this is deceptive: just as real secretaries and housekeepers are often an invisible presence in the house owing to their femininity (and other marginalized identity markers), people do not take virtual assistants seriously enough to be bothered by their access to private information.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 8

Fictional depictions are juxtaposed with real examples of displacement, such as the often-told (in computer history circles) but not fully appreciated story of the ELIZA and Pedro speech generation systems:

Non-human speech generation has a long history, harking back to systems such as Pedro the voder (voice operating demonstration) in the 1930s (Eschner 2017). Pedro was operated solely by women, despite the fact the name adopted is stereotypically male. The first modern chatbot, however, is often considered to be ELIZA, created by Joseph Weizenbaum in 1964 to simulate a therapist that resulted in users believing a real person was behind the automated responses (Dillon 2020; Hirshbein 2004). The mechanism behind ELIZA was simple pattern matching, but it managed to fool people enough to be considered to have passed the Turing test. ELIZA was designed to learn from its interactions (Weizenbaum 1966), named precisely for this reason. In his paper introducing the chatbot, Weizenbaum (1966) invokes the Pygmalion myth: “Like the Eliza of Pygmalion fame, it can be made to appear even more civilized, the relation of appearance to reality, however, remaining in the domain of the playwright.” (p. 36) Yet ELIZA the chatbot had the opposite effect than Weizenbaum intended, further fuelling a narrative of human-inspired machines.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 20
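The ‘simple pattern matching’ mentioned in the quoted passage can be made concrete. Below is a minimal sketch of an ELIZA-style exchange; the patterns and canned responses are my own invention for illustration (Weizenbaum’s original used a more elaborate keyword-ranking script, the famous DOCTOR), but the underlying trick is the same:

```python
# A minimal ELIZA-style exchange: match a keyword pattern in the user's
# input, then reflect their own words back as a question. The rules are
# invented for illustration; the original DOCTOR script was richer, but
# the core mechanism was this kind of pattern matching and reflection.
import re

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Your {0}? Please go on."),
]

def respond(utterance: str) -> str:
    """Return the first matching rule's reflection, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # fallback when nothing matches

print(respond("I am unhappy"))  # -> Why do you say you are unhappy?
```

A handful of such reflection rules, met by a user’s willingness to believe, was enough to produce the effect the authors describe: appearance standing in for reality.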

Later in this section, quoting from a work by Sarah Dillon on ‘The Eliza Effect’, we’re told about Weizenbaum’s contextual gendering of ELIZA:

Weizenbaum genders the program as female when it is under the control of the male computer programmer, but it is gendered as male when it interacts with a [female] user. Note in particular that in the example conversation given [in Weizenbaum’s Computer Power and Human Reason, 1976], this is a disempowered female user, at the mercy of her boyfriend’s wishes and her father’s bullying, defined by and in her relationship to the men whom, she declares, ‘are all alike.’ Weizenbaum’s choice of names is therefore adapted and adjusted to ensure that the passive, weaker or more subservient position at any one time is always gendered as female, whether that is the female-gendered computer program controlled by its designers, or the female-gendered human woman controlled by the patriarchal figures in her life.

Pygmalion Displacement: When Humanising AI Dehumanises Women – Pg 21

This passage was particularly interesting to me because I’ve long admired Weizenbaum’s thoughtful dissection of his own work. I learned from his critique of computation as an ideology but missed his Pygmalion framing; the Pygmalion Lens enables a new way of seeing assumptions and ideas that are taken for granted, like the air we breathe.


There is much more to discuss, such as an eye-opening investigation into the over-celebrated Turing test (today, more marketing gimmick than assessment technique), which began as a thought experiment built around a gender guessing game, a test which (astoundingly) “…required a real woman […] to prove her own humanity in competition with the computer.”

This is a marvellous and important paper which presents more than a theory: it gives us a toolkit and method for changing the way we think about the field of computation (and its loud ‘AI’ partisans) under patriarchal capitalism.

Kinetic Harm

I write about the information technology industry.

I’ve written about other topics, such as the copaganda of Young Turks’ host Ana Kasparian and Zizek, whose work, to quote John Bellamy Foster, has become “a carnival of irrationalism.” In the main, however, my topics are the technology industry generally and its so-called ‘AI’ sub-category specifically. This isn’t random; I’ve worked in this industry for decades and know its dark heart. Honest tech journalism (rather than the boosterism we mostly get) and scholarly examinations are important but who better to tell a war story than someone in the trenches?

Because I focus on harm and not the fantasy of progress, this isn’t a pursuit that brings wealth or notoriety. There have been a few podcast appearances (a type of sub-micro celebrity, as fleeting as a lightning flash) and opportunities to be published in respected magazines. That’s nice, as far as it goes. It’s important, however, to see clearly and be honest with yourself: it’s a Sisyphean task with few rewards, and motivation must be found within and from a community of like-minded people.

Originally, my motivation was to pierce the curtain. If you’ve seen the 1939 MGM film ‘The Wizard of Oz’, you know my meaning: there’s a moment when the supposed wizard, granter of dreams, is revealed to be a sweaty, nervous man, hidden behind a curtain, frantically pulling levers and spinning dials to keep the machinery of delusion functioning. This was my guiding metaphor for the tech industry, which claims its products defy the limits of material reality and surpass human thought.

As you learn more, your understanding should change. Parting the curtain, or debunking, was an acceptable way to start but it’s insufficient; the promotion of so-called ‘AI’ is producing real-world harms, from automated recidivism decision systems to facial-recognition-based arrests and innumerable other intrusions. A technology sold as bringing about a bright future is being deployed to limit possibilities. Digital computation began as a means of enacting a command and control methodology on the world for various purposes (military applications being among the first) and is, in our age, reaching its apotheosis.

Kinetic Harm

Reporting on these harms, as deadly as they often are, fails to tell the entire story of computation in this era of growing instability. The same technologies and methods used to, for example, automate actuarial decision making in the insurance industry can also be used for other, more directly violent aims. The US military, which is known for applying euphemisms to terrible things like a thin coat of paint over rust, calls warfare – that is, killing – kinetic military action. We can call forms of applied computation deliberately intended to produce death and destruction kinetic harm.

Consider the IDF’s Habsora system, described in the +972 Magazine article “‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza” –

In one case discussed by the sources, the Israeli military command knowingly approved the killing of hundreds of Palestinian civilians in an attempt to assassinate a single top Hamas military commander. “The numbers increased from dozens of civilian deaths [permitted] as collateral damage as part of an attack on a senior official in previous operations, to hundreds of civilian deaths as collateral damage,” said one source.

“Nothing happens by accident,” said another source. “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed — that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every home.”

According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”

+972 Magazine – https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

The popular phrase ‘artificial intelligence’ (a marketing term, really, since no such thing exists) is used to describe the Habsora system. This creates an exotic distance, as if a glowing black cube floats in space, deciding who dies and how many deaths will occur.

The reality is more mundane, more familiar, even banal; the components of this machine are constantly in use around us. Here is a graphic that shows some of the likely elements:

As we use our phones, register our locations, fill in online forms for business and government services, interact on social media and do so many other things, we unknowingly create threads and weave patterns, stored in databases. The same type of system that enables a credit card fraud detection algorithm to block your card if in-person store transactions are registered in two geographically distant locations on the same day can be used to build a map of your activities and relations to find and kill you and those you know and love. This is what the IDF has done with Habsora. The distance separating the intrusive methods of Meta, Google and fellow travelers from this killing machine is not as great as it seems.
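To make that comparison concrete, here is a minimal sketch of such a ‘velocity check’, the kind of rule a fraud detection system might apply; the class names, threshold and sample data are invented for illustration and reflect no particular vendor’s implementation:

```python
# A minimal sketch of a 'velocity check': flag two in-person card
# transactions whose implied travel speed is physically implausible.
# Class names, threshold and sample data are invented for illustration.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Transaction:
    timestamp: datetime
    lat: float  # latitude of the point of sale, degrees
    lon: float  # longitude of the point of sale, degrees

def haversine_km(a: Transaction, b: Transaction) -> float:
    """Great-circle distance between two points of sale, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def implausible_travel(a: Transaction, b: Transaction, max_kmh: float = 900.0) -> bool:
    """True if the cardholder would have had to travel faster than
    max_kmh (roughly airliner speed) to make both purchases in person."""
    hours = abs((b.timestamp - a.timestamp).total_seconds()) / 3600
    if hours == 0:
        return haversine_km(a, b) > 1.0  # simultaneous use in two places
    return haversine_km(a, b) / hours > max_kmh

nyc = Transaction(datetime(2024, 2, 23, 9, 0), 40.71, -74.01)   # New York
la = Transaction(datetime(2024, 2, 23, 11, 0), 34.05, -118.24)  # Los Angeles
print(implausible_travel(nyc, la))  # True: roughly 3,940 km in two hours
```

The ingredients – timestamped, geolocated events and a distance function over them – are exactly what is needed to map a person’s movements and relations; only the purpose changes.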

Before being driven from their homes by the IDF – homes destroyed under the most intensive bombing campaign of this and perhaps even the previous, hyper-violent century – Palestinians in Gaza were subjected to a program of surveillance and control which put them completely at the mercy of the Israeli government. All data about their movements and activities passed through electronic infrastructure owned and controlled by Israeli entities. This infrastructure, and the data processing and analysis built upon it, have been assembled into a factory whose product is death – whether targeted or en masse.

The Thin Curtain

Surveillance. Control. Punishment. This is what the age of digital computation has brought on an unprecedented scale. For those of us who live in places where the bombs don’t yet fall, there are things like the following, excerpted from the Forbes article (Feb 23, 2024) ‘Dozens Of KFC, Taco Bell And Dairy Queen Franchises Are Using AI To Track Workers’ –

Like many restaurant owners, Andrew Valkanoff hands out bonuses to employees who’ve done a good job. But at five of his Dairy Queen franchises across North Carolina, those bonuses are determined by AI.

The AI system, called Riley, collects streams of video and audio data to assess workers’ performance, and then assigns bonuses to those who are able to sell more. Valkanoff installed the system, which is developed by Rochester-based surveillance company Hoptix, less than a year ago with the hopes that it would help increase sales at a time when margins were shrinking and food and labor costs were skyrocketing.

Forbes – https://www.forbes.com/sites/rashishrivastava/2024/02/23/dozens-of-kfc-taco-bell-and-dairy-queen-franchises-are-using-ai-to-track-workers/

Inside the zone of comparative safety – but of deprivation for many and control imposed on all – systems like the IDF’s Habsora are in service, employing the same computational techniques; instead of directing sniper-rifle-armed quadcopters and F-16s on deadly errands, they deprive people of jobs, medical care and freedom. Just as a rocket’s payload can be changed from peaceful to fatal ends, the intended outcomes of such systems can be altered to fit the goals of the states that employ them.

The Shadow

As I write this, approximately 1.4 million Palestinians have been violently pushed to Rafah, a city in the southern Gaza Strip. There, they are facing starvation and incomprehensible cruelty. Meanwhile, southwest of the ruins of Gaza City, in what has come to be known as the Al Nabulsi massacre, over one hundred Palestinians were killed by IDF fire while desperately trying to get flour. These horrors were accelerated by the use of computationally driven killing systems. In the wake of Habsora’s use in what journalist Antony Loewenstein calls the Palestine Laboratory, we should expect similar techniques to be used elsewhere and to become a standard part of the arsenal of states (yes, even those we call democratic) in their efforts to impose their will on an ever more restless world that struggles for freedom.


References

Artificial intelligence and insurance, part 1: AI’s impact on the insurance value chain

https://www.milliman.com/en/insight/critical-point-50-artificial-intelligence-insurance-value-chain

Kinetic Military Action

https://en.wikipedia.org/wiki/Kinetic_military_action

‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza

https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza

Report: Israel’s Gaza Bombing Campaign is the Most Destructive of this Century

https://english.aawsat.com/features/4760791-report-israels-gaza-bombing-campaign-most-destructive-century

‘Massacre’: Dozens killed by Israeli fire in Gaza while collecting food aid

https://www.aljazeera.com/news/2024/2/29/dozens-killed-injured-by-israeli-fire-in-gaza-while-collecting-food-aid

Dozens Of KFC, Taco Bell And Dairy Queen Franchises Are Using AI To Track Workers

https://www.forbes.com/sites/rashishrivastava/2024/02/23/dozens-of-kfc-taco-bell-and-dairy-queen-franchises-are-using-ai-to-track-workers

The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World

Examples of Other Algorithm-Directed Targeting Systems

Project Maven

https://www.engadget.com/the-pentagon-used-project-maven-developed-ai-to-identify-air-strike-targets-103940709.html

Generative AI for Defence (marketing material from C3)

https://c3.ai/generative-ai-for-defense

Escape from Silicon Valley (alternative visions of computation)

Several years ago, there was a mini-trend of soft documentaries depicting what would happen to the built environment if humans somehow disappeared from the Earth. How long, for example, would untended skyscrapers punch against the sky before they collapsed in spectacular, downward cascading showers of steel and glass onto abandoned streets? These are the sorts of questions posed in these films.

As I watched these soothing depictions of a quieter world, I sometimes imagined a massive orbital tombstone, perhaps launched by the final rocket engineers, onto which was etched: Wasted Potential.


While I type these words, billions of dollars, and barely tabulated amounts of electrical power, water and human labor (barely tabulated because deliberately obscured), have been devoted to large language model (LLM) systems such as ChatGPT. If you follow the AI-critical space you’re familiar with the many problems produced by the use and promotion of these systems – including, on the hype end, the most recent gyration, a declaration of “existential risk” by a collection of tech luminaries (a category which, in a Venn diagram, overlaps with carnival barker). This use of mountains of resources to enhance the profit objectives of Microsoft, Amazon and Google, among other firms not occupying such Olympian perches, is wasted potential in frenetic action.

But what of alternative visions? They exist; all is not despair. The dangerous nonsense relentlessly spewing from the AI industry is overwhelming, and countering it is a full-time pursuit. But we can’t stay stuck, as if in amber, in a state of debunking and critique. There must be more. I recommend the DAIR Institute and Logic(s) magazine as starting points for exploring other ways of thinking about applied computation.

Ideologically, AI doomerism is fueled in large measure by dystopian pop sci-fi such as Terminator. You know the story, a tale as old as the age of digital computers: a malevolent supercomputer – Skynet (a name that sounds like a product) – launches, for some reason, a war on humanity, resulting in near extinction. The tech industry seems to love ripping dystopian yarns. Judging by the now almost completely forgotten metaverse push (a year ago, almost as distant as the Pleistocene in hype cycle time), inspired by the less than sunny sci-fi novel Snow Crash, we can even say that dystopian storylines are part of business plans (what is the idea of sitting for hours wearing VR goggles if not darkly funny?).

There are also less terrible, even hopeful, fictional visions, presented via pop science fiction such as Star Trek’s Library Computer Access/Retrieval System – LCARS.


In the Star Trek: The Next Generation episode “Booby Trap”, the starship Enterprise is caught in a trap, composed of energy-sapping fields, that prevents it from using its most powerful mode of propulsion, warp drive. The ship’s chief engineer, Geordi La Forge, is given the urgent task of finding a solution. La Forge realizes that escaping this trap requires a re-configuration, perhaps even a new understanding, of the ship’s propulsion system. That’s the plot, but most intriguing to me is the way La Forge goes about trying to find a solution.

The engineer uses the ship’s computer – the LCARS system – to retrieve and rapidly parse the text of research and engineering papers going back centuries. He interacts with the computer via a combination of audio and keyboard/monitor. Eventually, La Forge resorts to a synthetic, holographic mockup of the designer of the ship’s engines, Dr. Leah Brahms, raising all manner of ethical issues, but we needn’t bother with that plot element here.

I’ve created a high-level visualisation of how this fictional system is portrayed in the episode:

The ability to identify text via search, to summarize and read contents (with just enough contextual capability to be useful) and to output relevant results is rather close, conceptually, to the potential of language models. The difference between what we actually have – competing and discrete systems owned by corporations – and LCARS (besides the many orders of magnitude of greater sophistication in the fictional system) is that LCARS is presented as an integrated, holistic and scoped system: a library that enables access to knowledge and retrieves results based on queried criteria.
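As a rough illustration of that library-like design, here is a toy sketch of the query-and-retrieve loop the episode depicts; the corpus, titles and scoring function are invented for illustration, and a real system would pair retrieval like this with far richer indexing and a language model to summarize the results:

```python
# A toy sketch of the LCARS-style loop: search a library of papers,
# rank them by relevance to a query, and return the best matches.
# The corpus and scoring are invented for illustration; a real system
# would use proper indexing and a language model to summarize results.
from collections import Counter

PAPERS = {
    "Warp Field Dynamics, Vol. 3 (2301)": "warp field collapse under sustained energy drain",
    "Reactive Power Management (2279)": "feedback control loops for reactive power systems",
    "Structural Harmonics Review (2340)": "hull stress harmonics in prolonged field exposure",
}

def score(query: str, document: str) -> int:
    """Crude relevance: how many times query terms appear in the document."""
    terms = Counter(document.lower().split())
    return sum(terms[word] for word in query.lower().split())

def retrieve(query: str, top_n: int = 2) -> list[tuple[str, str]]:
    """Return the top_n most relevant (title, text) pairs for the query."""
    ranked = sorted(PAPERS.items(), key=lambda item: score(query, item[1]), reverse=True)
    return ranked[:top_n]

for title, text in retrieve("energy drain on the warp field"):
    print(f"{title}: {text}")
```

The point of the sketch is the shape of the system, not its sophistication: a single scoped library, queried and ranked, rather than a scatter of competing corporate products.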

There is a potential, latent within language models and hybrid systems – indeed, within almost the entire menagerie of machine learning methods – to create a unified computational model for a universally useful platform. This potential is being wasted, indeed suppressed, as oceans of capital, talent and hardware are poured into privately owned things such as ChatGPT. There are hints of this potential found within corporate spaces; Meta’s LLaMA, which leaked online, shows one avenue. There are surely others.


Among a dizzying collection of falsehoods, the tech industry’s greatest lie is that it is building the future. Or perhaps I should sharpen my description: the industry may indeed be building the future but, contrary to its claims, it is not a future centered on human needs. It is possible, however, to imagine and build a different computation, and we needn’t turn to Silicon Valley’s well-thumbed library of dystopian novels to find it. Science fiction such as Star Trek (I’m sure there are other examples) provides more productive visions.

Windows Vista as Neoliberal Instrument

Synopsis

Last night, before nodding off to sleep, a stray memory flitted across (or through?) my synapses: a posting I made to the Left Business Observer Listserv, titled “Windows Vista as Neoliberal Instrument”. This was, I think, my first attempt to merge my work in information technology with my (always in formation and never complete) Marxian approach to thinking about that industry.

At the time of the original writing, February 2007 – over a decade ago – the release of Microsoft’s Windows Vista operating system was the source of a lot of debate and frustration. The OS wasn’t performing as hoped and techies were wondering why. It turned out that one of the key reasons was Microsoft’s attempt to enforce copyright via software. This proved to be a rich target for analysis, and David Harvey’s ‘A Brief History of Neoliberalism’ provided a powerful analytical framework. Also, this was a total flex.

Introduction

[content originally posted to LBOTalk, February 2007; some new formatting added]

In his most recent book, “A Brief History of Neoliberalism”, David Harvey analyzes the neoliberal turn taken first by Western economies and later, to varying degrees, by practically every economy on Earth over the past 30 or so years.

Several key features of neoliberalism are dissected:

1.) neoliberalism as a power restoration technique (i.e., restoring to capitalists the margin of power lost during the postwar years of high growth and detente with labor)

2.) neoliberalism as imperfect tool against stagnation and the problems of overproduction

and

3.) neoliberalism as a method for monetizing practices and spaces previously excluded from market concerns and controls

To properly understand the strategic concessions Microsoft made to the entertainment industry — concessions that led MSFT to deploy a software-based version of the Advanced Access Content System (AACS) in Windows Vista — you need to carefully consider that third aspect of neoliberalism.

What is AACS?

Briefly, the Advanced Access Content System is a platform, created at the behest of the entertainment industry, whose sole purpose is to enforce a (it is vainly hoped) completely uncrackable path for “premium content” to flow from player — device- or software-based — to a display and/or audio output. Of course, the phrase “premium content” is a term of art inasmuch as the actual content might be anything from a slapdash teen sex comedy to the most subtle examples of musical or filmed art.

The motion picture and recording cartels have long been disturbed by the fact that people could record, remix and redistribute “content” at will. Over the years, many copy protection schemes have been tried; all have failed. Advances in computing power and storage capacity — moving in parallel with advances in cryptology — have finally made the old dream of an automated copyright enforcement system achievable.

Achievable, because under the AACS system, ‘intelligent’ hardware is constantly on the lookout for security breaches (for example, interceptions of the content data stream from player to output) and empowered, so to speak, to take action. What action? Well, action like actively preventing component outputs from working if the HD-DVD or Blu-ray disc you’re trying to view has been flagged as compromised (or, more specifically, if the cryptographic “key” associated with the disc has been compromised, leading to your play privileges being ‘revoked’ by the key-issuing authority).
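Conceptually, the revocation check at the heart of this is simple. Here is a minimal sketch, with key identifiers and a flat revocation list invented for illustration; the actual AACS scheme distributes revocation via broadcast encryption (a Media Key Block using subset-difference trees) on each disc, not a simple set lookup:

```python
# A minimal sketch of playback revocation: the disc carries a list of
# compromised keys, and a compliant player refuses to use any key on it.
# Key IDs and the flat set are invented for illustration; real AACS
# encodes revocation in a Media Key Block using broadcast encryption.
REVOKED_KEYS = {"device-key-0042", "player-key-1337"}  # issued by the licensing authority

def may_play(player_key_id: str, revoked: set[str] = REVOKED_KEYS) -> bool:
    """A compliant player checks its own key against the revocation list
    before deriving the keys needed to decrypt the disc's content."""
    return player_key_id not in revoked

if not may_play("device-key-0042"):
    print("Playback refused: this player's key has been revoked")
```

The business significance lies in who controls that list: the key-issuing authority can, at any time after the sale, change what your purchased hardware and discs will do.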

All high definition hardware — players, digital sets, audio units — is designed to enforce this automated copyright infrastructure. Your HD-DVD or Blu-ray player will talk to your high def display over what are called High-Bandwidth Digital Content Protection (HDCP) compliant outputs. Together, they’ll ensure that RIAA and MPAA copyright concerns are being addressed wherever and whenever “premium content” is being viewed.

Rent Seeking via Operating System

Microsoft wanted Vista to be marketable as a media platform (and MSFT also wanted to create the de facto standard for software-based AACS implementation), so they crafted a complex encryption/decryption methodology within the operating system that obeys — and then some — AACS rules. Doing so gave them negotiating space with the entertainment industry.

As any user of consumer electronics and Microsoft software knows, shit happens. The copyright enforcement, content monitoring and encryption/decryption technologies in next-gen players and Vista are always on. This exacts a performance price from the devices (because our CPUs and memory are good, but not so good that they can effortlessly do both content presentation and advanced cryptographic functions without exhibiting some problems at least some of the time) and especially from the software, which is very brittle and prone to malfunction.

But beyond the false piracy alarms, stuttering playbacks and other technical annoyances already being seen in the wild, there’s an overriding fact to keep in mind: AACS gives the entertainment industry the ability to treat the products you buy as leased objects, which can become (say, in a case of revocation resolution) a source of ever-renewable revenue long after they were originally purchased.

It also creates a method for modularizing in unprecedented ways — and therefore monetizing — functions that were previously considered more or less all of a piece, such as playing and therefore viewing the discs you buy.

In order for this system to work as planned, all devices must comply with the AACS standard. The idea is to close all potential areas of escape. Eventually, perhaps after 5 to 15 years, the full magnitude of the lock-in will be in effect as older DVD and audio players are retired.

It’s been rumored that Hollywood and the RIAA are fully aware that AACS is, despite all their efforts, eminently hackable, and that the true target of these new constraints is ordinary people who don’t have easy access to workarounds.

The goal, then, is to have a lever that can be pulled at any time to extract more income from “consumers”.

Links

A Brief History of Neoliberalism

https://global.oup.com/academic/product/a-brief-history-of-neoliberalism-9780199283279?cc=nl&lang=en&

High-Bandwidth Digital Content Protection

http://en.wikipedia.org/wiki/High-Bandwidth_Digital_Content_Protection

Advanced Access Content System

http://en.wikipedia.org/wiki/Advanced_Access_Content_System

A Cost Analysis of Windows Vista Content Protection

http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.html