
About Dwayne Monroe

Technologist, writer and other things which require quiet and time to do well. Alas, we live in an age that grants us neither.

From the Department of Self Promotion

There is neither gold nor glory to be found in challenging cultural assumptions. Particularly those shaped by propaganda.

This is all for the best. In the appearances listed below, I chat with people I respect about the state of the ‘AI’ industry: what is happening and why.

Millennials Are Killing Capitalism: Propaganda Masked as Critique: Jacobin and ‘AI’

A Materialist Approach to the Tech Industry: From Household to Military Tech

Saturdays with Renee

July 31st, 2025 Edition of Doug Henwood’s Behind the News Radio Show:

A conversation with Mtume Gant for his ‘Within Our Gates’ podcast about ‘AI’, cinema and the history of atomic tests. There is a valence:

Command and Control: Capitalism and Computation

Capitalism, a system as inescapable as breathless news items about Trump, Musk and decay, came into its own during the age of steam power, telegraphs and colonialism (first edition; we’re witnessing the attempted redux), long before the invention of digital computers. Computers, initially a military tool (ENIAC, the first programmable, general-purpose electronic digital computer, was immediately put to work performing calculations for then still theoretical hydrogen bombs), eventually enabled capitalists, particularly those at the commanding heights, to employ what is known in military circles as command and control at a level of sophistication and intrusiveness previously only dreamed of.

What is command and control?

Consider this excerpt from the essay ‘Re-conceptualizing Command and Control’, written in 2002 for the Canadian military and co-authored by Dr. Ross Pigeau and Carol McCann, which provides a succinct definition:

“…controlling involves monitoring, carrying out and adjusting processes that have already been developed. Commanding involves creating new structures and processes (i.e., plans, SOPs, etc.), establishing the conditions for initiating and terminating action, and making unanticipated changes to plans. Most acts, including decision making, involve a sophisticated amalgam of both commanding and controlling.”

Everyone who has worked in a corporate enterprise – the land of key performance indicators (KPIs) and other metrics gathered and analyzed to determine profit and loss and even, in some cases, who lives and dies – understands this definition in their bones. It captures the hierarchical structure of business, which is a form of tyranny (some of these fiefdoms have pleasant break-out rooms, decent coffee and declarations that workers are part of a family until, of course, restructuring and endless re-orgs cast ‘family members’ onto the street).

From the birth of the corporate era, companies have pursued operational and logistics control to ensure profit, market share and high valuation. So-called scientific management, created and promoted by mechanical engineer and early management consultant Frederick Taylor in the late 19th century, was the industrial era’s first dedicated effort at such control. Sears, Roebuck and Co., a 19th century retail and mail order behemoth – the Amazon of the pre-digital-computer age – employed an army of people, scientifically managed, to run its vast enterprise. There are commonalities between the Sears of old and Amazon:

Sears and Amazon Commonalities: Diagram by Author

The primary difference between Sears in the 19th century and Amazon today is the latter’s use of digital technology to enhance command and control techniques, enhancements that make it possible for Amazon to surveil delivery drivers on their routes, among other outrages.

From Brighter than a Thousand Suns to the Office Commute

Digital computation’s first assignment was performing the subtle calculations physicists such as Edward Teller and Stanislaw Ulam needed to bring the thermonuclear devices of their fevered dreams to irradiated life. From that beginning, brighter than a thousand suns, the age of command and control fully took shape with the creation of systems such as the US Air Force’s Semi-Automatic Ground Environment (SAGE) described in a Wikipedia article:

“The Semi-Automatic Ground Environment (SAGE) was a system of large computers and associated networking equipment that coordinated data from many radar sites and processed it to produce a single unified image of the airspace over a wide area. SAGE directed and controlled the NORAD response to a possible Soviet air attack, operating in this role from the late 1950s into the 1980s.”

SAGE System Console: Wikipedia

The SAGE system was built to create a method and infrastructure for gathering data from far-flung sources and coordinating a response to what its numerous displays told the people staffing NORAD’s air defense facilities. This military purpose provided the foundation, metaphor and philosophy shaping the uses of systems that eventually came online, such as commercial mainframe computers, client-server architectures and what is known as ‘cloud computing.’

Note this image of SAGE system elements:

SAGE Diagram: Defense Visual Information Distribution Service

In design intent and philosophy, there is a link between the vision of computation as a means of commanding people and controlling events that shaped the SAGE system and corporate methods such as business intelligence described in this Wikipedia article:

“Business intelligence (BI) consists of strategies, methodologies, and technologies used by enterprises for data analysis and management of business information. Common functions of BI technologies include reporting, online analytical processing, analytics, dashboard development, data mining, process mining, complex event processing, business performance management, bench marking, text mining, predictive analytics, and prescriptive analytics.”

Microsoft, never one to miss an opportunity to simultaneously shape and profit from business requirements, real or imagined (does anyone recall the Metaverse? It disappeared, like youth, or money from your bank account), provides a visual of how a business intelligence platform can be built on its Azure platform:

Azure Business Intelligence Architecture: Microsoft

The common goal – the thematic bridge from SAGE to business intelligence – is data gathering and analysis which, as an objective in the abstract, is not at all sinister. Every society and every social organization, no matter how large or small, needs to understand its environment, collect information and act upon what is learned. Just as SAGE applied that methodology to the task of nuclear war (which, outside of the insane circles running the world to ruin, is no one’s idea of a good use case), corporations apply it to maximizing profit. In the capitalist world, we are data points to be ingested, analyzed and optimized via something called KPIs.

Key Performance Indicators – the SAGE of Corporate Life

Key Performance Indicators, or KPIs, are the metrics used to fold our behavior and actions as workers into a command and control schema. What was once directed without the aid of software (Taylorism being the first formalized, pre-software method) is now measured as data points stored in databases and spreadsheets. How ‘productive’ are you? KPIs, we’re told, are a way to ensure workers are on track from the perspective of owners. In a 2021 article titled ‘Why You Need Personal KPIs To Achieve Your Goals’, Forbes, a magazine once treated as scripture, advised ‘professionals’ (a word used to lobotomize the portion of one’s mind that is aware of one’s status as a precarious worker) to use KPIs to shape their careers:

“Peter Drucker famously said that “what is measured is managed, and what is managed gets improved.” Key Performance Indicators (KPI) are a staple of every business. It is the tool used to measure how effectively an organization is meeting vital business objectives. Teams, departments, and organizations initiate the KPIs so that it spreads to every level of an institution. If it’s such a prominent accountability measure in the business sector, why not use it for our professional success? Perhaps we should inculcate personal KPIs into our practice.”

This is good advice in a way not unlike the contextually useful counsel you’d get on how to handle yourself in a bar fight or when dealing with a cop who’s obsessed with demonstrating his authority; you contort yourself to survive. It’s useful, but its utility is a sign of a problem – of a system of artificially enforced limits whose boundaries serve others’ interests.
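
To see how mundane the underlying machinery is, here is a schematic sketch in Python – invented names and numbers, no particular vendor’s product – of worker activity logged as rows and rolled up into a ‘productivity’ score a dashboard can rank:

```python
from dataclasses import dataclass

# Schematic, invented example: worker activity reduced to rows of data.
@dataclass
class ActivityRecord:
    worker: str
    tickets_closed: int
    hours_logged: float

records = [
    ActivityRecord("worker_017", tickets_closed=42, hours_logged=39.5),
    ActivityRecord("worker_033", tickets_closed=17, hours_logged=41.0),
]

# The "KPI": tickets closed per logged hour, ranked for the dashboard.
kpi = {r.worker: round(r.tickets_closed / r.hours_logged, 2) for r in records}
for worker, score in sorted(kpi.items(), key=lambda kv: kv[1], reverse=True):
    print(worker, score)
```

That is the whole trick: once behavior is a number, it can be monitored, compared and ‘adjusted’ – controlling, in Pigeau and McCann’s sense.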

In his 2018 book, ‘Surveillance Valley’, journalist Yasha Levine details the links between the US’ intelligence agencies and Silicon Valley. From the beginning, Levine shows, companies such as Oracle, and technologies we think sprang into existence on the sun-blasted terrain of California like dreams, were nurtured and even created by the US’ surveillance apparatus.

There is a similar link between the techniques used by the corporations who dominate our lives and the systems and thinking which shaped the US’ command and control fixated response to the Cold War. Our work lives exist in the long shadow of the computers used to determine if ICBMs should wing their way to targets.

Against Snobbery (or, on writing)

Years ago, a man I’ve known for decades via electronic networks started a blog.

He apologized because, to the class of people who assume a byline in the New York Times (described by Gore Vidal as always being “at the very heart of malice”) or a PhD confers a kind of omniscient expertise, starting a blog was akin to driving a Volkswagen (back when they were much cheaper) when a Mercedes was preferable as a class marker.

His blog was, indeed is, good. He ably writes about what he knows – how capital markets function – a topic he understands deeply from the inside. I suppose we could wait for a book by an academic or a series on capital markets by a Columbia Journalism School-trained NYT staffer – such work is part of the fabric of what people who choose to do violence to the English language call ‘knowledge making’ – but surely there is a place for information from the trenches.

My friend’s unnecessary apology was inspired by snobbery. You know what I mean. It’s snobbery that causes people to dismiss Wikipedia, even as an introductory source. Is the Wikipedia entry on magnetohydrodynamics bad? Most of us don’t know, but we’ve been told it’s in a bad neighborhood, far from the tree-lined campuses where police beat pro-Palestinian students, or from Manhattan newsrooms (or what’s left of either). To participate in the game of snobbery, a game imposed on most of us by a few nervous elites and their minions, we must turn up our noses as if detecting the scent of a pile of dog poop carelessly left on a sidewalk.

This comes to mind because of the way Microsoft and Google, in their sales propaganda, have promoted large language models as the solution to the problem of writing. I wrote ‘problem,’ because for many of us, told that only a small group of people possess the ability to write, putting ideas to paper or screen is felt to be a problem.

Consider the way Microsoft describes its product, Copilot for Word:

Copilot in Word ushers in a new era of writing, leveraging the power of AI. It can help you go from a blank page to a complete draft in a fraction of the time it would take to compose text on your own. While it may write exactly what you need, sometimes it may be “usefully wrong” thus giving you some helpful inspiration.

The ‘problem’ is to be solved by a machine that, as it bestows upon us a new era of writing, consumes, by some estimates, terawatt-hours of electricity. Writing, no matter how laborious, is a problem best solved by thought. Indeed, one of the critical aspects of writing – whether it’s fiction, non-fiction or even a well considered social media post – is the application of thought to the process of organizing and recording your ideas and points of view.

Dependence on word assemblers such as ChatGPT, and even our new silicon frenemy DeepSeek, regardless of how cleverly architected, interrupts this process; but so does snobbery. The snob-industrial complex – which promotes the idea that good writing requires a university course or attachment to a media corporation – prepared the soil for the idea of replacing writing with machinery. Of course millions of people, harassed, short on time and purposely discouraged from writing, apologize for the blogs they should be making to share their knowledge. Millions who are made to feel inferior for looking up a topic on Wikipedia are, unsurprisingly, receptive to tech industry propaganda: never mind thinking in order to write, we’ll do it for you.

Writing is a craft: putting one sentence after another to build a tale – sometimes true, or as near as one can come, sometimes fanciful. You hone your craft by reading and writing and by assembling for yourself what a friend of mine calls a writer’s table. When writing about the tech industry, Raymond Chandler and Karl Marx sit at my writer’s table alongside others – living and dead – from whom I learn to sharpen my own, yes, voice. There are decades of experience – being in the data centers – and a love of writing that go into the work.

There’s nothing stopping you from doing the same. I want to read from people who serve food in restaurants and pilots and nuclear plant workers and people who have been cast out of the world of work. I want to hear from everyone, not just the famous or celebrated writing about everyone. 

Having reached this point in the piece it’s typical to try to create something pithy that sums up what came before. In lieu of that, I’ll say, please write if you want to. Do not surrender your creativity to snobbery or machinery. If you need encouragement, I’m here to help.

We need as many voices reporting from the various fronts as we can get. 

The F-35 Maneuver

Bad ideas, like death, are inevitable and just as inescapable.

The US-based tech industry is a Pandora’s box of bad ideas, unleashed upon an unwilling and unwitting populace, and indeed world, with reckless abandon, scorching lives and the Earth itself. Never mind, they say, we’re building the future.

The latest bad idea to spread dark wings and take flight is that building a supermassive data center for ‘AI’ called ‘Stargate’ – a megamachine that will solve all our problems like a resource- and real-estate-devouring Wizard of Oz – is not only good, but essential.

In an Associated Press article titled, ‘Trump highlights partnership investing $500 billion in AI‘ published Jan 23, 2025, the project is described:

WASHINGTON (AP) — President Donald Trump on Tuesday talked up a joint venture investing up to $500 billion for infrastructure tied to artificial intelligence by a new partnership formed by OpenAI, Oracle and SoftBank.

The new entity, Stargate, will start building out data centers and the electricity generation needed for the further development of the fast-evolving AI in Texas, according to the White House. The initial investment is expected to be $100 billion and could reach five times that sum.

“It’s big money and high quality people,” said Trump, adding that it’s “a resounding declaration of confidence in America’s potential” under his new administration.

[…]

It seems like only yesterday, or more precisely, several months ago, that the same ‘Stargate’, with a still astronomically large but comparatively smaller budget, was described in a Tom’s Hardware article of March 24, 2024 titled ‘OpenAI and Microsoft reportedly planning $100 billion datacenter project for an AI supercomputer‘ –

Microsoft and OpenAI are reportedly working on a massive datacenter to house an AI-focused supercomputer featuring millions of GPUs. The Information reports that the project could cost “in excess of $115 billion” and that the supercomputer, currently dubbed “Stargate” inside OpenAI, would be U.S.-based. 

The report says that Microsoft would foot the bill for the datacenter, which could be “100 times more costly” than some of the biggest operating centers today. Stargate would be the largest in a string of datacenter projects the two companies hope to build in the next six years, and executives hope to have it running by 2028.

[…]

Bad ideas are inevitable but also, apparently, subject to cost overruns.

There are many ways to think and talk about this project, which is certain to fail (and there is news of far less costly methods, making the Olympian spending even more obviously suspicious). For me, the clearest way to understand the Stargate project and in fact, the entire ‘AI’ land grab, is as an attempt to create guaranteed profit for those tech firms who’re at the commanding heights – Microsoft, OpenAI, Amazon, Oracle and co-conspirators. Capital will flow into these firms whether the system works as advertised or not – i.e. they are paid for both function (such as it is) and malfunction.

This isn’t a new technique. The US defense industry has a long history of stuffing its coffers with cash for delivering weapons systems that work… sometimes. The most infamous example is Lockheed’s F-35 fighter, a project that provides the company with funding for both delivery and correction, as described in the US Government Accountability Office report ‘F-35 Joint Strike Fighter: More Actions Needed to Explain Cost Growth and Support Engine Modernization Decision’, May 2023 –

The Department of Defense’s most expensive weapon system—the F-35 aircraft—is now more than a decade behind schedule and $183 billion over original cost estimates.

[…]

That’s a decade and $183 billion of sweet, steady profit – the sort of profit the tech industry has long sought. First there was ‘enterprise software’, then there was the subscription-based cloud, both efforts to create ‘growth’ and dependable cash infusions. Now, with Stargate, the industry may have, at last, found its F-35. Unlike the troubled fighter plane, there won’t be any Tom Cruise films featuring the data center. Then again, perhaps there will be. Netflix, like the rest of the industry, is out of ideas.

State of Exception – Part Two: Assume Breach

In part one of this series, I proposed that Trump’s second term – which, as we’re seeing with the rush of executive orders, has, unlike his first, a coherent agenda (centered on the Heritage Foundation’s Project 2025 plan) – would be a time of increased aggression against ostracized individuals and groups, a state of exception in which the pretense of bourgeois democracy melts away.

Because of this, we should change our relationship with the technologies we’re compelled to use; a naive belief in the good will or benign neglect of tech corporations and the state should be abandoned. The correct perspective is to assume breach.

In an April 2023 blog post for the network equipment company F5, systems security expert Ken Arora described the concept of assume breach:

Plumbers, electricians, and other professionals who operate in the physical world have long internalized the true essence of “assume breach.” Because they are tasked with creating solutions that must be robust in tangible environments, they implicitly accept and incorporate the simple fact that failures occur within the scope of their work. They also understand that failures are not an indictment of their skills, nor a reason to forgo their services. Rather, it is only the most skilled who, understanding that their creations will eventually fail, incorporate learnings from past failures and are able to anticipate likely future failures.

[…]

For the purposes of this essay, the term ‘failure’ is reinterpreted to mean the intrusion of hostile entities into the systems and devices you use. By adopting a technology praxis based on assumed breach, you can plan for intrusion by acknowledging the possibility that your systems have been, or will be, penetrated.

Primarily, there are five areas of concern:

  • Phones
  • Social Media
  • Personal computers
  • Workplace platforms, such as Microsoft 365 and Google’s G-Suite
  • ‘Cloud’ platforms, such as Microsoft Azure, Amazon AWS and Google Cloud Platform

It’s reasonable to think that following security best practices for each technology (links in the references section) offers a degree of protection from intrusion. This may be true to some extent when contending with non-state hostiles such as black hat hackers, but state entities have direct access to the owners of these systems, giving them the ability to circumvent standard security measures via the exercise of political power.

Phones (and tablets)

Phones are surveillance devices. No communication that requires security and which, if intercepted, could lead to state harassment or worse should be conducted via a phone. This applies to iPhones, Android phones and even niche devices such as Linux phones. Phones are a threat in two ways:

  1.  Location tracking – phones connect to cellular networks and utilize unique identifiers that enable location and geospatial tracking. This data is used to create maps of activity and associations (a technique the IDF has used in its genocidal wars)
  2.  Data seizure – phones store data that, if seized by hostiles, can be used against you and your organization: social media account data, notes, contacts and other information

Phone use must be avoided for secure communications. If you must use a phone for your activist work, consider adopting a phone running a hardened operating system such as GrapheneOS, which may be more resistant to cracking if the device is seized but offers no protection against interception of your communications in transit. As an alternative, consider old school methods, such as paper messages conveyed via trusted courier within your group. This sounds extreme and may turn out to be unnecessary, depending on how conditions mutate. It is best, however, to be prepared should it become necessary.

Social Media

Social media platforms such as Twitter/X, Bluesky, Mastodon, Facebook/Meta and even less public systems such as Discord, which enables the creation of privately managed servers, should not be used for secure communication – not only because of posts, but because direct messages are vulnerable to surveillance and can be used to obtain pattern and association data. A comparatively secure (though not foolproof) alternative is the Signal messaging platform. (Scratch that: Yasha Levine provides a full explanation of Signal as a government op here.)

Personal Computers

Like phones, personal computers – laptops and desktops – should not be considered secure. There are several sub-categories of vulnerability:

  • Vulnerabilities caused by security flaws in the operating system (for example, issues with Microsoft Windows or Apple macOS)
  • Vulnerabilities designed into operating systems by the companies developing, deploying and selling them for profit (Windows Copilot, for example, is a known threat vector)
  • Vulnerabilities exploited by state actors such as intelligence and law enforcement agencies (deliberate backdoors)
  • Data exposure if a computer is seized

Operating systems are the main threat vector – that is, the opening to your data – when using a computer. In part one of this series, I suggested abandoning Microsoft Windows, Google Chrome OS and Apple’s macOS for computer usage that requires security, and using secured Debian Linux instead. This is covered in detail in part one.

Workplace Platforms such as Google G-Suite and Microsoft 365, and other ‘cloud’ platforms such as Microsoft Azure and Amazon Web Services

Although convenient, and, in the case of Software as a Service offerings such as Google G-Suite and Microsoft 365, less technically demanding to manage than on-premises hosting, ‘cloud’ platforms should not be considered trustworthy for secure data storage or communications.

This is true, even when platform-specific security best practices are followed because such measures will be circumvented by the corporations that own these platforms when it suits their purposes – such as cooperating with state mandates to release customer data.

The challenge for organizations who’re concerned about state-sanctioned breach is finding the equipment, technical talent, will and organizational skill (project management) to move away from these ‘cloud’ systems to on-premises platforms. This is not trivial and has so many complexities that it deserves a separate essay, which will be part three of this series.

The primary challenges are:

  • Inventorying the applications you use (a minimal inventory sketch follows this list)
  • Assessing where the organization’s data is stored and the types of data
  • Assessing the types of communications and the levels of vulnerability (for example, how is email used? What about collaboration services such as SharePoint?)
  • Crafting an achievable strategy for moving applications, services and data off the vulnerable cloud service
  • Encrypting and deleting data
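
One way to begin the first two items is a plain inventory that anyone in the organization can read and argue with. A minimal sketch, with invented entries standing in for whatever you actually run:

```python
import csv

# Invented, minimal inventory for migration planning: what runs where today,
# how sensitive its data is, and where it should eventually land.
inventory = [
    {"application": "email", "current_host": "Microsoft 365",
     "data_sensitivity": "high", "target": "on-premises mail server"},
    {"application": "membership list", "current_host": "Google Sheets",
     "data_sensitivity": "high", "target": "on-premises database"},
    {"application": "public newsletter", "current_host": "hosted mailing service",
     "data_sensitivity": "low", "target": "evaluate later"},
]

# Write it out as a CSV the whole group can review and correct.
with open("migration_inventory.csv", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=inventory[0].keys())
    writer.writeheader()
    writer.writerows(inventory)
```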

In part three of this series, I will describe moving your organization’s data and applications off of cloud platforms: what are the challenges? What are the methods? What skills are required? I’ll talk about this and more.

References

Assume Breach

Project 2025

Security Best Practices – Google Workspace

Microsoft 365 Security Best Practices

Questions and Answers: Israeli Military’s Use of Digital Tools in Gaza

UK police raid home, seize devices of EI’s Asa Winstanley

Cellphone surveillance

GrapheneOS

Meta-provided Facebook chats led a woman to plead guilty to abortion-related charges

State of Exception: Part One

In his book State of Exception, published in 2005, Italian philosopher Giorgio Agamben (who, I feel moved to say, was an idiot on the topic of Covid-19, declaring the virus to be nonexistent) wrote:

“The state of exception is the political point at which the juridical stops, and a sovereign unaccountability begins; it is where the dam of individual liberties breaks and a society is flooded with the sovereign power of the state.”

The (apparently, merely delayed by four years) re-election of Donald Trump is certain to usher in a sustained period of domestic emergency in the United States, a state of exception when even the pretense of bourgeois democracy is dropped and state power is exercised with few restraints.

What does this mean for information technology usage by activist groups or really, anyone?

In February 2024, I published the essay Information Technology for Activists – What is To Be Done?, in which I provided an overview of the current information technology landscape with the needs and requirements of activist groups in mind. When conditions change, our understanding should keep pace. As we enter the state of exception, the information technology practices of groups who can expect harassment, or worse, from the US state should be radically updated for a more aggressively defensive posture.

Abandon Cloud

The computer and software technology industry is the command and control apparatus of corporate and state entities. As such, its products and services should be considered enemy territory. Under the capitalist system, we are compelled to operate on this territory to live. This harsh necessity should not be confused with acceptance and is certainly not a reason to celebrate, like dupes, the system that is killing the world. 

The use of operating systems and platforms from the tech industry’s primary powers – Microsoft, Amazon, Google, Meta, X/Twitter, Apple, Oracle – and lesser known entities, creates a threat vector through which identities, data and activities can be tracked and recorded. Moving off these platforms will be very difficult but is essential. What are the alternatives? 

There are three main areas of concern:

  • Services and platforms such as social media, cloud and related services
  • Personal computers (for example, laptops)
  • Phones

In this essay, cloud and computer usage are the focus.

By ‘cloud’, I’m referring to the platforms owned by Microsoft (Azure), Amazon (Amazon Web Services, or AWS) and Google (Google Cloud Platform, or GCP) and to services such as Microsoft 365 and Google’s G Suite. These services are not secure for the purposes of activist groups and individuals who can expect heightened surveillance and harassment from the state. There are technical reasons (Azure, for example, is known for various vulnerabilities), but these are a distant, secondary concern next to the fact that, regardless of each platform’s infrastructural qualities or deficits, the corporations owning them are elements of the state apparatus.

Your data and communications are not secure. If you are using these platforms, your top priority should be abandoning them, moving your computational resources to what are called on-premises facilities, and using the Linux operating system rather than macOS or Microsoft Windows.

On Computers

In brief, operating systems are a specialized type of software that makes computers useful. When you open Microsoft Excel on your computer, it’s the Microsoft Windows operating system that enables the Excel program to utilize computer hardware, such as memory and storage. You can learn more about operating systems by reading this Wikipedia article. This relationship – between software and computing machinery – applies to all the systems you use: whether it’s Windows, Mac or others.

Microsoft Windows (particularly the newest versions, which include the insecure-by-design ‘Copilot+ PC’ features) and Apple’s macOS should be abandoned. Why? The tech industry, as outlined in Yasha Levine’s book Surveillance Valley, works hand in glove with the surveillance state (and has done so since the industry’s infancy). If you or your organization are using computers for work that challenges the US state – for example, pro-Palestinian activism or, indeed, work in support of any marginalized community – there is a possibility vital information will be compromised, either through seizure or through remote access that takes advantage of backdoors and vulnerabilities.

This was always a possibility (and for some, a harsh experience) but as the state’s apparatus is directed towards coordinated, targeted suppression, vague possibility turns into high probability (see, for example, UK police raid home, seize devices of EI’s Asa Winstanley).

The Linux operating system should be used instead, specifically, the Debian distribution, well known for its secure design. Secure by design does not mean invulnerable to attack; best practices such as those described in the article, Securing Debian Manual 3.19, on the Debian website, must be followed to make a machine a harder target.

Switching and Migration

Switching from Microsoft Windows to Debian Linux can be done in stages, as described in the document ‘From Windows to Debian’. Replacing macOS with Debian on MacBook Pro computers is described in the document ‘Macbook Pro’ on the Debian website. More recent Mac hardware (Apple Silicon, such as the M1) is being addressed via Debian’s Project Banana.
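
One small habit worth adopting during the switch: verify the installer image you download against Debian’s published checksum file before using it (and check the checksum file’s GPG signature separately). A minimal sketch in Python – the file names are placeholders for whatever image and SHA512SUMS file you actually download:

```python
import hashlib

# Placeholders: substitute the image and SHA512SUMS file you downloaded from
# debian.org. This only checks integrity; verify the SHA512SUMS file's GPG
# signature separately to establish authenticity.
IMAGE = "debian-netinst-amd64.iso"
SUMS_FILE = "SHA512SUMS"

def sha512_of(path: str) -> str:
    """Compute the SHA-512 digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha512()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# SHA512SUMS lines look like: "<checksum>  <filename>"
expected = None
with open(SUMS_FILE) as fh:
    for line in fh:
        checksum, _, name = line.strip().partition("  ")
        if name == IMAGE:
            expected = checksum
            break

print("OK" if expected == sha512_of(IMAGE) else "MISMATCH - do not use this image")
```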

On software

If you’re using Microsoft Windows, it’s likely you’re also using the MS Office suite. You may also be using Microsoft’s cloud ‘productivity’ platform, Microsoft 365, or Google’s Workspace platform instead of, or in addition to, Microsoft 365. In the section on ‘Services and Platforms’, I discuss the problems with these products from a security perspective. Free replacements exist for the commercial ‘productivity’ suites used to create documents, spreadsheets and other types of work files; they will be taken up in the next installment.


In the second installment of this essay series, I will provide greater detail regarding each of the topics discussed, along with guidance about the use of phones, which are spy devices, and social media, which is insecure by design.

Not Mutual, But Assured

I came of age – emerging into young adulthood, liberated, it seemed, from teenaged concerns by entering my 20s – during what we were told was the Cold War’s end. A year before the Soviet Union fell in 1991, President George H.W. Bush, in a speech that was once infamous but is rarely discussed today, delivered to a joint session of the US Congress on September 11, 1990, declared that a ‘new world order’ was born (Bush went on to repeat this phrase – a leitmotif of his foreign policy – during a speech at the UN in 1991). For those of us who grew up under the shadow of nuclear annihilation – what macabre war planners called Mutual Assured Destruction, or MAD – this provided a form of comfort or, at least, the prospect of release from modernity’s prime terror.

Fear inspires a variety of reactions, among them real or pretended ignorance of danger, or the opposite: a desire to know more, to feel some sense of, not control (always an illusion), but awareness. If I was destined to die, vaporized by a luminous ball of atomic fire, at least there’d be a millisecond of knowing the infernal mechanism’s workings. Growing up in the latter stages of the MAD era, I studied nuclear weapons and nuclear war doctrines (at least, what was made public) to get this sense of awareness. If, on a given Sunday, during lunch, you wanted to know about hydrogen bombs and turned to me for an answer, I could take a sip of vodka and give a solid, well studied non-specialist’s reply.

In the collective imagination, there was a fixation on the scale of devastation. Whether in fictional depictions such as the Terminator films or grimly matter-of-fact Pentagon strategy documents, the total destruction of major cities – millions dead from blast, heat, radiation and fallout – was a common theme. When anyone said ‘nuclear war’, it meant the end of the world. What most of us did not know was that, just as rust never sleeps, war planners do not cease working to sharpen their blades. New types of nuclear weapons were in the minds of designers, expressed via mathematics and simulated using computation. There is evidence these abstractions have recently taken solid form to be unleashed on Syria’s tortured soil.


On December 23, 2024, Swiss physicist Hans-Benjamin Braun posted the following to his Twitter account:

Nuclear attack in Tartus (Syria):

Radioactive fingerprint of nuke (Tartus) measured in Cyprus within ~16 hours after the attack.

[Note that the dose rate peak cannot be ascribed to precipitation as higher precipitation occurred on Dec 5 with no discernible radiation increase]

The post, the first in a series that read like urgent dispatches, was based on an analysis of several data points – seismic, radiation and blast effects – used to present a dark conclusion: a new class of nuclear weapon, called Fourth Generation Fusion Nuclear Weapons (FGNW) by US Air Force researcher James Denton in a report titled ‘The Third Nuclear Age: How I Learned to Start Worrying About the Clean Bomb’, had been deployed in Tartus, Syria.

As the days wore on, more evidence appeared. On December 26, Dr. Braun offered this update:

Tartus nuke:

DoD data yield for a 99.9% clean weapon (e.g. “Housatonic”) with 0.3kt yield at 110 miles a (max) fallout of 0.035 mR/h. With the obs. time decay at 15 to 20h this yields a dose rate of 9-12nSv/h.

This agrees with observation of 11.55 +/-1.27 nSv/h (>8 sigma signal). 

A great deal of technical terrain is covered in this brief post so let’s walk through it.

By “DoD data yield”, Dr. Braun is referring to the calculations of nuclear weapon effects derived from the US Department of Defense document ‘The Effects of Nuclear Weapons’ (the 1977 edition). Using these calculations, Braun determined that the Tartus detonation’s characteristics were in line with those of the last of the 31 test explosions in the 1962 Operation Dominic series: the ‘Housatonic’ detonation of October 30, 1962. That explosion was declared 99.9% ‘clean’ – that is, it produced significantly less radiation than nuclear explosions usually do. The reduction of radiation, while retaining other nuclear effects, was the result of a design approach called Ripple.
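
To make the arithmetic tangible, here is a back-of-the-envelope check in Python. It is my reconstruction, not necessarily Dr. Braun’s method: I assume the 0.035 mR/h ‘max fallout’ figure is referenced to one hour after detonation, that gamma exposure converts to dose at roughly 8.77 mSv per roentgen, and that fallout decays by the familiar t^-1.2 rule. Under those assumptions, the numbers land close to the 9-12 nSv/h range and the 11.55 nSv/h observation quoted above.

```python
# Back-of-the-envelope check of the dose-rate figures quoted above.
# Assumptions (mine, not necessarily Dr. Braun's published method):
#   * 0.035 mR/h "max fallout" is the dose rate referenced to H+1 hour,
#   * gamma exposure converts to dose at ~8.77 mSv per roentgen,
#   * fallout decays by the standard Way-Wigner t**-1.2 rule.

SV_PER_R = 8.77e-3            # approximate Sv per roentgen (gamma)
rate_h1_mR = 0.035            # quoted max fallout dose rate in mR/h at H+1

rate_h1_nSv = rate_h1_mR * 1e-3 * SV_PER_R * 1e9   # convert to nSv/h
for t in (15.0, 20.0):                             # observation window, hours
    print(f"t = {t:4.1f} h : {rate_h1_nSv * t ** -1.2:5.1f} nSv/h")

# Prints roughly 11.9 nSv/h at 15 h and 8.4 nSv/h at 20 h, i.e. the same
# ballpark as the 9-12 nSv/h range and the 11.55 +/- 1.27 nSv/h observation.
```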

The impetus for the Ripple program is described in the document, ‘Ripple: An Investigation of the World’s Most Advanced High-Yield Thermonuclear Weapon’. Here is an excerpt:

Operation Redwing and “Clean” Weapons

To help explain the significance of the Ripple concept and the context in which it was devised, we begin with this 1955 letter from then Secretary of Defense Charles E. Wilson:

Until the CASTLE (1954) tests confirmed the feasibility of megaton yields at comparatively small cost, military economy in the atomic weapons field had been largely dominated by blast effects and means of maximizing these (effects) in relation to design and delivery costs. As important as these blast considerations still are, we are now confronted with perhaps even more important considerations in the radioactive by-products field. Stated broadly, the problem appears to be that of maximizing the military effect at the desired time and place, and minimizing such effects where they are not desired. While blast effects are essentially instantaneous and local, the radioactive effects may cover very large areas and may persist for very long periods ranging, in fact, from days in the local fallout effects to many years in atmospheric contamination effects. In other words, radioactive effects force us to bring time in as an additional dimension in dealing with this problem. Moreover, the areas subject to lethal radiation are so large, that in planning the use of these weapons we must carefully weigh the damage to friendly as well as enemy installations.

[…]

“Stated broadly, the problem appears to be that of maximizing the military effect at the desired time and place, and minimizing such effects where they are not desired.”

Unsurprisingly and appropriately, Dr. Braun has faced objections which, when offered in a spirit of scientific inquiry, he seems to welcome. Social media is an arena where attempts at conversation or debate are as likely to come to the attention of people who are uninformed yet confident in their ignorance as they are to reach a peer who knows what you’re talking about. Among the informed challenges (as opposed to random objections and, potentially, IDF bots) were questions about the radiation levels: shouldn’t they be much higher? Dr. Braun’s answer, based on his understanding of the effects of more advanced designs – the ‘Housatonic’ class – is no; the Tartus detonation represents the first use of a new type of weapon. If he is right, we have entered a more dangerous phase of the nuclear era, in which the use of nuclear weapons becomes more attractive because the goal of ‘maximizing the military effects while minimizing undesired effects’ has been achieved.


The March 6, 2022 edition of the BBC’s ‘A Point of View’ radio program featured British novelist and essayist Will Self reading his essay ‘The Return of the Bomb’. Self used the Russian invasion of Ukraine, then only a month old, and statements President Putin made at the time about Russia’s readiness to use nuclear weapons, to discuss what Self called the ‘60th year of the Arkhipov age’. Arkhipov, as in Vasily Arkhipov, the Soviet naval officer who, at a crucial moment in the Cuban Missile Crisis of 1962, prevented the firing of a nuclear torpedo at US naval vessels, which surely would have led to full nuclear war and an end to all things. This act, Self accurately tells us, should have earned Arkhipov a special place of honor – a place he has not been given, certainly not in ‘the West’.

Discussing the contradictions of the MAD doctrine we were told maintained a sort of nervous equilibrium, Self stated:

“One of the curious things about the doctrine [of mutually assured destruction] is that it assumes nation states, and even empires, behave as rational, self interested individuals, while the Arkhipov incident tells us that in fact, armageddon is often only averted by actual individuals who will rebel against groupthink. Another paradox of MAD besides its worrying acronym, is that it relies on hostile powers’ motivations and dispensations being transparent to one another. However, what we know from the record, is that both the possibility of nuclear war and its avoidance during the Cuban crisis were a function of ignorance and misreading of intelligence.”

If Dr. Braun is correct and a precision type of nuclear weapon was used in Tartus, Syria – a productionized refinement of what was demonstrated in the ‘Housatonic’ test 62 years ago – then we have exited the MAD era (perhaps, as Self notes, we were never in it) and entered an age in which nuclear weapons become a regular part of military action.

New world order indeed.


References

George H.W. Bush New World Order Speech

https://bush41library.tamu.edu/archives/public-papers/2217

The Third Nuclear Age

The Effects of Nuclear Weapons

https://www.atomicarchive.com/resources/documents/effects/glasstone-dolan/index.html

Operation Dominic

https://en.wikipedia.org/wiki/Operation_Dominic

An Investigation of the World’s Most Advanced Nuclear Weapon

Will Self: The Return of the Bomb

https://www.bbc.co.uk/programmes/m0014xyd

Video of Tartus Explosion:

https://twitter.com/i/status/1872739489858371867

Dr. Braun’s Bio

https://www.geophysical-forensics.ch/about.html

Vasily Arkhipov

https://en.wikipedia.org/wiki/Vasily_Arkhipov

Cuban Missile Crisis

https://en.wikipedia.org/wiki/Cuban_Missile_Crisis

We Will Demand it For You. On AI and Nuclear Power

I vividly remember the Three Mile Island incident which, to date, remains the most severe accident in US commercial nuclear plant history. The military’s own radiation-soaked history, still mostly classified, surely includes even darker moments. At the time, I was a boy who, among other things, studied nuclear energy. We all need hobbies, and learning about reactors was one of mine. Softball, lemonade and subcritical atomics; a good childhood, various things considered. Once the story broke on local news in Philadelphia – on what I recall as a crisp March day in 1979 – that TMI, as it was known, was in trouble, the adults in my life, at church and school and in my family, aware of my interests, turned to me to explain what it all meant. Would it explode, like the warhead of a Titan II meant for Moscow? Or would radiation creep down the Susquehanna River from TMI’s upstate Pennsylvania location, killing us softly? Unexpectedly, I had an audience for ad hoc lectures about failing coolant systems.

What motivated those adults to listen to a child was unease, approaching terror. That was the dominant emotion. Quietly managed, ever present unease. It was appropriate. How close we came, we now know, to a full meltdown, a Chernobyl-level event.

***

TMI recently came back to my thoughts, like a suddenly remembered nightmare, because of news stories that Microsoft, claiming an acute need for electrical power to supply its ‘AI’ data centers, had signed an agreement with Constellation Energy, the plant’s owner, to re-open one of its reactors. 

Here’s an excerpt from the Financial Times article, ‘Microsoft in deal for Three Mile Island nuclear power to meet AI demand’ –

Constellation Energy will reopen the Three Mile Island nuclear plant in Pennsylvania to provide power to Microsoft as the tech giant scours for ways to satisfy its soaring energy demand while keeping its emissions in check.

The companies on Friday unveiled a 20-year power supply deal which will entail Constellation reopening Unit 1 of the nuclear facility which was shuttered in 2019, in what would be the second such reopening of a plant in the US.

Three Mile Island’s second unit, which was closed in 1979 after a partial meltdown that led to the most serious nuclear accident in US history, will remain closed.

“The decision here is the most powerful symbol of the rebirth of nuclear power as a clean and reliable energy source,” said Constellation chief executive Joe Dominguez on a call with investors.

[…]

As I began writing this essay, I tried to think of an appropriate introduction, perhaps a quote from Philip K. Dick, whose work is a meditation on technology and madness, leitmotifs of our barbarous era. In the end, I decided to let the situation’s dangerous absurdity speak for itself.

Let’s, then, state the absurd: Microsoft and its ‘hyper-scale’ competitors (more like co-conspirators, at this point), Amazon and Google, are turning to nuclear power to provide energy for their generative AI data centers. Pause for a moment to reflect on that sentence, which I wrote as plainly as possible, foregoing writerly effects. To some, it’s a dream materialized, the science fiction world they imagined come to life. To more sober minds, it’s a nightmare: an indication of how detached the software wing of capitalism is from the work of providing anything related to the goods and services people and organizations need or want.

It also puts flesh on the bones of that old phrase, ‘late stage capitalism’.

***

No one asked for so-called ‘generative AI’, the marketing name for a collection of algorithmic methods that ingest text, images, sounds and more – primarily from the Internet, without permission or compensation – and iteratively process them using statistics, adjusted by poorly paid workers and computationally kneaded to produce plausible outputs that are sold as products. No one asked for it but, as I’ve discussed in a previous essay, the US tech industry’s key players, like gamblers drunk on hubris and hope, have bet their futures on super profits, courtesy of ‘AI’.
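
For readers who want to see the statistical kernel without the marketing, here is a toy sketch in Python: a bigram model, vastly simpler than a large language model, but built on the same basic move – count which tokens tend to follow which in ingested text, then sample ‘plausible’ continuations from those counts.

```python
import random
from collections import defaultdict

# A toy "word assembler": count which word follows which in a scrap of text,
# then replay those statistics as plausible-sounding output. Large language
# models are vastly more elaborate, but the core move is the same in spirit.
corpus = (
    "the cloud is a business model and the business model is surveillance "
    "and the model is sold as intelligence"
).split()

followers = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word].append(next_word)

def generate(seed: str, length: int = 12) -> str:
    word, output = seed, [seed]
    for _ in range(length):
        options = followers.get(word)
        if not options:          # dead end: no observed continuation
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```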

And, like desperate gamblers who, as their streak of luck ends, insist everyone around them just believe, the tech industry uses its media leverage to push a story: there’s an urgent need for more electricity to power the ‘AI’ the world allegedly clamors for. We are told there is a demand so great that even old nuclear power plants, such as the Three Mile Island facility must be restarted.

“AI demand” is the theme, the leitmotif: a story that ‘demand’ (no numbers are offered) is so extraordinary that an ancient and, indeed, infamous nuclear plant must be resurrected, rising unbidden, like Godzilla, patron saint of the atomic age, from Tokyo Bay. In 1966, Philip K. Dick wrote a novelette titled ‘We Can Remember It for You Wholesale’, the basis for the 1990 action film ‘Total Recall’. Today, looking around at our world, PKD might be inspired to write a sequel: ‘We Will Demand It, For You’.

But what, exactly, is being demanded? Here is how Microsoft describes Copilot, the company’s rebranding of OpenAI’s suite of large language model based systems (ChatGPT is the best known example):

Microsoft Copilot is an AI-powered digital assistant designed to help people with a range of tasks and activities on their devices. It can create drafts of content, suggest different ways to word things you’ve written, suggest and insert images or banners, create PowerPoint presentations from Word documents and many other helpful things.

[…]

Our demand for automated drafts of documents is so incredible, Microsoft tells us, that it is running out of electricity to spark the data centers providing this vital service, and nuclear power – even if supplied by a decades-old plant best known as the site of a partial meltdown – is their, and we’re encouraged to think our, last, best hope to keep the document summaries flowing. In the science fiction stories I read as a boy, nuclear power took humanity to the stars and energized the glowing hearts of robots. In the world crafted by the tech giants, it helps us create pivot tables for spreadsheets the sales team must have, lest darkness fall.

***

As lies go, the tech industry’s promotion of the idea that we are demanding it build more data centers, to host more computational equipment, to produce more ‘generative AI’, for more chatbots and variations thereof, ranks among the most incredible and ridiculous. It seems, however, that we live in an age in which danger, lies and absurdity walk arm in arm, dragging us straight into the abyss. This is the moment in a critical essay when the author is expected to propose solutions, an answer to the question, ‘what is to be done?’.

Instead, I offer a warning: the tech industry cannot be regulated, and ‘ethics’ is only a diversion. Rather than trying to reform this system, monstrous in conception and execution, our efforts would be better spent preparing to circumvent and, eventually, replace it.

References

Three Mile Island Accident

https://en.wikipedia.org/wiki/Three_Mile_Island_accident?wprov=sfti1#

Microsoft’s AI Power Needs Prompt Revival of Three Mile Island Nuclear Plant

Bloomberg

https://www.bloomberg.com/news/articles/2024-09-20/microsoft-s-ai-power-needs-prompt-revival-of-three-mile-island-nuclear-plant?sref=vuYGislZ

Financial Times

https://www.ft.com/content/ddcb5ab6-965f-4034-96e1-7f668bad1801

Why data centers want to have their own nuclear reactors

https://english.elpais.com/technology/2024-04-30/why-data-centers-want-to-have-their-own-nuclear-reactors.html#

About Microsoft Copilot

https://www.microsoft.com/en-us/microsoft-copilot/learn?form=MA13FV

Oracle will use three small nuclear reactors to power new 1-gigawatt AI data center

https://www.tomshardware.com/tech-industry/oracle-will-use-three-small-nuclear-reactors-to-power-new-1-gigawatt-ai-data-center

Amazon Vies for Nuclear-Powered Data Center

https://spectrum.ieee.org/amazon-data-center-nuclear-power

How to Read AI Hype: References

In this video, I walk through the document ‘The Decade Ahead’ by Leopold Aschenbrenner, published at the Situational Awareness dot ai website. In the document, Aschenbrenner makes the usual bold assertions about ‘AGI’ (artificial general intelligence) equaling and, soon, exceeding human cognition. How do you critically read such hype? Let’s go through it.

References

SITUATIONAL AWARENESS: The Decade Ahead

How GPT-3 Works

‘It’s a Scam.’ Accusations of Mass Non-Payment Grow Against Scale AI’s Subsidiary, Outlier AI

Reclaiming AI as a theoretical tool for cognitive science

AI as Stagnation: On Tech Imperialism

Unless you’ve been under a rock, and probably, even if you have, you’ve noticed that ‘AI’ is being promoted as the solution to everything from climate change to making tacos. There’s an old joke: how do you know when a politician is lying? Their mouth is moving. Similarly, anytime businesses relentlessly push something, the first question that should come to mind is: how are they trying to make money?

Microsoft, in particular, has, as the saying goes, gone all in: rebranding its implementation of OpenAI’s ChatGPT large language model based products as Copilot, embedded across Microsoft’s catalog. Leaving aside, for the sake of this essay, the question of what so-called AI actually is (hint: statistics), it’s reasonable, given this push, to ask: what is going on?

Ideology certainly plays a role – that is, the belief (or at least, the assertion) of a loud segment of the tech industry that it is building Artificial General Intelligence: a successor to humanity, genuinely thinking machines. Ideology is an important factor, but it’s more useful to place technology firms such as Microsoft back within capitalism in our thinking, a way to reject the diversions this sector uses to obscure that fact.

To do this, let’s consider Vladimir Lenin’s theory of imperialism as expressed in his essay, ‘Imperialism, the Highest Stage of Capitalism’.

In January of 2023, I published an essay to my blog titled, ChatGPT: Super Rentier.

The thesis of that essay is that Microsoft’s partnership with, and investment in, OpenAI – and the insertion of OpenAI’s large language model software, known as ChatGPT, into Microsoft’s product catalog – was done to create a platform that would make Microsoft a kind of super rentier, or super landlord, of AI systems. Others, sub-rentiers, would build their own platforms using Microsoft’s platform as the backend, making Microsoft the super rentier: the landlord of landlords.
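
Concretely, the layering looks like this from a sub-rentier’s side: an application developer owns no model, but rents completions from a deployment hosted on Microsoft’s Azure OpenAI service and resells the result inside their own product. The sketch below follows Azure’s documented REST pattern, but the resource name, deployment name, key and API version are placeholders – an illustration of the rent relationship, not a drop-in integration.

```python
import os
import requests

# A "sub-rentier" in miniature: this hypothetical app owns no model. It rents
# completions from a deployment living on Microsoft's platform and builds its
# own product (and its own customers' rent) on top of that dependency.
RESOURCE = "example-resource"        # placeholder Azure OpenAI resource name
DEPLOYMENT = "example-deployment"    # placeholder model deployment name
API_VERSION = "2024-02-01"           # placeholder; check the current docs

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)

response = requests.post(
    url,
    headers={"api-key": os.environ["AZURE_OPENAI_KEY"]},
    json={"messages": [{"role": "user", "content": "Summarize this memo."}]},
    timeout=30,
)
print(response.json())
```

Every call up this chain is metered and billed; the sub-rentier’s product, and its customers, pay rent to the landlord of landlords whether the output is useful or not.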

With this in mind, let’s take a look at this visualization of Lenin’s concept of imperialism I cooked up:

For me, the key element is the relationship between the tendency towards monopoly which leads to stagnation (after all, what’s the incentive to stay sharp if you control a market?) and the expansion of capitalist activity to other, weaker territories to temporarily resolve this stagnation – this is the material motive for capitalist imperialism or as Lenin also phrased it, parasitism.

Let’s apply this theory to Microsoft and its push for AI everywhere:

  • Microsoft, as a software firm, once derived most of its profit from selling products such as SQL Server, Exchange Server and the Office suite.
  • This became a near monopoly as Microsoft came to dominate the corporate market for these and other types of what are known as enterprise applications.
  • The monopoly led to stagnation – how many different ways can you try to derive profit from Microsoft Office, for example? By stagnation, I don’t mean that Microsoft stopped making money from its dominance, but that this dominance no longer supported the growth capitalists demand.
  • The answer, for a time, was the subscription model of the Microsoft 365 platform, which moved corporations from a model in which products such as Exchange were hosted in-house in corporate data centers and licensed, to one with a recurring charge for access and a guaranteed revenue stream for Microsoft.
  • No longer could a company buy a copy of a product and use it even after licensing expired. Now, you have to pay up, routinely, to maintain access.
  • After a time, even this led to a near monopoly and the return of stagnation as the market for expansion was saturated.
  • Into this situation, enter ‘AI’.
  • By inserting ‘AI’ – chatbots and image generators – into every product, and pushing its corporate customers to use it, Microsoft is enacting a form of the imperialist expansion Lenin described: a colonization of business processes, education, art, filmmaking, science and more on an unprecedented scale.
  • But what haunts the AI push is the very stagnation it is supposed to remedy.

There is no escape from the stagnation caused by monopoly, only temporary fixes which merely serve to create the conditions for future decay and conflict.

References

ChatGPT

Microsoft Copilot

Imperialism the highest stage of capitalism by VI Lenin

ChatGPT – Super Rentier