Bad ideas, like death, are inevitable and just as inescapable.
The US-based tech industry is a Pandora’s box of bad ideas, unleashed upon an unwilling and unwitting populace, and indeed world, with reckless abandon, scorching lives and the Earth itself. Never mind, they say, we’re building the future.
The latest bad idea to spread dark wings and take flight is that building a super massive data center for ‘AI’ called ‘Stargate’ – a megamachine that will solve all our problems like a resource and real estate devouring Wizard of Oz – is not only good, but essential.
WASHINGTON (AP) — President Donald Trump on Tuesday talked up a joint venture investing up to $500 billion for infrastructure tied to artificial intelligence by a new partnership formed by OpenAI, Oracle and SoftBank.
The new entity, Stargate, will start building out data centers and the electricity generation needed for the further development of the fast-evolving AI in Texas, according to the White House. The initial investment is expected to be $100 billion and could reach five times that sum.
“It’s big money and high quality people,” said Trump, adding that it’s “a resounding declaration of confidence in America’s potential” under his new administration.
Microsoft and OpenAI are reportedly working on a massive datacenter to house an AI-focused supercomputer featuring millions of GPUs. The Information reports that the project could cost “in excess of $115 billion” and that the supercomputer, currently dubbed “Stargate” inside OpenAI, would be U.S.-based.
The report says that Microsoft would foot the bill for the datacenter, which could be “100 times more costly” than some of the biggest operating centers today. Stargate would be the largest in a string of datacenter projects the two companies hope to build in the next six years, and executives hope to have it running by 2028.
[…]
Bad ideas are inevitable but also, apparently, subject to cost overruns.
There are many ways to think and talk about this project, which is certain to fail (and there is news of far less costly methods, making the Olympian spending even more obviously suspicious). For me, the clearest way to understand the Stargate project – and in fact, the entire ‘AI’ land grab – is as an attempt to create guaranteed profit for the tech firms at the commanding heights: Microsoft, OpenAI, Amazon, Oracle and co-conspirators. Capital will flow into these firms whether the system works as advertised or not – i.e. they are paid for both function (such as it is) and malfunction.
This isn’t a new technique. The US defense industry has a long history of stuffing its coffers with cash for delivering weapons systems that work… sometimes. The most infamous example is Lockheed’s F-35 fighter, a project that provides the company with funding for both delivery and correction, as described in the US Government Accountability Office report, ‘F-35 Joint Strike Fighter: More Actions Needed to Explain Cost Growth and Support Engine Modernization Decision’ (May 2023) –
The Department of Defense’s most expensive weapon system—the F-35 aircraft—is now more than a decade behind schedule and $183 billion over original cost estimates.
[…]
That’s a decade and $183 billion of sweet, steady profit, the sort of profit the tech industry has long sought. First there was ‘enterprise software’, then there was subscription-based cloud, both efforts to create ‘growth’ and dependable cash infusions. Now, with Stargate, the industry may have, at last, found its F-35. Unlike the troubled fighter plane, there won’t be any Tom Cruise films featuring the data center. Then again, perhaps there will be. Netflix, like the rest of the industry, is out of ideas.
I vividly remember the Three Mile Island incident which, to date, remains the most severe accident in US commercial nuclear plant history. The military’s own radiation-soaked history, still mostly classified, surely includes even darker moments. At the time, I was a boy who, among other things, studied nuclear energy. We all need hobbies, and learning about reactors was one of mine. Softball, lemonade and subcritical atomics; a good childhood, various things considered. Once the story broke on local news in Philadelphia – on what I recall as a crisp March day in 1979 – that TMI, as it was known, was in trouble, the adults in my life – at church and school and in my family – aware of my interests, turned to me to explain what it all meant. Would it explode, like the warhead of a Titan II meant for Moscow? Or would radiation creep down the Susquehanna River from TMI’s upstate Pennsylvania location, killing us softly? Unexpectedly, I had an audience for ad hoc lectures about failing coolant systems.
What motivated those adults to listen to a child was unease, approaching terror. That was the dominant emotion. Quietly managed, ever present unease. It was appropriate. How close we came, we now know, to a full meltdown, a Chernobyl-level event.
***
TMI recently came back to my thoughts, like a suddenly remembered nightmare, because of news stories that Microsoft, claiming an acute need for electrical power to supply its ‘AI’ data centers, had signed an agreement with Constellation Energy, the plant’s owner, to re-open one of its reactors.
Here’s an excerpt from the Financial Times article, ‘Microsoft in deal for Three Mile Island nuclear power to meet AI demand’ –
Constellation Energy will reopen the Three Mile Island nuclear plant in Pennsylvania to provide power to Microsoft as the tech giant scours for ways to satisfy its soaring energy demand while keeping its emissions in check.
The companies on Friday unveiled a 20-year power supply deal which will entail Constellation reopening Unit 1 of the nuclear facility which was shuttered in 2019, in what would be the second such reopening of a plant in the US.
Three Mile Island’s second unit, which was closed in 1979 after a partial meltdown that led to the most serious nuclear accident in US history, will remain closed.
“The decision here is the most powerful symbol of the rebirth of nuclear power as a clean and reliable energy source,” said Constellation chief executive Joe Dominguez on a call with investors.
[…]
As I began writing this essay, I tried to think of an appropriate introduction, perhaps a quote from Philip K. Dick, whose work is a meditation on technology and madness, leitmotifs of our barbarous era. In the end, I decided to let the situation’s dangerous absurdity speak for itself.
Let’s, then, state the absurd: Microsoft and its ‘hyper-scale’ competitors (more co-conspirators, at this point), Amazon and Google, are turning to nuclear power to provide energy for their generative AI data centers. Pause for a moment to reflect on that sentence, which I wrote as plainly as possible, foregoing writerly effects. To some, it’s a dream materialized, the science fiction world they imagined, come to life. To more sober minds, it’s a nightmare; an indication of how detached the software wing of capitalism is from the work of providing anything related to the goods and services people and organizations need or want.
It also puts flesh on the bones of that old phrase, ‘late stage capitalism’.
***
No one asked for so-called ‘generative AI’, the marketing name for a collection of algorithmic methods that ingest text, images, sounds etc. – primarily from the Internet, without permission or compensation – and iteratively process them using statistics, adjusted by poorly paid workers and computationally kneaded, to produce plausible outputs that are sold as products. No one asked for it, but as I’ve discussed in a previous essay, the US tech industry’s key players, like gamblers drunk on hubris and hope, have bet their futures on super profits, courtesy of ‘AI’.
And, like desperate gamblers who, as their streak of luck ends, insist everyone around them just believe, the tech industry uses its media leverage to push a story: there’s an urgent need for more electricity to power the ‘AI’ the world allegedly clamors for. We are told there is a demand so great that even old nuclear power plants, such as the Three Mile Island facility, must be restarted.
“AI demand” is the theme, the leitmotif; a story that ‘demand’ (no numbers are offered) is extraordinary, requiring that an ancient and indeed, infamous, nuclear plant must be resurrected, rising unbidden, like Godzilla, patron saint of the atomic age, from Tokyo Bay. In 1966, Philip K. Dick wrote a novelette titled ‘We Can Remember It for You Wholesale’, the basis for the 1990 action film, ‘Total Recall’. Today, looking around at our world, PKD might be inspired to write a sequel, ‘We Will Demand It, For You’.
But what, exactly, is being demanded? Here is how Microsoft describes Copilot, the company’s rebranding of OpenAI’s suite of large language model based systems (ChatGPT is the best known example):
Microsoft Copilot is an AI-powered digital assistant designed to help people with a range of tasks and activities on their devices. It can create drafts of content, suggest different ways to word things you’ve written, suggest and insert images or banners, create PowerPoint presentations from Word documents and many other helpful things.
[…]
Our demand for creating automated drafts of documents is so incredible, Microsoft tells us, that it is running out of electricity to spark the data centers providing this vital service. Nuclear power, even if supplied by a decades-old plant best known for being the site of a partial meltdown, is their – and we’re encouraged to think, our – last, best hope to keep the document summaries flowing. In the science fiction stories I read as a boy, nuclear power took humanity to the stars and energized the glowing hearts of robots. In the world crafted by the tech giants, it helps us create pivot tables for spreadsheets the sales team must have, lest darkness fall.
***
As lies go, the tech industry’s promotion of the idea that we’re demanding it build more data centers, to host more computational equipment, to produce more ‘generative AI’, for more chatbots and variations thereof, ranks among the most incredible and ridiculous. It seems, however, that we live in an age in which danger, lies and absurdity walk arm in arm, dragging us straight into the abyss. This is the moment in a critical essay when it is expected that the author proposes solutions, an answer to the question, ‘what is to be done?’.
Instead of that I offer a warning: the tech industry cannot be regulated and ‘ethics’ is only a diversion. Instead of trying to reform this system, monstrous in conception and execution, our efforts would be better spent preparing to circumvent and eventually, replace it.
In this video, I walk through the document ‘The Decade Ahead’ by Leopold Aschenbrenner, published at the Situational Awareness dot ai website. In the document, Aschenbrenner makes the usual bold assertions about ‘AGI’ (artificial general intelligence) equaling and soon, exceeding human cognition. How do you critically read such hype? Let’s go through it.
Unless you’ve been under a rock, and probably, even if you have, you’ve noticed that ‘AI’ is being promoted as the solution to everything from climate change to making tacos. There’s an old joke: how do you know when a politician is lying? Their mouth is moving. Similarly, anytime businesses relentlessly push something, the first question that should come to mind is: how are they trying to make money?
Microsoft, in particular, has, as the saying goes, gone all in on rebranding its implementation of OpenAI’s ChatGPT large language model based products as Copilot, embedded across Microsoft’s catalog. Leaving aside, for the sake of this essay, the question of what so-called AI actually is (hint: statistics), it’s reasonable to ask, considering this push: what is going on?
Ideology certainly plays a role.
That is, the belief (or at least, the assertion) of a loud segment of the tech industry that they are building Artificial General Intelligence – a successor to humanity, genuinely thinking machines.
Ideology is an important factor, but it’s more useful to place technology firms such as Microsoft back within capitalism in our thinking. This is a way to reject the diversions this sector uses to obscure that fact.
To do this, let’s consider Vladimir Lenin’s theory of imperialism as expressed in his essay, ‘Imperialism, the Highest Stage of Capitalism’.
In January of 2023, I published an essay to my blog titled, ChatGPT: Super Rentier.
The thesis of that essay is that Microsoft’s partnership with, and investment in, OpenAI – and the insertion of OpenAI’s large language model software, known as ChatGPT, into Microsoft’s product catalog – was done to create a platform that would make Microsoft a kind of super rentier, or super landlord, of AI systems. Others, sub-rentiers, would build their platforms using Microsoft’s platform as the backend, making Microsoft the super rentier – the landlord of landlords.
With this in mind, let’s take a look at this visualization of Lenin’s concept of imperialism I cooked up:
For me, the key element is the relationship between the tendency towards monopoly, which leads to stagnation (after all, what’s the incentive to stay sharp if you control a market?), and the expansion of capitalist activity to other, weaker territories to temporarily resolve this stagnation – this is the material motive for capitalist imperialism or, as Lenin also phrased it, parasitism.
Let’s apply this theory to Microsoft and its push for AI everywhere:
Microsoft, as a software firm, once derived most of its profit from selling products such as SQL Server, Exchange Server and the Office Suite.
This became a near monopoly for Microsoft as it dominated the corporate market for these and other types of what’s known as enterprise applications.
This monopoly led to stagnation – how many different ways can you try to derive profit from Microsoft Office, for example? By stagnation, I don’t mean that Microsoft did not make money or profit from its dominance, but this dominance no longer supported the growth capitalists demand.
The answer, for a time, was the subscription model of the Microsoft 365 platform, which moved corporations from a model in which products such as Exchange would be hosted in-house in corporate data centers and licensed, to one in which there was a recurring charge for access and a guaranteed revenue stream for Microsoft.
No longer was it possible for a company to buy a copy of a product and use it even after licensing expired. Now, you have to pay up, routinely, to maintain access.
After a time, even this led to a near monopoly and the return of stagnation as the market for expansion was saturated.
Into this situation, enter ‘AI’.
By inserting AI – chatbots and image generators – into every product and pushing for this to be used by its corporate customers, Microsoft is enacting a form of the imperialist expansion Lenin described – it is a colonization of business process, education, art, filmmaking, science and more on an unprecedented scale.
But what haunts the AI push is the very stagnation it is supposed to remedy.
There is no escape from the stagnation caused by monopoly, only temporary fixes which merely serve to create the conditions for future decay and conflict.
Since Oct 7, 2023, my taste for debunking tech industry hype has faded like morning mist, exposed to strong sunlight. Although the need for analysis and critique has never been more urgent – particularly of the linkages between the software cartel and state violence, counterinsurgency and related matters – my interest in dissecting false claims evaporated as my thoughts turned to algorithmic targeting platforms and armed quadcopter drones.
Microsoft’s recent announcement of a product named Recall for Copilot Plus PCs, which reportedly features built-in ‘AI’ hosted on a ‘Neural Processing Unit’, provides us with an opportunity to take a look at the political economy of the technology industry in the era of decline.
I say ‘decline’, because Recall, despite the hosannas we’re hearing from the tech press – Silicon Valley’s Pravda – does not represent an advance but a rearguard move to accomplish what I see as two goals:
Increase and guarantee Microsoft’s ‘AI’ related revenue stream by using its dominance of the PC operating system market (both consumer and corporate) to force a failing product on customers (Tesla’s so-called Full Self-Driving software provides another example)
Increase ‘AI’ related revenue by marketing Recall as a surveillance tool to governments and corporations
On point one: Despite a massive investment in OpenAI, including hosting and operating Azure data centers for the ChatGPT suite of resource destroying text calculators and embedding the large language model in flagship products such as Azure and Microsoft 365, it’s not clear Microsoft (or any company) has seen a return on its ‘AI’ investment. Quite the contrary. Recall creates a compelled revenue stream as corporations refresh their fleets of laptops. Microsoft has tried to recoup costs via high prices for products such as GitHub Copilot but this does not seem to be working as hoped; organizations can opt out.
On point two: In a Wall Street Journal interview, Microsoft CEO Satya Nadella described Recall’s capabilities as a “photographic memory”: recording every image and action on a PC, then using an onboard neural processing unit to run this data (supposedly kept on the machine) through a model or models to enable more sophisticated, ‘AI’ enabled searching.
This seems like a lot of engineering effort to make it easier to find a photo you took at the beach a few years ago. Corporations don’t care about making anyone’s life easier so we must look for more adult, power-aware explanations for what we’re seeing here.
Consider the precedent of Windows Vista, released in 2006. Vista, which employed a complex method for enforcing corporate digital rights, was created by Microsoft to attract the attention of the film and music industries as the preferred way to exert command and control over our use of ‘content’. With Vista, Microsoft’s goal was to become the gatekeeper for the digital distribution of entertainment and derive profit from that position. This didn’t work out as planned but the effort is a key indicator of intent. I interpret Recall as being the ‘AI’ variant of the gatekeeper gambit.
We can safely ignore happy talk and promises of privacy to see what is right before us: a system for recording everything you do will be marketed to businesses and governments as a means of mass surveillance. What was once the description of malware has, in the age of ‘AI’, become a product. In its quest for profits, Microsoft is creating a difficult-to-escape, hardware-based, globally distributed monitoring platform. We can be certain that its competitors, such as Apple, are making similar moves.
***
When thinking about the tech industry and its endless stream of product announcements, particularly about ‘AI’, a good rule of thumb is to ignore whatever glittering words are used and ask one question: how do they plan to make money? But not just ‘money’ in the abstract – profit. Looking at Recall for Windows 11, a follow-the-money approach leads directly to what Yasha Levine called ‘Surveillance Valley’.
Whether large language models can ‘reason’ is a hotly debated topic in so-called ‘tech’ circles and the academic and media groups that orbit that world like one of Jupiter’s radiation-blasted moons. I dropped the phrase ‘can large language models reason’ into Google (that rusting machine) and got this result:
This is only a small sample. According to Google there are “About 352.000.000 results.” We can safely conclude from this, and the back and forth that endlessly repeats on Twitter in groups that discuss ‘AI’ that there is a lot of interest in arguing the matter: pro and con. Is this debate, if indeed it can be called that, the least bit important? What is at stake?
***
According to ‘AI’ industry enthusiasts, nearly everything is at stake; a bold new world of thinking machines is upon us. What could be more important? To answer this question, let’s do another Google search, this time, for the phrase, Project Nimbus:
The first result returned was a Wikipedia article, which starts with this:
Project Nimbus (Hebrew: פרויקט נימבוס) is a cloud computing project of the Israeli government and its military. The Israeli Finance Ministry announced in April 2021, that the contract is to provide “the government, the defense establishment, and others with an all-encompassing cloud solution.” Under the contract, the companies will establish local cloud sites that will “keep information within Israel’s borders under strict security guidelines.”
What sorts of things does Israel do with the system described above? We don’t have precise details but there are clues, such as what’s described in this excerpt from the +972 Magazine article, ‘“A mass assassination factory”: Inside Israel’s calculated bombing of Gaza’ –
According to the [+972 Magazine] investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”
I wrote about Habsora in this essay. When I think of the ways algorithmic systems are used – increasingly to not only control but also to kill, my interest in the LLM debate fades, like the memory of a dream. It seems light and airy, unsuitable for the age.
***
History and legend tell us that in ancient Athens there was a place called the Lyceum, founded by Aristotle, where the techniques of the Peripatetic school were practiced. Peripatetic means, more or less, ‘walking about’, which reflects the method: philosophers and students, mingling freely, discussing ideas. There are centuries of accumulated hagiography about this school. No doubt it was nice for those not subject to the slave system of ancient Greece.
Similarly, debates about whether or not LLMs can reason are nice for those of us not subject to hellfire missiles, fired by Apache helicopters sent on their errands based on targeting algorithms. But, I am aware of the pain of people who are subject to those missiles. I can’t unsee the death facilitated by computation.
This is why I have to leave the debating square, the social media crafted lyceum. Do large language models reason? No. But even spending time debating the question offends me now. A more pressing question is what the people building the systems killing our fellow human beings are thinking. What is their reasoning?
The IDF assault on Nasser Hospital in southern Gaza joined a long and growing list of bloody infamies committed by Israel since Oct 7, 2023. During a Democracy Now interview, broadcast on Feb 15, 2024, Dr. Khaled Al Serr, who was later kidnapped by the IDF, described what he saw:
“Actually, the situation here in the hospital at this moment is in chaos. All of the patients, all the relatives, refugees and also the medical staff are afraid because of what happened. We could not imagine that at any time the Israeli army will bomb the hospital directly, and they will kill patients and medical personnel directly by bombing the hospital building. Yesterday also, Israeli snipers and Israeli quadcopters, which is a drone, carry on it an AR, and with a sniper, they shot all over the building. And they shot my colleague, Dr. Karam. He has a shrapnel inside his head. I can upload for you a CT for him. You can see, alhamdulillah, it was superficial, nothing serious. But a lot of bullets inside their bedroom and the restroom.”
The Israeli military is using quadcopters, armed with sniper rifles, as part of its assassination arsenal. These remote operated drones, which possess limited but still important automatic capabilities (flight stability, targeting persistence) are being used in the genocidal war in Gaza and the war between Russia and Ukraine to name two, prominent examples. They are likely to make an appearance near you in some form, soon enough.
I haven’t seen reporting on the type of quadcopter used but it’s probably the Smash Dragon, a model produced by the Israeli firm Smart Shooter which, on its website, describes its mission:
SMARTSHOOTER develops state-of-the-art Fire Control Systems for small arms that significantly increase weapon accuracy and lethality when engaging static and moving targets, on the ground and in the air, day and night.
Here is a promotional video for the Smash Dragon:
Smart Shooter’s product, and profit source, is the application of computation to the tasks of increasing accuracy and automating weapon firing. One of their ‘solutions’ (solving, apparently, the ‘problem’ of people being alive) is a fixed position ‘weapon station’ called the Smash Hopper that enables a distant operator to target-lock the weapon on a person, initiating the firing of a constant stream of bullets. For some reason, the cartoonish word ‘smash’ is popular with the Smart Shooter marketing team.
‘AI’, as used under the current global order, serves three primary purposes: control via sorting, anti-labor propaganda and obscuring culpability. Whenever a hospital deploys an algorithmic system, rather than healthcare worker judgment, to decide how long patients stay, sorting is being used as a means of control, for profit. Whenever a tech CEO tells you that ‘AI’ can replace artists, drivers, filmmakers, etc. the idea of artificial intelligence is employed as an anti-labor propaganda tool. And whenever someone tells you that the ‘AI’ has decided, well, anything, they are trying to hide the responsibility of the people behind the scenes, pushing algorithmic systems on the world.
The armed quadcopter brings all of these purposes together, wrapped in a blood stained ribbon. Who lives and who dies is decided via remote control while the fingers pulling the trigger, and the people directing them are hidden from view. These systems are marketed as using ‘AI’ implying machines are making life and death decisions rather than people.
In the introduction to his 2023 book, The Palestine Laboratory, which details Israel’s role in the global arms trade and use of the Palestinians as lethal examples, journalist Antony Loewenstein describes a weapons demonstration video attended by Andrew Feinstein in 2009:
“Israel is admired as a nation that stands on its own and is unashamed in using extreme force to maintain it. [Andrew Feinstein is] a former South African politician, journalist, and author. He told me about attending the Paris Air Show in 2009, the world’s largest aerospace industry and air show exhibitions. [The Israel-based defense firm Elbit Systems] was showing a promotional video about killer drones, which have been used in Israel’s war against Gaza and over the West Bank.
The footage had been filmed a few months before and showed the reconnaissance of Palestinians in the occupied territories. A target was assassinated. […] Months later, Feinstein investigated the drone strike and discovered that the incident featured in the video had killed a number of innocent Palestinians, including children. This salient fact wasn’t featured at the Paris Air Show. “This was my introduction to the Israeli arms industry and the way it markets itself.”
The armed quadcopter drone, one of the fruits of an industry built on occupation and death, can be added to the long list of the harms of computation. ‘Keep watching the skies!’ someone said at the end of a 1950s science fiction film whose name escapes me. Never mind though, the advice stands.
Confirmed: Dr. Khaled Al Serr, who took a lead role in trying to inform western press of the consequences of Israel’s attack on Nasser Hospital, has been abducted. Statement: https://t.co/ETI2hbcegz pic.twitter.com/emMypy29bD
This is written in the spirit of the Request for Comments memorandums that shaped the early Internet. RFCs, as they are known, are submitted to propose a technology or methodology and gather comments/corrections from relevant and knowledgeable community members in the hope of becoming a widely accepted standard.
Purpose
This is a consideration of the information technology options for politically and socially active organizations. It’s also a high level overview of the technical landscape. The target audience is technical decision makers in groups whose political commitments challenge the prevailing order, focused on liberation. In this document, I will provide a brief history of past patterns and compare these to current choices, identifying the problems of various models and potential opportunities.
Alongside this blog post there is a living document posted for collaboration here. I invite a discussion of ideas, methods and technologies I may have missed or might be unaware of to improve accuracy and usefulness.
Being Intentional About Technology Choices
It is a truism that modern organizations require technology services. Less commonly discussed are the political, operational, cost and security implications of this dependence from the perspective of activists. It’s important to be intentional about technological choices and deployments with these and other factors in mind. The path of least resistance, such as choosing Microsoft 365 for collaboration rather than building on-premises systems, may be the best, or least terrible choice for an organization but the decision to use it should come after weighing the pros and cons of other options. What follows is not an exhaustive history; I am purposefully leaving out many granular details to get to the point as efficiently as possible.
A Brief History of Organizational Computing
By ‘organizational computing’ I’m referring to the use of digital computers arranged into service platforms by non-governmental and non-military organizations. In this section, there is a high level walk through of the patterns which have been utilized in this sector.
Mainframes
IBM 360 in Computer Room – mid 1960s
The first use of digital computing at scale was the deployment of mainframe systems as centrally hosted resources. User access, limited to specialists, was provided via a time sharing method in which ‘dumb’ terminals displayed the results of programs and enabled input (punch cards were also used for inputting program instructions). One of the most successful systems was the IBM 360 (operational from 1965 to 1978). Due to expense, the typical customers were large banks, universities and other organizations with deep pockets.
Client Server
Classic Client Server Architecture (Microsoft)
The introduction of personal computers in the 1980s created the raw material for the development of networked, smaller scale systems that could supplement mainframes and provide organizations with the ability to host relatively modest computing platforms that suited their requirements. By the 1990s, this became the dominant model used by organizations at all scales (mainframes remain in service but the usage profile became narrower – for example, to run applications requiring greater processing capability than what’s possible using PC servers).
The client server model era spawned a variety of software applications to meet organizational needs such as email servers (for example, Sendmail and Microsoft Exchange), database servers (for ex. Postgres and SQL Server), web servers such as Apache and so on. Companies such as Novell, Cisco, Dell and Microsoft rose to prominence during this time.
As the client server era matured and the need for computing power grew, companies like VMware sold platforms that enabled the creation of virtual machines (software mimics of physical servers). Organizations that could not afford to own or rent large data centers could deploy the equivalent of hundreds or thousands of servers within a smaller number of more powerful (in terms of processing capacity and memory) computing systems running VMware’s ESX software platform. Of course, the irony of this return to something like a mainframe was not lost on information technology workers whose careers spanned the mainframe to client server era.
Cloud computing
Cloud Pattern (Amazon Web Services)
Virtualization, combined with the improved Internet access of the early 2000s, gave rise to what is now called ‘cloud.’ Among information technology workers, it was popular to say ‘there is no cloud, it’s just someone else’s computer.’ Overconfident cloud enthusiasts considered this to be the complaint of a fading old guard but it is undeniably true.
The Cloud Model
There are four modes of cloud computing:
Infrastructure as a Service – IaaS: (for example, building virtual machines on platforms such as Microsoft Azure, Amazon Web Services or Google Cloud Platform)
Platform as a Service – PaaS: (for example, databases offered as a service utility, eliminating the need to create a server as host)
Software as a Service – SaaS: (platforms like Microsoft 365 fall into this category)
Function as a Service – FaaS: (focused on deployment using software development – ‘code’ – alone, with no infrastructural management responsibilities; see the sketch after this list)
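To make the last mode concrete, here is a minimal sketch of what ‘code alone’ means in practice. It assumes a Lambda-style event/context signature; the exact shapes vary by provider, and the field names here are hypothetical:

```python
import json

# A minimal, hypothetical FaaS handler: the provider supplies the runtime,
# scaling and HTTP plumbing; the organization supplies only this function.
def handler(event, context):
    # Read a name from the incoming request, if one was supplied
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    # Return an HTTP-style response; there is no server, OS or container to manage
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"greeting": f"Hello, {name}"}),
    }
```

Everything outside the function – servers, operating systems, scaling, the HTTP front door – is the provider’s responsibility; that is both the convenience being sold and the dependency being created.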
A combination of perceived (but rarely realized) convenience, marketing hype and mostly unfulfilled promises of lower running costs have made the cloud model the dominant mode of the 2020s. In the 1990s and early 2000s, an organization requiring an email system was compelled to acquire hardware and software to configure and host their own platform (the Microsoft Exchange email system running on Dell servers, or on VMware virtual machines, was a common pattern). The availability of Office 365 (later, Microsoft 365) and Google’s G Suite provided another, attractive option that eliminated the need to manage systems while providing the email function.
A Review of Current Options for Organizations
Although tech industry marketing presents new developments as replacing old, all of the pre-cloud patterns mentioned above still exist. The question is, what makes sense for your organization from the perspectives of:
Cost
Operational complexity
Maintenance complexity
Security and exposure to vulnerabilities
Availability of skilled workers (related to the ability to effectively manage all of the above)
We needn’t include mainframes in this section since they are cost-prohibitive and, today, intended for specialized, high performance applications.
Client Server (on-premises)
By ‘on-premises’ we are referring to systems that are not cloud-based. Before the cloud era, the client server model was the dominant pattern for organizations of all sizes. Servers can be hosted within a data center the organization owns or within rented space in a colocation facility (a business that provides rented space for the servers of various clients).
Using a client server model requires employing staff who can install, configure and maintain systems. These skills were once common, indeed standard, and salaries were within the reach of many mid-size organizations. The cloud era has made these skills harder to come by (although there are still many skilled and enthusiastic practitioners). A key question is, how much investment does your organization want to make in the time and effort required to build and manage its own system? Additional questions for consideration come from software licensing and software and hardware maintenance cycles.
Sub-categories of client server to consider
Virtualization and Hyper-converged hardware
As mentioned above, the use of virtualization systems, offered by companies such as VMware, was one method that arose during the heyday of client server to address the need for more concentrated computing power in a smaller data center footprint.
Hyper-converged infrastructure (HCI) systems, combining compute, storage and networking into a single hardware chassis, are a further development of this method. HCI systems and virtualization reduce the required operational overhead. More about this later.
Hybrid architectures
A hybrid architecture uses a mixture of on-premises and off-site, typically ‘cloud’ based systems. For example, an organization’s data might be stored on-site but the applications using that data are hosted by a cloud provider.
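As a minimal sketch of this pattern (the hostname, path and token are hypothetical stand-ins; in practice the link would run over a VPN or private circuit), consider a cloud-hosted application that reads records from a data store that never leaves the organization’s premises:

```python
import os
import requests

# Hypothetical on-premises endpoint; the cloud-hosted application reaches
# back into the organization's own data center for its data.
ON_PREM_API = os.environ.get("ON_PREM_API", "https://data.internal.example.org")
API_TOKEN = os.environ["ON_PREM_API_TOKEN"]  # injected by the hosting platform

def fetch_members(page: int = 1) -> list[dict]:
    """Pull one page of membership records from the on-premises store."""
    resp = requests.get(
        f"{ON_PREM_API}/v1/members",
        params={"page": page},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["members"]
```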
Cloud
Software as a Service
Software as a Service platforms such as Microsoft 365 are the most popular cloud services used by firms of all types and sizes, including activist groups. The reasons are easy to understand:
Email services without the need to host an email server
Collaboration tools (SharePoint and MS Teams for example) built into the standard licensing schemes
Lower (but not zero) operational responsibility
Hardware maintenance and uptime are handled by the service provider
The convenience comes at a price, both financial, as licensing costs increase, and operational, inasmuch as organizations tend to place all of their data and workflows within these platforms, creating deep dependencies.
Build Platforms
The use of ‘build platforms’ like Azure and AWS is more complex than the consumption model of services such as Microsoft 365. Originally, these were designed to meet the needs of organizations that have development and infrastructure teams and host complex applications. More recently, the ‘AI’ hype push has made these platforms Trojan horses for pushing hyperscale algorithmic platforms (note, as an example, Microsoft’s investment in and use of OpenAI’s large language model kit). The most common pattern is a replication of large-scale on-premises architectures using virtual machines on a cloud platform.
Although marketed as superior to, and simpler than, on-premises options, cloud platforms require as much, and often more, technical expertise. Cost overruns are common; cloud platforms make it easy to deploy new things but each item generates a cost. Even small organizations can create very large bills. Security is another factor; configuration mistakes are common and there are many examples of data breaches produced by error.
Private Cloud
The potential key advantage of the cloud model is the ability to abstract technical complexity. Ideally, programmers are able to create applications that run on hardware without the requirement to manage operating systems (a topic outside of the scope of this document). Private cloud enables the staging of the necessary hardware on-premises. A well known example is OpenStack, which is very technically challenging. Commercial options include Microsoft’s Azure Stack, which extends the Azure technology method to hyper-converged infrastructure (HCI) hosted within an organization’s data center.
Information Technology for Activists – What is To Be Done?
In the recent past, the answer was simple: purchase hardware and software and install and configure it with the help of technically adept staff, volunteers or a mix. In the 1990s and early 2000s it was typical for small to midsize organizations to have a collection of networked personal computers connected to a shared printer within an office. Through the network (known as a local area network or LAN) these computers were connected to more powerful computers called servers that provided centralized storage and the means through which each individual computer could communicate in a coordinated manner and share resources. Organizations often hosted their own websites which were made available to the Internet via connections from telecommunications providers.
Changes in the technology market since the mid 2000s, pushed to increase the market dominance and profits of a small group of firms (primarily, Amazon, Microsoft and Google) have limited options even as these changes appear to offer greater convenience. How can these constraints be navigated?
Proposed Methodology and Doctrines
Earlier in this document, I mentioned the importance of being intentional about technology usage. In this section, more detail is provided.
Let’s divide this into high level operational doctrines and build a proposed architecture from that.
First Doctrine: Data Sovereignty
Organizational data should be stored on-premises using dedicated storage systems rather than in a SaaS such as Microsoft 365 or Google Workspace
Second Doctrine: Bias Towards Hybrid
By ‘hybrid’ I am referring to system architectures that utilize a combination of on-premises and ‘cloud’ assets
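To illustrate how the first two doctrines combine in practice, here is a minimal sketch, assuming the Python cryptography library and hypothetical paths: data is encrypted on-premises, and only ciphertext is ever staged for copying to rented storage.

```python
from pathlib import Path
from cryptography.fernet import Fernet

def load_key(key_path: Path) -> Fernet:
    # The key (generated once with Fernet.generate_key()) stays on-premises;
    # without it, the off-site copy is unreadable noise.
    return Fernet(key_path.read_bytes())

def encrypt_for_offsite(source: Path, staging_dir: Path, fernet: Fernet) -> Path:
    """Encrypt a file locally; only the result should leave the building."""
    ciphertext = fernet.encrypt(source.read_bytes())
    out = staging_dir / (source.name + ".enc")
    out.write_bytes(ciphertext)
    return out  # this file, not the original, is what an upload job would copy
```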
Third Doctrine: Bias Towards System Diversity
This might also be called the right tool for the right job doctrine. After consideration of relevant factors (cost, technical ability, etc) an organization may decide to use Microsoft 365 (for example) to provide some services but other options should be explored in the areas of:
Document management and related real time collaboration tooling
Online Meeting Platforms
Database platforms
Email platforms
Commercial platforms offer integration methods that make it possible to create an aggregated solution from disparate tools; a sketch of this kind of ‘glue’ follows.
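Here is a minimal sketch of such glue, assuming two hypothetical endpoints: a webhook event from a document platform is relayed to a chat platform’s incoming-webhook URL. Payload shapes vary by vendor; the field names are illustrative.

```python
import requests

# Hypothetical incoming-webhook URL for the organization's chat platform
CHAT_WEBHOOK_URL = "https://chat.example.org/hooks/abc123"

def relay_document_event(event: dict) -> None:
    """Turn a document-updated event into a chat notification."""
    message = f"{event['editor']} updated '{event['title']}' at {event['timestamp']}"
    resp = requests.post(CHAT_WEBHOOK_URL, json={"text": message}, timeout=5)
    resp.raise_for_status()
```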
These doctrines can be applied as guidelines for designing an organizational system architecture:
The above is only one option. More are possible depending on the aforementioned factors of:
Cost
Operational complexity
Maintenance complexity
Security and exposure to vulnerabilities
Availability of skilled workers (related to the ability to effectively manage all of the above)
I invite others to add to this document to improve its content and sharpen the argument.
Activist Documents and Resources Regarding Alternative Methods
Counter Cloud Action Plan – The Institute for Technology In the Public Interest
On September 14, 2023, while touring Twitter the way you might survey the ruins of Pompeii, I came across a series of posts responding to this statement from the EU Commission account:
Mitigating the risk of extinction from AI should be a global priority…
What attracted critical attention was the use of the phrase ‘risk of extinction’, a fear of which, as Dr. Timnit Gebru alerts us (among others, mostly women researchers, I can’t help but notice), lies at the heart of what Gebru calls the ‘TESCREAL Bundle’. The acronym, TESCREAL, which brings together the terms Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism, describes an interlocked and related group of ideologies that have one idea in common: techno-utopianism (with a generous helping of eugenics and racialized ideas of what ‘intelligence’ means mixed in to make everything old new again).
Risk of extinction. It sounds dramatic, doesn’t it? The sort of phrase you hear in a Marvel movie: Robert Downey Jr., as Iron Man, stands in front of a green screen, turns to one of his costumed comrades as some yet-to-be-added animated threat approaches, and screams about the risk of extinction if the animated thing isn’t stopped. There are, of course, actual existential risks; asteroids come to mind, and although climate change is certainly a risk to the lives of billions and the mode of life of the industrial capitalist age upon which we depend, it might not be ‘existential’ strictly speaking (though, that’s most likely a distinction without a difference as the seas consume the most celebrated cities and uncelebrated communities).
The idea that what is called ‘AI’ – which, when all the tech industry’s glittering makeup is removed, is revealed plainly to be software, running on computers, warehoused in data centers – poses a risk of extinction requires a special kind of gullibility, self interest, and, as Dr. Gebru reminds us, supremacist delusions about human intelligence to promote, let alone believe.
***
In the picture posted to X, Ursula von der Leyen, President of the European Commission, is standing at a podium before the assembled group of commissioners, presumably in the EU Commission building (the Berlaymont) in Brussels, a city I’ve visited quite a few times, regretfully. The building itself and the main hall for commissioners, are large and imposing, conveying, in glass, steel and stone, seriousness. Of course, between the idea and the act there usually falls a long shadow. How serious can this group be, I wondered, about a ‘risk of extinction’ from ‘AI’?
***
To find out, I decided to look at the document referenced and trumpeted in the post, the EU Artificial Intelligence Act. There’s a link to the act in the reference section below. My question was simple: is there a reference to ‘risk of extinction’ in this document? The word, ‘risk’, appears 71 times. It’s used in passages such as the following, from the overview:
The Commission proposes to establish a technology-neutral definition of AI systems in EU law and to lay down a classification for AI systems with different requirements and obligations tailored on a ‘risk-based approach’. Some AI systems presenting ‘unacceptable’ risks would be prohibited. A wide range of ‘high-risk’ AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market.
The emphasis is on a ‘risk based approach’, which seems sensible at first look, but there are inevitable problems and objections. Some of the objections come from the corporate sector, claiming, with mind-deadening predictability, that any and all regulation hinders ‘innovation’, a word that is invoked like an incantation, only not as intriguing or lyrical. More interesting critiques come from those who see risk (though, notably, not existential) and who agree something must be done but who view the EU’s act as not going far enough or going in the wrong direction.
Here is the listing of high-risk activities and areas for algorithmic systems in the EU Artificial Intelligence Act:
Biometric identification and categorisation of natural persons
Management and operation of critical infrastructure
Education and vocational training
Employment, worker management and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum and border control management
Administration of justice and democratic processes
Missing from this list is the risk of extinction, which, putting aside the Act’s flaws, makes sense. Including it would have been as out of place in a consideration of real-world harms as adding a concern about time traveling bandits. And so, now we must wonder: why include the phrase “risk of extinction” in a social media post?
***
On March 22, 2023, the modestly named Future of Life Institute – an organization initially funded by the bathroom fixture toting Lord of X himself, Musk (a 10 million USD investment in 2015), whose board is as alabaster as the snows of Antarctica once were, kept afloat by donations from other tech besotted wealthies – published an open letter titled, ‘Pause Giant AI Experiments: An Open Letter.’ This letter was joined by similarly themed statements from OpenAI (‘Planning for AGI and beyond’) and Microsoft (‘Sparks of Artificial General Intelligence: Early experiments with GPT-4’).
Each of these documents has received strong criticism from people such as yours truly, and others with more notoriety, and for good reason: they promote the idea that the imprecisely defined Artificial General Intelligence (AGI) is not only possible, but inevitable. Critiques of this idea – whether based on a detailed analysis of mathematics (‘Reclaiming AI as a theoretical tool for cognitive science’) or of computational limits (‘The Computational Limits of Deep Learning’) – have the benefit of being firmly grounded in material reality.
But as Freud might have warned us, we live in a society shaped not only by our understanding of the world as it is but also, in no small part, by dreams and fantasies. White supremacists harbor the self-congratulating fantasy that any random white person (well, man) is an astounding genius when compared to those not in that club. This notion endures despite innumerable and daily examples to the contrary because it serves the interests of certain individuals and groups to persist in delusion and impose this delusion on the world. The ‘risk of extinction’ fantasy has caught on because it builds on decades of fiction, like the idea of an American Dream, and adds spice to an otherwise deadly serious and grounded business: controlling the tech industry’s scope of action. Journalists who ignore the actual harms of algorithmic systems rush to write stories about a ‘risk of extinction’, which is far sexier than talking about the software now called ‘AI’ that is used to deny insurance benefits or determine criminal activity.
The European Union’s Artificial Intelligence Act does not explicitly reference ‘existential risk’ but the social media post using this idea is noteworthy. It shows that lurking in the background, the ideas promoted by the tech industry – by OpenAI and its paymaster Microsoft and innumerable camp followers – have seeped into the thinking of decision makers at the highest levels.
And how could it be otherwise? How flattering to think you’re rescuing the world from Skynet, the fictional, nuclear missile tossing system featured in the ‘Terminator’ franchise, rather than trying, at long last, to actually regulate Google.