The F-35 Maneuver

Bad ideas, like death, are inevitable and just as inescapable.

The US-based tech industry is a Pandora’s box of bad ideas, unleashed upon an unwilling and unwitting populace, and indeed world, with reckless abandon, scorching lives and the Earth itself. Never mind, they say, we’re building the future.

The latest bad idea to spread dark wings and take flight is that building a super massive data center for ‘AI’ called ‘Stargate’ – a megamachine that will solve all our problems like a resource- and real-estate-devouring Wizard of Oz – is not only good, but essential.

In an Associated Press article titled ‘Trump highlights partnership investing $500 billion in AI’, published Jan 23, 2025, the project is described:

WASHINGTON (AP) — President Donald Trump on Tuesday talked up a joint venture investing up to $500 billion for infrastructure tied to artificial intelligence by a new partnership formed by OpenAI, Oracle and SoftBank.

The new entity, Stargate, will start building out data centers and the electricity generation needed for the further development of the fast-evolving AI in Texas, according to the White House. The initial investment is expected to be $100 billion and could reach five times that sum.

“It’s big money and high quality people,” said Trump, adding that it’s “a resounding declaration of confidence in America’s potential” under his new administration.

[…]

It seems like only yesterday – or, more precisely, several months ago – that the same ‘Stargate’, with a still astronomically large but comparatively smaller budget, was described in a Tom’s Hardware article of March 24, 2024, titled ‘OpenAI and Microsoft reportedly planning $100 billion datacenter project for an AI supercomputer’ –

Microsoft and OpenAI are reportedly working on a massive datacenter to house an AI-focused supercomputer featuring millions of GPUs. The Information reports that the project could cost “in excess of $115 billion” and that the supercomputer, currently dubbed “Stargate” inside OpenAI, would be U.S.-based. 

The report says that Microsoft would foot the bill for the datacenter, which could be “100 times more costly” than some of the biggest operating centers today. Stargate would be the largest in a string of datacenter projects the two companies hope to build in the next six years, and executives hope to have it running by 2028.

[…]

Bad ideas are inevitable but also, apparently, subject to cost overruns.

There are many ways to think and talk about this project, which is certain to fail (and there is news of far less costly methods, making the Olympian spending even more obviously suspicious). For me, the clearest way to understand the Stargate project and in fact, the entire ‘AI’ land grab, is as an attempt to create guaranteed profit for those tech firms who’re at the commanding heights – Microsoft, OpenAI, Amazon, Oracle and co-conspirators. Capital will flow into these firms whether the system works as advertised or not – i.e. they are paid for both function (such as it is) and malfunction.

This isn’t a new technique. The US defense industry has a long history of stuffing its coffers with cash for delivering weapons systems that work… sometimes. The most infamous example is Lockheed’s F-35 fighter, a project that provides the company with funding for both delivery and correction, as described in the US Government Accountability Office report, ‘F-35 Joint Strike Fighter: More Actions Needed to Explain Cost Growth and Support Engine Modernization Decision’, May 2023 –

The Department of Defense’s most expensive weapon system—the F-35 aircraft—is now more than a decade behind schedule and $183 billion over original cost estimates.

[…]

That’s a decade and $183 billion of sweet, steady profit – the sort of profit the tech industry has long sought. First there was ‘enterprise software’, then there was the subscription-based cloud, both efforts to create ‘growth’ and dependable cash infusions. Now, with Stargate, the industry may have, at last, found its F-35. Unlike the troubled fighter plane, there won’t be any Tom Cruise films featuring the data center. Then again, perhaps there will be. Netflix, like the rest of the industry, is out of ideas.

State of Exception – Part Two: Assume Breach

In part one of this series, I proposed that Trump’s second term – which, as the rush of executive orders shows, has, unlike his first, a coherent agenda (centered on the Heritage Foundation’s Project 2025 plan) – would be a time of increased aggression against ostracized individuals and groups: a state of exception in which the pretence of bourgeois democracy melts away.

Because of this, we should change our relationship with the technologies we’re compelled to use; a naive belief in the good will or benign neglect of tech corporations and the state should be abandoned. The correct perspective is to assume breach.

In an April 2023 blog post for the network equipment company F5, systems security expert Ken Arora describes the concept of assume breach:

Plumbers, electricians, and other professionals who operate in the physical world have long internalized the true essence of “assume breach.” Because they are tasked with creating solutions that must be robust in tangible environments, they implicitly accept and incorporate the simple fact that failures occur within the scope of their work. They also understand that failures are not an indictment of their skills, nor a reason to forgo their services. Rather, it is only the most skilled who, understanding that their creations will eventually fail, incorporate learnings from past failures and are able to anticipate likely future failures.

[…]

For the purposes of this essay, the term ‘failure’ is reinterpreted to mean the intrusion of hostile entities into the systems and devices you use. By adopting a technology praxis based on assumed breach, you can plan for intrusion by acknowledging the possibility that your systems have been, or will be, penetrated.

Primarily, there are five areas of concern:

  • Phones
  • Social Media
  • Personal computers
  • Workplace platforms, such as Microsoft 365 and Google’s G Suite
  • ‘Cloud’ platforms, such as Microsoft Azure, Amazon AWS and Google Cloud Platform

It’s reasonable to think that following security best practices for each technology (links in the references section) offers a degree of protection from intrusion. This may be true, to some extent, when contending with non-state hostiles such as black hat hackers. State entities, however, have direct access to the owners of these systems, giving them the ability to circumvent standard security measures via the exercise of political power.

Phones (and tablets)

Phones are surveillance devices. Communications that require security – and which, if intercepted, could lead to state harassment or worse – should never be conducted via phone. This applies to iPhones, Android phones and even niche devices such as Linux phones. Phones are a threat in two ways:

  1.  Location tracking – phones connect to cellular networks using unique identifiers that enable location and geospatial tracking. This data is used to create maps of activity and associations (a technique the IDF has used in its genocidal wars).
  2.  Data seizure – phones store data that, if seized by hostiles, can be used against you and your organization: social media account data, notes, contacts and other information.

Phone use must be avoided for secure communications. If you must use a phone for your activist work, consider adopting a hardened operating system such as GrapheneOS – a security-focused Android derivative – which may make a seized device more resistant to cracking but does not protect against communication interception. As an alternative, consider old school methods, such as paper messages conveyed via trusted courier within your group. This sounds extreme and may turn out to be unnecessary, depending on how conditions mutate. It is best, however, to be prepared should it become necessary.

Social Media

Social media platforms such as Twitter/X, Bluesky, Mastodon, Facebook/Meta and even less public systems such as Discord, which enables the creation of privately managed servers, should not be used for secure communication – not only because of posts, but because direct messages are vulnerable to surveillance and can be used to obtain pattern and association data. A comparatively secure (though not foolproof) alternative is the Signal messaging platform. (Scratch that: Yasha Levine provides a full explanation of Signal as a government op here.)

Personal Computers

Like phones, personal computers – laptops and desktops – should not be considered secure. There are several sub-categories of vulnerability:

  • Vulnerabilities caused by security flaws in the operating system (for example, issues with Microsoft Windows or Apple MacOS)
  • Vulnerabilities designed into operating systems by the companies developing, deploying and selling them for profit (Windows Copilot, for example, is a known threat vector)
  • Vulnerabilities exploited by state actors such as intelligence and law enforcement agencies (deliberate backdoors)
  • Data exposure if a computer is seized

Operating systems are the main threat vector – that is, the opening to your data – when using a computer. In part one of this series, I suggested abandoning Microsoft Windows, Google Chrome OS and Apple’s macOS for computer usage that requires security and using Debian Linux instead; this is covered in detail there.

Workplace Platforms such as Google G Suite and Microsoft 365, and other ‘cloud’ platforms such as Microsoft Azure and Amazon Web Services

Although convenient and, in the case of Software as a Service offerings such as Google G Suite and Microsoft 365, less technically demanding to manage than on-premises hosting, ‘cloud’ platforms should not be considered trustworthy for secure data storage or communications.

This is true even when platform-specific security best practices are followed, because such measures will be circumvented by the corporations that own these platforms when it suits their purposes – such as cooperating with state mandates to release customer data.

The challenge for organizations who’re concerned about state-sanctioned breach is finding the equipment, technical talent, will and organizational skill (project management) to move away from these ‘cloud’ systems to on-premises platforms. This is not trivial and has so many complexities that it deserves a separate essay, which will be part three of this series.

The primary challenges are:

  • Inventorying the applications you use
  • Assessing where the organization’s data is stored and the types of data
  • Assessing the types of communications and the levels of vulnerability (for example, how is email used? What about collaboration services such as SharePoint?)
  • Crafting an achievable strategy for moving applications, services and data off the vulnerable cloud service
  • Encrypting and deleting data
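Even a small script can turn the inventory and assessment steps above from a vague sense of ‘our data is everywhere’ into an artifact a group can review and prioritise. The sketch below is entirely hypothetical: the applications, locations and sensitivity labels are invented examples, not recommendations.

```python
import csv
import io

# A hypothetical starting inventory; every entry below is illustrative,
# not a recommendation of any particular service.
inventory = [
    # (application, data location, data type, sensitivity)
    ("Email", "Microsoft 365 / Exchange Online", "correspondence", "high"),
    ("Document store", "SharePoint", "working files", "high"),
    ("Membership list", "Google Sheets", "personal data", "high"),
    ("Public website", "self-hosted VPS", "public content", "low"),
]

def write_inventory(rows, fileobj):
    """Write the inventory as CSV so it can be reviewed and prioritised."""
    writer = csv.writer(fileobj)
    writer.writerow(["application", "location", "data_type", "sensitivity"])
    writer.writerows(rows)

def migrate_first(rows):
    """Names of the entries to move off the cloud first: anything marked high sensitivity."""
    return [r[0] for r in rows if r[3] == "high"]

buf = io.StringIO()
write_inventory(inventory, buf)
print(migrate_first(inventory))  # the three high-sensitivity applications
```

The point is less the code than the discipline: once every application, its data location and its sensitivity are written down, an achievable migration order follows almost mechanically.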

In part three of this series, I will describe moving your organization’s data and applications off cloud platforms: what are the challenges? What are the methods? What skills are required? I’ll talk about this and more.

References

Assume Breach

Project 2025

Security Best Practices – Google Workspace

Microsoft 365 Security Best Practices

Questions and Answers: Israeli Military’s Use of Digital Tools in Gaza

UK police raid home, seize devices of EI’s Asa Winstanley

Cellphone surveillance

GrapheneOS

Meta-provided Facebook chats led a woman to plead guilty to abortion-related charges

State of Exception: Part One

In his 2005 book, State of Exception, Italian philosopher Giorgio Agamben (who, I feel moved to say, was an idiot on the topic of Covid-19, declaring the virus to be nonexistent) wrote:

The state of exception is the political point at which the juridical stops, and a sovereign unaccountability begins; it is where the dam of individual liberties breaks and a society is flooded with the sovereign power of the state.

The (apparently, merely delayed by four years) re-election of Donald Trump is certain to usher in a sustained period of domestic emergency in the United States, a state of exception when even the pretense of bourgeois democracy is dropped and state power is exercised with few restraints.

What does this mean for information technology usage by activist groups or really, anyone?

In February 2024, I published the essay Information Technology for Activists – What is To Be Done?, in which I provided an overview of the current information technology landscape with the needs and requirements of activist groups in mind. When conditions change, our understanding should keep pace. As we enter the state of exception, the information technology practices of groups who can expect harassment, or worse, from the US state should be radically updated for a more aggressively defensive posture.

Abandon Cloud

The computer and software technology industry is the command and control apparatus of corporate and state entities. As such, its products and services should be considered enemy territory. Under the capitalist system, we are compelled to operate on this territory to live. This harsh necessity should not be confused with acceptance and is certainly not a reason to celebrate, like dupes, the system that is killing the world. 

The use of operating systems and platforms from the tech industry’s primary powers – Microsoft, Amazon, Google, Meta, X/Twitter, Apple, Oracle – and lesser known entities, creates a threat vector through which identities, data and activities can be tracked and recorded. Moving off these platforms will be very difficult but is essential. What are the alternatives? 

There are three main areas of concern:

  • Services and platforms such as social media, cloud and related services
  • Personal computers (for example, laptops)
  • Phones

In this essay, cloud and computer usage are the focus.

By ‘cloud’, I’m referring to the platforms owned by Microsoft (Azure), Amazon (Amazon Web Services, or AWS) and Google (Google Cloud Platform, or GCP), and to services such as Microsoft 365 and Google’s G Suite. These services are not secure for the purposes of activist groups and individuals who can expect heightened surveillance and harassment from the state. There are technical reasons (Azure, for example, is known for various vulnerabilities), but these are a distant, secondary concern next to the fact that, regardless of each platform’s infrastructural qualities or deficits, the corporations owning them are elements of the state apparatus.

Your data and communications are not secure. If you are using these platforms, your top priority should be abandoning them, moving your computational resources to what are called on-premises facilities, and using the Linux operating system rather than macOS or Microsoft Windows.

On Computers

In brief, operating systems are a specialized type of software that makes computers useful. When you open Microsoft Excel on your computer, it’s the Microsoft Windows operating system that enables the Excel program to utilize computer hardware, such as memory and storage. You can learn more about operating systems by reading this Wikipedia article. This relationship – between software and computing machinery – applies to all the systems you use: whether it’s Windows, Mac or others.

Microsoft Windows (particularly the newest versions, which include the insecure-by-design ‘Copilot+ PC’ features) and Apple’s macOS should be abandoned. Why? The tech industry, as outlined in Yasha Levine’s book Surveillance Valley, works hand in glove with the surveillance state (and has done so since the industry’s infancy). If you or your organization are using computers for work that challenges the US state – for example, pro-Palestinian activism or, indeed, work in support of any marginalized community – there is a possibility vital information will be compromised, either through seizure or through remote access that takes advantage of backdoors and vulnerabilities.

This was always a possibility (and for some, a harsh experience) but as the state’s apparatus is directed towards coordinated, targeted suppression, vague possibility turns into high probability (see, for example, UK police raid home, seize devices of EI’s Asa Winstanley).

The Linux operating system should be used instead – specifically, the Debian distribution, well known for its secure design. Secure by design does not mean invulnerable to attack; best practices, such as those described in the Securing Debian Manual (version 3.19) on the Debian website, must be followed to make a machine a harder target.
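As one small illustration of what ‘harder target’ thinking looks like in practice – a sketch only, and no substitute for the Securing Debian Manual – the following checks the body of an OpenSSH server configuration for two settings hardening guides commonly flag. The checks and the assumed defaults are illustrative:

```python
def audit_sshd_config(text):
    """Flag two commonly recommended hardening settings in an sshd_config body.

    Illustrative only: real guidance (e.g. the Securing Debian Manual)
    covers far more than these two checks.
    """
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line:
            key, _, value = line.partition(" ")
            settings[key] = value.strip()

    warnings = []
    # Password logins invite brute forcing; SSH keys are the usual alternative.
    if settings.get("PasswordAuthentication", "yes") != "no":
        warnings.append("set 'PasswordAuthentication no' and use SSH keys")
    # Direct root logins widen the blast radius of a compromised credential.
    if settings.get("PermitRootLogin", "prohibit-password") != "no":
        warnings.append("set 'PermitRootLogin no'")
    return warnings

hardened = "PasswordAuthentication no\nPermitRootLogin no\n"
print(audit_sshd_config(hardened))  # []
```

The design choice worth noting is the assume-breach framing itself: the script does not ask whether the machine is secure, only which known weak points remain open.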

Switching and Migration

Switching from Microsoft Windows to Debian Linux can be done in stages, as described in the document ‘From Windows to Debian’. Replacing macOS with Debian on MacBook Pro computers is described in the document ‘MacBook Pro’ on the Debian website. More recent Mac hardware (Apple M1 silicon) is being addressed via Debian’s Project Banana.
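Whichever migration path you take, a basic assume-breach habit is verifying a downloaded installer image against the project’s published checksums before using it. A minimal sketch follows; the file path and expected digest are placeholders, and for Debian the reference values come from the signed SHA256SUMS file on the download site:

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute a file's SHA-256 digest without reading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_image(path, expected_hex):
    """Compare a downloaded installer image against a published checksum
    (for Debian, an entry from the SHA256SUMS file on the download site)."""
    return sha256_of_file(path) == expected_hex.strip().lower()
```

A matching checksum only proves the download wasn’t corrupted or swapped in transit; the signature on the checksum file itself should also be verified (with gpg), since anyone able to tamper with the image may be able to tamper with the checksum list too.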

On software

If you’re using Microsoft Windows, it’s likely you’re also using the MS Office suite. You may also be using Microsoft’s cloud ‘productivity’ platform, Microsoft 365, or perhaps Google’s Workspace platform instead of, or in addition to, it. In the section on ‘Services and Platforms’, I discuss the problems of these products from a security perspective. For now, let’s review replacements for the commercial ‘productivity’ suites used to create documents, spreadsheets and other types of work files.


In the second installment of this essay series, I will provide greater detail regarding each of the topics discussed, along with guidance about the use of phones, which are spy devices, and social media, which is insecure by design.

The Interpretation of Tech Dreams – On the EU Commission Post

On September 14, 2023, while touring Twitter the way you might survey the ruins of Pompeii, I came across a series of posts responding to this statement from the EU Commission account:

Mitigating the risk of extinction from AI should be a global priority…

What attracted critical attention was the use of the phrase ‘risk of extinction’, a fear of which, as Dr. Timnit Gebru (among others, mostly women researchers, I can’t help but notice) alerts us, lies at the heart of what Gebru calls the ‘TESCREAL Bundle’. The acronym TESCREAL, which brings together the terms Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism, describes an interlocked and related group of ideologies that have one idea in common: techno-utopianism (with a generous helping of eugenics and racialized ideas of what ‘intelligence’ means mixed in to make everything old new again).

Risk of extinction. It sounds dramatic, doesn’t it? The sort of phrase you hear in a Marvel movie: Robert Downey Jr., as Iron Man, stands in front of a green screen and, as some yet-to-be-added animated threat approaches, turns to one of his costumed comrades and screams about the risk of extinction if the animated thing isn’t stopped. There are, of course, actual existential risks; asteroids come to mind. And although climate change is certainly a risk to the lives of billions and to the mode of life of the industrial capitalist age upon which we depend, it might not be ‘existential’, strictly speaking (though that’s most likely a distinction without a difference as the seas consume the most celebrated cities and uncelebrated communities).

The idea that what is called ‘AI’ – which, when all the tech industry’s glittering makeup is removed, is revealed plainly to be software, running on computers, warehoused in data centers – poses a risk of extinction requires a special kind of gullibility, self-interest and, as Dr. Gebru reminds us, supremacist delusions about human intelligence to promote, let alone believe.

***

In the picture posted to X, Ursula von der Leyen, President of the European Commission, is standing at a podium before the assembled group of commissioners, presumably in the EU Commission building (the Berlaymont) in Brussels, a city I’ve visited quite a few times, regretfully. The building itself and the main hall for commissioners, are large and imposing, conveying, in glass, steel and stone, seriousness. Of course, between the idea and the act there usually falls a long shadow. How serious can this group be, I wondered, about a ‘risk of extinction’ from ‘AI’?

***

To find out, I decided to look at the document referenced and trumpeted in the post, the EU Artificial Intelligence Act. There’s a link to the act in the reference section below. My question was simple: is there a reference to ‘risk of extinction’ in this document? The word, ‘risk’, appears 71 times. It’s used in passages such as the following, from the overview:

The Commission proposes to establish a technology-neutral definition of AI systems in EU law and to lay down a classification for AI systems with different requirements and obligations tailored on a ‘risk-based approach’. Some AI systems presenting ‘unacceptable’ risks would be prohibited. A wide range of ‘high-risk’ AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market.

The emphasis is on a ‘risk-based approach’, which seems sensible at first look, but there are inevitable problems and objections. Some of the objections come from the corporate sector, claiming, with mind-deadening predictability, that any and all regulation hinders ‘innovation’ – a word invoked like an incantation, only not as intriguing or lyrical. More interesting critiques come from those who see risk (though, notably, not existential risk), who agree something must be done, but who view the EU’s act as not going far enough or as going in the wrong direction.

Here is the listing of high-risk activities and areas for algorithmic systems in the EU Artificial Intelligence Act:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

Missing from this list is the risk of extinction – which, putting aside the Act’s flaws, makes sense. Including it would have been as out of place in a consideration of real-world harms as adding a concern about time-traveling bandits. And so we must wonder: why include the phrase ‘risk of extinction’ in a social media post?

***

On March 22, 2023, the modestly named Future of Life Institute – an organization initially funded by the bathroom-fixture-toting Lord of X himself, Musk (a 10 million USD investment in 2015), whose board is as alabaster as the snows of Antarctica once were, and kept afloat by donations from other tech-besotted wealthies – published an open letter titled ‘Pause Giant AI Experiments: An Open Letter’. This letter was joined by similarly themed statements from OpenAI (‘Planning for AGI and beyond’) and Microsoft (‘Sparks of Artificial General Intelligence: Early experiments with GPT-4’).

Each of these documents has received strong criticism from people such as yours truly, and from others with more notoriety, and for good reason: they promote the idea that the imprecisely defined Artificial General Intelligence (AGI) is not only possible but inevitable. Critiques of this idea – whether based on detailed analysis of the mathematics (‘Reclaiming AI as a theoretical tool for cognitive science’) or of computational limits (‘The Computational Limits of Deep Learning’) – have the benefit of being firmly grounded in material reality.

But as Freud might have warned us, we live in a society shaped not only by our understanding of the world as it is but also, in no small part, by dreams and fantasies. White supremacists harbor the self-congratulating fantasy that any random white person (well, man) is an astounding genius when compared to those not in that club. This notion endures, despite innumerable daily examples to the contrary, because it serves the interests of certain individuals and groups to persist in delusion and impose this delusion on the world. The ‘risk of extinction’ fantasy has caught on because it builds on decades of fiction – like the idea of an American Dream – and adds spice to an otherwise deadly serious and grounded business: controlling the tech industry’s scope of action. Journalists who ignore the actual harms of algorithmic systems rush to write stories about a ‘risk of extinction’, which is far sexier than talking about the software now called ‘AI’ that is used to deny insurance benefits or determine criminal activity.

The European Union’s Artificial Intelligence Act does not explicitly reference ‘existential risk’, but the social media post using this idea is noteworthy. It shows that, lurking in the background, the ideas promoted by the tech industry – by OpenAI and its paymaster Microsoft and innumerable camp followers – have seeped into the thinking of decision makers at the highest levels.

And how could it be otherwise? How flattering to think you’re rescuing the world from Skynet, the fictional, nuclear missile tossing system featured in the ‘Terminator’ franchise, rather than trying, at long last, to actually regulate Google.

***

References

European Union

A European approach to artificial intelligence

EU Artificial Intelligence Act

EU Post on X

Critique

Timnit Gebru on Tescreal (YouTube)

The Acronym Behind Our Wildest AI Dreams and Nightmares (on TESCREAL)

The EU still needs to get its AI Act together

Reclaiming AI as a theoretical tool for cognitive science

The Computational Limits of Deep Learning

Boosterism

Pause Giant AI Experiments: An Open Letter

Planning for AGI and beyond

Sparks of Artificial General Intelligence: Early experiments with GPT-4