Against Snobbery (or, on writing)

Years ago, a man I’ve known for decades via electronic networks started a blog.

He apologized because, to the class of people who assume a byline in the New York Times (described by Gore Vidal as always being “at the very heart of malice”) or a PhD confers a kind of omniscient expertise, starting a blog was akin to driving a Volkswagen (back when they were much cheaper) when a Mercedes was the preferred class marker.

His blog was, indeed is, good. He ably writes about what he knows – how capital markets function – a topic he understands deeply from the inside. I suppose we could wait for a book by an academic, or a series on capital markets by a Columbia Journalism School-trained NYT staffer – such work is part of the fabric of what people who choose to do violence to the English language call ‘knowledge making’ – but surely there is a place for information from the trenches.

My friend’s unnecessary apology was inspired by snobbery. You know what I mean. It’s snobbery that causes people to dismiss Wikipedia, even as an introductory source. Is the Wikipedia entry on magnetohydrodynamics bad? Most of us don’t know, but we’ve been told it’s in a bad neighborhood, far from the tree-lined campuses where police beat pro-Palestinian students or from Manhattan newsrooms (or what’s left of either). To participate in the game of snobbery, a game imposed on most of us by a few nervous elites and their minions, we must turn up our noses as if detecting the scent of a pile of dog poop carelessly left on a sidewalk.

This comes to mind because of the way Microsoft and Google, in their sales propaganda, have promoted large language models as the solution to the problem of writing. I wrote ‘problem’ because, for many of us, told that only a small group of people possess the ability to write, putting ideas to paper or screen is felt to be a problem.

Consider the way Microsoft describes its product, Copilot for Word:

Copilot in Word ushers in a new era of writing, leveraging the power of AI. It can help you go from a blank page to a complete draft in a fraction of the time it would take to compose text on your own. While it may write exactly what you need, sometimes it may be “usefully wrong” thus giving you some helpful inspiration.

The ‘problem’ is to be solved by a machine that, as it bestows upon us a new era of writing, consumes, by some estimates, terawatt-hours of electricity. Writing, no matter how laborious, is a problem best solved by thought. Indeed, one of the critical aspects of writing – whether it’s fiction, non-fiction or even a well-considered social media post – is the application of thought to the process of organizing and recording your ideas and points of view.

Dependence on word assemblers such as ChatGPT, and even our new silicon frenemy DeepSeek, regardless of how cleverly architected, interrupts this process, but so does snobbery. The snob industrial complex – which promotes the idea that good writing requires a university course or attachment to a media corporation – prepared the soil for the idea of replacing writing with machinery. Of course millions, harassed, short on time but also purposely discouraged from writing, apologize for the blogs they start to share their knowledge. Millions who are made to feel inferior for looking up a topic on Wikipedia are, unsurprisingly, receptive to tech industry propaganda: never mind thinking in order to write, we’ll do it for you.

Writing is a craft: putting one sentence after another to build a tale – sometimes true, or as near as one can come, sometimes fanciful. You hone your craft by reading and writing, and by assembling for yourself what a friend of mine calls a writer’s table. When I write about the tech industry, Raymond Chandler and Karl Marx sit at my writer’s table alongside others – living and dead – from whom I learn to sharpen my own, yes, voice. There are decades of experience – being in the data centers – and a love of writing that go into the work.

There’s nothing stopping you from doing the same. I want to read from people who serve food in restaurants, from pilots, from nuclear plant workers, from people who have been cast out of the world of work. I want to hear from everyone, not just the famous or celebrated writing about everyone.

Having reached this point in the piece it’s typical to try to create something pithy that sums up what came before. In lieu of that, I’ll say, please write if you want to. Do not surrender your creativity to snobbery or machinery. If you need encouragement, I’m here to help.

We need as many voices reporting from the various fronts as we can get. 

The F-35 Maneuver

Bad ideas, like death, are inevitable and just as inescapable.

The US-based tech industry is a Pandora’s box of bad ideas, unleashed upon an unwilling and unwitting populace, and indeed world, with reckless abandon, scorching lives and the Earth itself. Never mind, they say, we’re building the future.

The latest bad idea to spread dark wings and take flight is that building a supermassive data center for ‘AI’ called ‘Stargate’ – a megamachine that will solve all our problems like a resource- and real-estate-devouring Wizard of Oz – is not only good, but essential.

In an Associated Press article titled ‘Trump highlights partnership investing $500 billion in AI’, published January 23, 2025, the project is described:

WASHINGTON (AP) — President Donald Trump on Tuesday talked up a joint venture investing up to $500 billion for infrastructure tied to artificial intelligence by a new partnership formed by OpenAI, Oracle and SoftBank.

The new entity, Stargate, will start building out data centers and the electricity generation needed for the further development of the fast-evolving AI in Texas, according to the White House. The initial investment is expected to be $100 billion and could reach five times that sum.

“It’s big money and high quality people,” said Trump, adding that it’s “a resounding declaration of confidence in America’s potential” under his new administration.

[…]

It seems like only yesterday – or, more precisely, several months ago – that the same ‘Stargate’, with a still astronomically large but comparatively smaller budget, was described in a Tom’s Hardware article of March 24, 2024, titled ‘OpenAI and Microsoft reportedly planning $100 billion datacenter project for an AI supercomputer’:

Microsoft and OpenAI are reportedly working on a massive datacenter to house an AI-focused supercomputer featuring millions of GPUs. The Information reports that the project could cost “in excess of $115 billion” and that the supercomputer, currently dubbed “Stargate” inside OpenAI, would be U.S.-based. 

The report says that Microsoft would foot the bill for the datacenter, which could be “100 times more costly” than some of the biggest operating centers today. Stargate would be the largest in a string of datacenter projects the two companies hope to build in the next six years, and executives hope to have it running by 2028.

[…]

Bad ideas are inevitable but also, apparently, subject to cost overruns.

There are many ways to think and talk about this project, which is certain to fail (and there is news of far less costly methods, making the Olympian spending even more obviously suspicious). For me, the clearest way to understand the Stargate project – and, in fact, the entire ‘AI’ land grab – is as an attempt to create guaranteed profit for the tech firms at the commanding heights: Microsoft, OpenAI, Amazon, Oracle and co-conspirators. Capital will flow into these firms whether the system works as advertised or not – i.e., they are paid for both function (such as it is) and malfunction.

This isn’t a new technique. The US defense industry has a long history of stuffing its coffers with cash for delivering weapons systems that work… sometimes. The most infamous example is Lockheed’s F-35 fighter, a project that provides the company with funding for both delivery and correction, as described in the US Government Accountability Office report ‘F-35 Joint Strike Fighter: More Actions Needed to Explain Cost Growth and Support Engine Modernization Decision’ of May 2023:

The Department of Defense’s most expensive weapon system—the F-35 aircraft—is now more than a decade behind schedule and $183 billion over original cost estimates.

[…]

That’s a decade and $183 billion of sweet, steady profit, the sort of profit the tech industry has long sought. First there was ‘enterprise software’; then there was the subscription-based cloud – both efforts to create ‘growth’ and dependable cash infusions. Now, with Stargate, the industry may have, at last, found its F-35. Unlike the troubled fighter plane, there won’t be any Tom Cruise films featuring the data center. Then again, perhaps there will be. Netflix, like the rest of the industry, is out of ideas.

State of Exception – Part Two: Assume Breach

In part one of this series, I proposed that Trump’s second term – which, as we’re seeing with the rush of executive orders, has, unlike his first, a coherent agenda (centered on the Heritage Foundation’s Project 2025 plan) – would be a time of increased aggression against ostracized individuals and groups, a state of exception in which the pretence of bourgeois democracy melts away.

Because of this, we should change our relationship with the technologies we’re compelled to use; a naive belief in the good will or benign neglect of tech corporations and the state should be abandoned. The correct perspective is to assume breach.

In an April 2023 blog post for the network equipment company F5, systems security expert Ken Arora described the concept of assume breach:

Plumbers, electricians, and other professionals who operate in the physical world have long internalized the true essence of “assume breach.” Because they are tasked with creating solutions that must be robust in tangible environments, they implicitly accept and incorporate the simple fact that failures occur within the scope of their work. They also understand that failures are not an indictment of their skills, nor a reason to forgo their services. Rather, it is only the most skilled who, understanding that their creations will eventually fail, incorporate learnings from past failures and are able to anticipate likely future failures.

[…]

For the purposes of this essay, the term ‘failure’ is reinterpreted to mean the intrusion of hostile entities into the systems and devices you use. By adopting a technology praxis based on assumed breach, you can plan for intrusion by acknowledging the possibility that your systems have been, or will be, penetrated.

Primarily, there are five areas of concern:

  • Phones
  • Social Media
  • Personal computers
  • Workplace platforms, such as Microsoft 365 and Google’s G-Suite
  • ‘Cloud’ platforms, such as Microsoft Azure, Amazon AWS and Google Cloud Platform

It’s reasonable to think that following security best practices for each technology (links in the references section) offers a degree of protection from intrusion. Although this may be true to some extent when contending with non-state hostiles, such as black hat hackers, state entities have direct access to the owners of these systems, giving them the ability to circumvent standard security measures via the exercise of political power.

Phones (and tablets)

Phones are surveillance devices. No communications that require security – and which, if intercepted, could lead to state harassment or worse – should be done via phones. This applies to iPhones, Android phones and even niche devices such as Linux phones. Phones are a threat in two ways:

  1.  Location tracking – phones connect to cellular networks and utilize unique identifiers that enable location and geospatial tracking. This data is used to create maps of activity and associations (a technique the IDF has used in its genocidal wars)
  2.  Data seizure – phones store data that, if seized by hostiles, can be used against you and your organization: social media account data, notes, contacts and other information

Phone use must be avoided for secure communications. If you must use a phone for your activist work, consider adopting a hardened operating system such as GrapheneOS (an Android derivative), which may be more resistant to cracking if the device is seized, but offers no protection against communication interception. As an alternative, consider old school methods, such as paper messages conveyed via trusted courier within your group. This sounds extreme and may turn out to be unnecessary, depending on how conditions mutate. It is best, however, to be prepared should it become necessary.

Social Media

Social media platforms such as Twitter/X, Bluesky, Mastodon, Facebook/Meta and even less public systems such as Discord, which enables the creation of privately managed servers, should not be used for secure communication – not only because of posts, but because direct messages are vulnerable to surveillance and can be used to obtain pattern and association data. A comparatively secure (though not foolproof) alternative is the Signal messaging platform. (Scratch that: Yasha Levine provides a full explanation of Signal as a government op here).

Personal Computers

Like phones, personal computers – laptops and desktops – should not be considered secure. There are several sub-categories of vulnerability:

  • Vulnerabilities caused by security flaws in the operating system (for example, issues with Microsoft Windows or Apple MacOS)
  • Vulnerabilities designed into operating systems by the companies developing, deploying and selling them for profit objectives (Windows Copilot, for example, is a known threat vector)
  • Vulnerabilities exploited by state actors such as intelligence and law enforcement agencies (deliberate backdoors)
  • Data exposure if a computer is seized

Operating systems are the main threat vector – that is, the opening to your data – when using a computer. In part one of this series, I suggested abandoning Microsoft Windows, Google Chrome OS and Apple’s macOS for computer usage that requires security, and using a secured Debian Linux installation instead. This is covered in detail in part one.
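Whatever operating system you settle on, one small habit that fits the assume-breach posture is verifying that installation media hasn’t been corrupted or tampered with before you use it. Here is a minimal sketch in Python – the file name and checksum in the usage comment are placeholders, not real Debian values – of checking a downloaded image against a published SHA-256 checksum:

```python
import hashlib

def sha256_of(path, chunk_size=65536):
    """Compute the SHA-256 digest of a file, reading in chunks so
    large ISO images don't have to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_image(path, expected_hex):
    """Compare a file's digest to the checksum published by the
    distributor. A mismatch means the download is corrupt or altered."""
    return sha256_of(path) == expected_hex.lower()

# Hypothetical usage -- substitute your actual download and the value
# from the distributor's (ideally GPG-signed) SHA256SUMS file:
# verify_image("debian-netinst.iso", "abc123...")
```

Note that a checksum alone only proves the file matches what the download server offered; checking the distributor’s signature on the checksum file is what ties it back to the publisher.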

Workplace Platforms such as Google G-Suite and Microsoft 365, and other ‘cloud’ platforms such as Microsoft Azure and Amazon Web Services

Although convenient, and, in the case of Software as a Service offerings such as Google G-Suite and Microsoft 365, less technically demanding to manage than on-premises hosting, ‘cloud’ platforms should not be considered trustworthy for secure data storage or communications.

This is true even when platform-specific security best practices are followed, because such measures will be circumvented by the corporations that own these platforms when it suits their purposes – such as cooperating with state mandates to release customer data.

The challenge for organizations who’re concerned about state-sanctioned breach is finding the equipment, technical talent, will and organizational skill (project management) to move away from these ‘cloud’ systems to on-premises platforms. This is not trivial and has so many complexities that it deserves a separate essay, which will be part three of this series.

The primary challenges are:

  • Inventorying the applications you use
  • Assessing where the organization’s data is stored and the types of data
  • Assessing the types of communications and the levels of vulnerability (for example, how is email used? What about collaboration services such as SharePoint?)
  • Crafting an achievable strategy for moving applications, services and data off the vulnerable cloud service
  • Encrypting and deleting data
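The inventory and assessment steps above can be made concrete even before any migration tooling is chosen. The sketch below (Python; the application names and sensitivity categories are hypothetical, invented for illustration) records what an organization runs and sorts the cloud-hosted items so the most sensitive move off third-party platforms first:

```python
from dataclasses import dataclass

# Illustrative sensitivity levels -- not a formal classification standard.
SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2}

@dataclass
class AppRecord:
    name: str              # e.g. "member-mailing-list" (hypothetical)
    hosted_on: str         # "cloud" or "on-prem"
    data_sensitivity: str  # key into SENSITIVITY

def migration_priority(inventory):
    """Return the cloud-hosted applications, most sensitive first --
    the candidates to move to on-premises platforms soonest."""
    cloud_apps = [a for a in inventory if a.hosted_on == "cloud"]
    return sorted(cloud_apps,
                  key=lambda a: SENSITIVITY[a.data_sensitivity],
                  reverse=True)

# A toy inventory for a small organization:
inventory = [
    AppRecord("public-website", "cloud", "public"),
    AppRecord("member-mailing-list", "cloud", "confidential"),
    AppRecord("meeting-notes", "on-prem", "internal"),
    AppRecord("shared-calendar", "cloud", "internal"),
]

for app in migration_priority(inventory):
    print(app.name)
```

A real inventory would carry far more fields (data location, communication channels, dependencies), but even a simple table like this forces the assessment questions listed above to be answered explicitly.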

In part three of this series, I will describe moving your organization’s data and applications off cloud platforms: What are the challenges? What are the methods? What skills are required? I’ll talk about this and more.

References

Assume Breach

Project 2025

Security Best Practices – Google Workspace

Microsoft 365 Security Best Practices

Questions and Answers: Israeli Military’s Use of Digital Tools in Gaza

UK police raid home, seize devices of EI’s Asa Winstanley

Cellphone surveillance

GrapheneOS

Meta-provided Facebook chats led a woman to plead guilty to abortion-related charges

All Roads Lead to Surveillance Valley (on Windows 11 Recall)

Microsoft’s recent announcement of a product named Recall for Copilot Plus PCs, which reportedly features built-in ‘AI’ hosted on a ‘Neural Processing Unit’, provides us with an opportunity to take a look at the political economy of the technology industry in the era of decline.

I say ‘decline’, because Recall, despite the hosannas we’re hearing from the tech press – Silicon Valley’s Pravda – does not represent an advance but a rearguard move to accomplish what I see as two goals: 

  1. Increase and guarantee Microsoft’s ‘AI’-related revenue stream by using its dominance of the PC operating system market (both consumer and corporate) to force a failing product on customers (Tesla’s so-called Full Self-Driving software provides another example)
  2. Increase ‘AI’-related revenue by marketing Recall as a surveillance tool to governments and corporations

On point one: Despite a massive investment in OpenAI – including hosting and operating the Azure data centers for the ChatGPT suite of resource-destroying text calculators, and embedding the large language model in the flagship products Azure and Microsoft 365 – it’s not clear Microsoft (or any company) has seen a return on its ‘AI’ investment. Quite the contrary. Recall creates a compelled revenue stream as corporations refresh their fleets of laptops. Microsoft has tried to recoup costs via high prices for products such as GitHub Copilot, but this does not seem to be working as hoped; organizations can opt out.

On point two: In a Wall Street Journal interview, Microsoft CEO Satya Nadella described Recall’s capabilities as a “photographic memory”: recording every image and action on a PC and using an onboard neural processing unit to run this data (supposedly kept on the machine) through a model or models, enabling more sophisticated, ‘AI’-powered searching.

This seems like a lot of engineering effort to make it easier to find a photo you took at the beach a few years ago. Corporations don’t care about making anyone’s life easier so we must look for more adult, power-aware explanations for what we’re seeing here. 

Consider the precedent of Windows Vista, released in 2006. Vista, which employed a complex method for enforcing corporate digital rights, was created by Microsoft to attract the attention of the film and music industries as the preferred way to exert command and control over our use of ‘content’. With Vista, Microsoft’s goal was to become the gatekeeper for the digital distribution of entertainment and to derive profit from that position. This didn’t work out as planned, but the effort is a key indicator of intent. I interpret Recall as the ‘AI’ variant of the gatekeeper gambit.

We can safely ignore happy talk and promises of privacy to see what is right before us: a system for recording everything you do, marketed to businesses and governments as a means of mass surveillance. What was once the description of malware has, in the age of ‘AI’, become a product. In its quest for profits, Microsoft is creating a difficult-to-escape, hardware-based, globally distributed monitoring platform. We can be certain that its competitors, such as Apple, are making similar moves.

***

When thinking about the tech industry and its endless stream of product announcements, particularly about ‘AI’, a good rule of thumb is to ignore whatever glittering words are used and ask one question: how do they plan to make money? And not just ‘money’ in the abstract: profit. Looking at Recall for Windows 11, a follow-the-money approach leads directly to what Yasha Levine called ‘Surveillance Valley’.


References

Recall is Microsoft’s key to unlocking the future of PCs – The Verge

ChatGPT costs $700,000 per day to run, which is why Microsoft wants to make its own AI chips – Windows Central

OpenAI and Microsoft Plan $100 Billion ‘Stargate’ Data Center in the U.S. – Enterprise AI

A Cost Analysis of Windows Vista Content Protection – Peter Gutmann

Surveillance Valley – Yasha Levine

The Interpretation of Tech Dreams – On the EU Commission Post

On September 14, 2023, while touring Twitter the way you might survey the ruins of Pompeii, I came across a series of posts responding to this statement from the EU Commission account:

Mitigating the risk of extinction from AI should be a global priority…

What attracted critical attention was the use of the phrase ‘risk of extinction’, fear of which, as Dr. Timnit Gebru alerts us (among others, mostly women researchers, I can’t help but notice), lies at the heart of what Gebru calls the ‘TESCREAL bundle.’ The acronym TESCREAL, which brings together the terms Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism, describes an interlocked and related group of ideologies that have one idea in common: techno-utopianism (with a generous helping of eugenics and racialized ideas of what ‘intelligence’ means mixed in, to make everything old new again).

Risk of extinction. It sounds dramatic, doesn’t it? The sort of phrase you hear in a Marvel movie: Robert Downey Jr., as Iron Man, stands in front of a green screen, turns to one of his costumed comrades as some yet-to-be-added animated threat approaches, and screams about the risk of extinction if the animated thing isn’t stopped. There are, of course, actual existential risks; asteroids come to mind. And although climate change is certainly a risk to the lives of billions and to the mode of life of the industrial capitalist age upon which we depend, it might not be ‘existential’ strictly speaking (though that’s most likely a distinction without a difference as the seas consume the most celebrated cities and uncelebrated communities).

The idea that what is called ‘AI’ – which, when all the tech industry’s glittering makeup is removed, is revealed plainly to be software, running on computers, warehoused in data centers – poses a risk of extinction requires a special kind of gullibility, self-interest and, as Dr. Gebru reminds us, supremacist delusions about human intelligence to promote, let alone believe.

***

In the picture posted to X, Ursula von der Leyen, President of the European Commission, is standing at a podium before the assembled group of commissioners, presumably in the EU Commission building (the Berlaymont) in Brussels, a city I’ve visited quite a few times, regretfully. The building itself and the main hall for commissioners, are large and imposing, conveying, in glass, steel and stone, seriousness. Of course, between the idea and the act there usually falls a long shadow. How serious can this group be, I wondered, about a ‘risk of extinction’ from ‘AI’?

***

To find out, I decided to look at the document referenced and trumpeted in the post, the EU Artificial Intelligence Act. There’s a link to the act in the reference section below. My question was simple: is there a reference to ‘risk of extinction’ in this document? The word ‘risk’ appears 71 times. It’s used in passages such as the following, from the overview:

The Commission proposes to establish a technology-neutral definition of AI systems in EU law and to lay down a classification for AI systems with different requirements and obligations tailored on a ‘risk-based approach’. Some AI systems presenting ‘unacceptable’ risks would be prohibited. A wide range of ‘high-risk’ AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market.

The emphasis is on a ‘risk-based approach’, which seems sensible at first look, but there are inevitable problems and objections. Some of the objections come from the corporate sector, claiming, with mind-deadening predictability, that any and all regulation hinders ‘innovation’ – a word invoked like an incantation, only not as intriguing or lyrical. More interesting critiques come from those who see risk (though, notably, not existential risk) and who agree something must be done, but who view the EU’s act as not going far enough or as going in the wrong direction.

Here is the listing of high-risk activities and areas for algorithmic systems in the EU Artificial Intelligence Act:

  • Biometric identification and categorisation of natural persons
  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

Missing from this list is the risk of extinction, which, putting aside the Act’s flaws, makes sense. Including it would have been as out of place in a consideration of real-world harms as adding a concern about time-traveling bandits. And so, now we must wonder: why include the phrase ‘risk of extinction’ in a social media post?

***

On March 22, 2023, the modestly named Future of Life Institute – an organization initially funded by the bathroom-fixture-toting Lord of X himself, Musk (a 10 million USD investment in 2015), whose board is as alabaster as the snows of Antarctica once were, kept afloat by donations from other tech-besotted wealthies – published an open letter titled ‘Pause Giant AI Experiments: An Open Letter.’ This letter was joined by similarly themed statements from OpenAI (‘Planning for AGI and beyond’) and Microsoft (‘Sparks of Artificial General Intelligence: Early experiments with GPT-4’).

Each of these documents has received strong criticism from people such as yours truly and others with more notoriety, and for good reason: they promote the idea that the imprecisely defined Artificial General Intelligence (AGI) is not only possible but inevitable. Critiques of this idea – whether based on a detailed analysis of mathematics (‘Reclaiming AI as a theoretical tool for cognitive science’) or of computational limits (‘The Computational Limits of Deep Learning’) – have the benefit of being firmly grounded in material reality.

But as Freud might have warned us, we live in a society shaped not only by our understanding of the world as it is but also, in no small part, by dreams and fantasies. White supremacists harbor the self-congratulating fantasy that any random white person (well, man) is an astounding genius when compared to those not in that club. This notion endures, despite innumerable daily examples to the contrary, because it serves the interests of certain individuals and groups to persist in delusion and to impose this delusion on the world. The ‘risk of extinction’ fantasy has caught on because it builds on decades of fiction, like the idea of an American Dream, and adds spice to an otherwise deadly serious and grounded business: controlling the tech industry’s scope of action. Journalists who ignore the actual harms of algorithmic systems rush to write stories about a ‘risk of extinction’, which is far sexier than talking about the software now called ‘AI’ that is used to deny insurance benefits or determine criminal activity.

The European Union’s Artificial Intelligence Act does not explicitly reference ‘existential risk’, but the social media post using this idea is noteworthy. It shows that, lurking in the background, the ideas promoted by the tech industry – by OpenAI, its paymaster Microsoft and innumerable camp followers – have seeped into the thinking of decision makers at the highest levels.

And how could it be otherwise? How flattering to think you’re rescuing the world from Skynet, the fictional, nuclear missile tossing system featured in the ‘Terminator’ franchise, rather than trying, at long last, to actually regulate Google.

***

References

European Union

A European approach to artificial intelligence

EU Artificial Intelligence Act

EU Post on X

Critique

Timnit Gebru on Tescreal (YouTube)

The Acronym Behind Our Wildest AI Dreams and Nightmares (on TESCREAL)

The EU still needs to get its AI Act together

Reclaiming AI as a theoretical tool for cognitive science

The Computational Limits of Deep Learning

Boosterism

Pause Giant AI Experiments: An Open Letter

Planning for AGI and beyond

Sparks of Artificial General Intelligence: Early experiments with GPT-4

Microsoft: A Materialist Approach

When we think about the tech industry, images of smoothly functioning machines, moving the world inexorably towards a brilliant future, may dance across our minds. This is no accident; the industry, since its birth in its present form in the 1990s (deriving profits from software and the proliferation of software methods as broadly as possible), has cultivated and encouraged this view with the help of an uncritical tech press.

What’s lacking is a consideration and acknowledgement of the materialist aspects of the industry. By ‘materialist’ I’m referring to the nuts and bolts of how things work: the actual business of software and its place within political economy. Although the tech industry, with its flair for presentation and compliant press coverage, has successfully sold itself as fundamentally different from other economic sectors (say, coal mining), what it shares with all other forms of business activity within capitalism is an emphasis on profit as the only true goal. Once we re-center an understanding of profit as the objective, things that seem inexplicable, or against a corporation’s ‘culture’, come into focus.

Which brings me to Microsoft and my new podcast.

For decades – almost since the company hit its near-monopoly stride as an arbiter of desktop software used by companies large and small and by consumers – I have worked with Microsoft technologies at what the industry calls ‘at-scale’, for multinational companies across the globe. This has provided me with an understanding of two sides of a coin: how Microsoft works, and how its software and other products are used by its corporate customers. From SQL Server databases for banks to Azure-hosted machine learning APIs used by so-called AI start-ups, I have seen, and continue to see, if not all, a very broad swath.

This is the basis for an analysis of Microsoft from a materialist perspective. Capitalism, from this view, is not taken as a given but as a system which developed over time and was imposed upon the world. In this podcast, we will use Microsoft as the focal point for a review of the software aspect of this system in its present form. I hope you come along.


Spotify

RSS

Soundcloud

Website