Information Technology for Activists – What is To Be Done?

Introduction

This is written in the spirit of the Request for Comments memoranda that shaped the early Internet. RFCs, as they are known, are submitted to propose a technology or methodology and to gather comments and corrections from knowledgeable community members, in the hope of becoming a widely accepted standard.

Purpose

This is a consideration of the information technology options available to politically and socially active organizations, and a high level overview of the technical landscape. The target audience is technical decision makers in groups whose political commitments challenge the prevailing order and are focused on liberation. In this document, I will provide a brief history of past patterns, compare these to current choices, and identify the problems and potential opportunities of the various models.

Alongside this blog post there is a living document posted for collaboration here. I invite discussion of ideas, methods and technologies I may have missed or be unaware of, to improve accuracy and usefulness.

Being Intentional About Technology Choices

It is a truism that modern organizations require technology services. Less commonly discussed are the political, operational, cost and security implications of this dependence from the perspective of activists. It’s important to be intentional about technology choices and deployments with these and other factors in mind. The path of least resistance, such as choosing Microsoft 365 for collaboration rather than building on-premises systems, may be the best, or least terrible, choice for an organization, but the decision to use it should come after weighing the pros and cons of other options. What follows is not an exhaustive history; I am purposefully leaving out many granular details to get to the point as efficiently as possible.

A Brief History of Organizational Computing

By ‘organizational computing’ I’m referring to the use of digital computers, arranged into service platforms, by non-governmental and non-military organizations. This section provides a high level walkthrough of the patterns that have been used in this sector.

Mainframes

IBM 360 in Computer Room – mid 1960s

The first use of digital computing at scale was the deployment of mainframe systems as centrally hosted resources. User access, limited to specialists, was provided via time sharing, in which ‘dumb’ terminals displayed the results of programs and enabled input (punch cards were also used for entering program instructions). One of the most successful systems was the IBM System/360 (operational from 1965 to 1978). Due to the expense, typical customers were large banks, universities and other organizations with deep pockets.

Client Server

Classic Client Server Architecture (Microsoft)

The introduction of personal computers in the 1980s created the raw material for networked, smaller scale systems that could supplement mainframes and give organizations the ability to host relatively modest computing platforms suited to their requirements. By the 1990s, this became the dominant model for organizations at all scales (mainframes remain in service but their usage profile became narrower – for example, running applications that require greater processing capability than is possible using PC servers).

The client server era spawned a variety of software applications to meet organizational needs, such as email servers (for example, Sendmail and Microsoft Exchange), database servers (for example, Postgres and SQL Server), web servers such as Apache, and so on. Companies such as Novell, Cisco, Dell and Microsoft rose to prominence during this time.

As the client server era matured and the need for computing power grew, companies like VMWare sold platforms that enabled the creation of virtual machines (software mimics of physical servers). Organizations that could not afford to own or rent large data centers could deploy the equivalent of hundreds or thousands of servers within a smaller number of more powerful (in terms of processing capacity and memory) systems running VMWare’s ESX software platform. Of course, the irony of this return to something like a mainframe was not lost on information technology workers whose careers spanned the mainframe and client server eras.

Cloud computing

Cloud Pattern (Amazon Web Services)

Virtualization, combined with the improved Internet access of the early 2000s, gave rise to what is now called ‘cloud.’ Among information technology workers, it was popular to say ‘there is no cloud, it’s just someone else’s computer.’ Overconfident cloud enthusiasts dismissed this as the complaint of a fading old guard, but it is undeniably true.

The Cloud Model

There are four modes of cloud computing (a brief sketch of the last mode follows the list):

  • Infrastructure as a Service (IaaS): for example, building virtual machines on platforms such as Microsoft Azure, Amazon Web Services or Google Cloud Platform
  • Platform as a Service (PaaS): for example, databases offered as a service utility, eliminating the need to create a host server
  • Software as a Service (SaaS): platforms like Microsoft 365 fall into this category
  • Function as a Service (FaaS): deployment using software development – ‘code’ – alone, with no infrastructure management responsibilities
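
To make the FaaS mode concrete, here is a minimal sketch of a serverless function in Python, using the handler signature AWS Lambda expects. The event payload and greeting logic are hypothetical placeholders; a real deployment would receive whatever the triggering service sends.

```python
import json

# Minimal Function as a Service sketch: with FaaS, this function body is the
# entire deployment artifact - there is no server, OS or runtime to manage.
def handler(event, context):
    # The platform invokes the function on demand and bills per invocation.
    # 'name' is a hypothetical field in an example event payload.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```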

A combination of perceived (but rarely realized) convenience, marketing hype and mostly unfulfilled promises of lower running costs has made the cloud model the dominant mode of the 2020s. In the 1990s and early 2000s, an organization requiring an email system was compelled to acquire hardware and software to configure and host its own platform (Microsoft Exchange running on Dell hardware or VMWare virtual machines was a common pattern). The availability of Office 365 (later, Microsoft 365) and Google’s G Suite provided another, attractive option that eliminated the need to manage systems while providing the email function.

A Review of Current Options for Organizations

Although tech industry marketing presents new developments as replacing old, all of the pre-cloud patterns mentioned above still exist. The question is, what makes sense for your organization from the perspectives of:

  • Cost
  • Operational complexity
  • Maintenance complexity
  • Security and exposure to vulnerabilities
  • Availability of skilled workers (related to the ability to effectively manage all of the above)

We needn’t include mainframes in this section since they are cost prohibitive and, today, intended for specialized, high performance applications.

Client Server (on-premises)

By ‘on-premises’ we are referring to systems that are not cloud-based. Before the cloud era, the client server model was the dominant pattern for organizations of all sizes. Servers can be hosted within a data center the organization owns or within rented space in a colocation facility (a business that provides rented space for the servers of various clients).

Using a client server model requires employing staff who can install, configure and maintain systems. These skills were once common, indeed standard, and salaries were within the reach of many mid-size organizations. The cloud era has made these skills harder to come by (although there are still many skilled and enthusiastic practitioners). A key question is: how much investment does your organization want to make in the time and effort required to build and manage its own systems? Additional considerations include software licensing and the maintenance cycles of software and hardware.

Sub-categories of client server to consider

Virtualization and Hyper-converged hardware

As mentioned above, the use of virtualization systems, offered by companies such as VMWare, was one method that arose during the heyday of client server to address the need for more concentrated computing power in a smaller data center footprint.

Hyper-converged infrastructure (HCI) systems, which combine compute, storage and networking into a single hardware chassis, are a further development of this method. HCI systems and virtualization reduce the required operational overhead. More about this later.

Hybrid architectures

A hybrid architecture uses a mixture of on-premises and off-site, typically ‘cloud’ based systems. For example, an organization’s data might be stored on-site but the applications using that data are hosted by a cloud provider.
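
As a hedged sketch of the hybrid pattern: a cloud-hosted application that treats an on-premises store as the system of record, fetching data over an encrypted connection each time it’s needed. The hostname, endpoint and token handling below are hypothetical placeholders, not a specific product’s API.

```python
import requests

# Hypothetical on-premises endpoint, reachable from the cloud-hosted
# application over HTTPS (typically via a VPN or private link).
ON_PREM_API = "https://records.example.internal/api/v1/documents"

def fetch_documents(token: str) -> list:
    # The cloud-hosted application keeps no permanent copy; the data of
    # record stays on-premises, in line with the hybrid pattern.
    response = requests.get(
        ON_PREM_API,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```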

Cloud

Software as a Service

Software as a Service platforms such as Microsoft 365 are the most popular cloud services used by firms of all types and sizes, including activist groups. The reasons are easy to understand:

  • Email services without the need to host an email server
  • Collaboration tools (SharePoint and MS Teams for example) built into the standard licensing schemes
  • Lower (but not zero) operational responsibility
  • Hardware maintenance and uptime are handled by the service provider

The convenience comes at a price, both financial, as licensing costs increase, and operational, inasmuch as organizations tend to place all of their data and workflows within these platforms, creating deep dependencies.

Build Platforms

The use of ‘build platforms’ like Azure and AWS is more complex than the consumption model of services such as Microsoft 365. Originally, these were designed to meet the needs of organizations that have development and infrastructure teams and host complex applications. More recently, the ‘AI’ hype wave has made these platforms Trojan horses for hyperscale algorithmic systems (note, as an example, Microsoft’s investment in and use of OpenAI’s large language model kit). The most common pattern is a replication of large-scale on-premises architectures using virtual machines on a cloud platform.
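
As an illustration of that pattern, here is a sketch using Amazon’s boto3 library to create a single virtual machine (an EC2 instance), the cloud equivalent of racking a new server. The machine image ID and key name are hypothetical placeholders, and every such call creates a billable resource.

```python
import boto3

# Stand up one virtual machine on a cloud build platform: the cloud-era
# version of ordering, racking and cabling a physical server.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image ID
    InstanceType="t3.micro",
    KeyName="example-admin-key",      # hypothetical SSH key pair name
    MinCount=1,
    MaxCount=1,
)

# Each instance starts billing the moment it launches - the root of many
# surprise cloud invoices.
print(response["Instances"][0]["InstanceId"])
```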

Although marketed as superior to, and simpler than, on-premises options, cloud platforms require as much, and often more, technical expertise. Cost overruns are common; cloud platforms make it easy to deploy new things, but each item generates a cost, and even small organizations can create very large bills. Security is another factor; configuration mistakes are common and there are many examples of data breaches produced by error.

Private Cloud

The key potential advantage of the cloud model is the ability to abstract technical complexity. Ideally, programmers are able to create applications that run on hardware without the requirement to manage operating systems (a topic outside the scope of this document). Private cloud enables the staging of the necessary hardware on-premises. A well known example is OpenStack, which is technically challenging to deploy and operate. Commercial options include Microsoft’s Azure Stack, which extends the Azure technology method to hyper-converged infrastructure (HCI) hosted within an organization’s data center.
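
For a flavor of what private cloud self-service looks like, here is a hedged sketch using the openstacksdk Python library against a hypothetical OpenStack deployment; the cloud profile name and resource IDs are placeholders.

```python
import openstack

# 'org-private-cloud' is a hypothetical entry in the local clouds.yaml
# configuration file describing the organization's OpenStack endpoint.
conn = openstack.connect(cloud="org-private-cloud")

# The same self-service pattern as a public cloud, but the hardware sits
# in the organization's own data center.
server = conn.compute.create_server(
    name="collab-app-01",  # hypothetical server name; IDs below are placeholders
    image_id="11111111-2222-3333-4444-555555555555",
    flavor_id="66666666-7777-8888-9999-000000000000",
    networks=[{"uuid": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```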


Information Technology for Activists – What is To Be Done?

In the recent past, the answer was simple: purchase hardware and software and install and configure it with the help of technically adept staff, volunteers or a mix. In the 1990s and early 2000s it was typical for small to midsize organizations to have a collection of networked personal computers connected to a shared printer within an office. Through the network (known as a local area network or LAN) these computers were connected to more powerful computers called servers that provided centralized storage and the means through which each individual computer could communicate in a coordinated manner and share resources. Organizations often hosted their own websites, made available to the Internet via connections from telecommunications providers.

Changes in the technology market since the mid 2000s, pushed to increase the market dominance and profits of a small group of firms (primarily Amazon, Microsoft and Google), have limited options even as they appear to offer greater convenience. How can these constraints be navigated?

Proposed Methodology and Doctrines

Earlier in this document, I mentioned the importance of being intentional about technology usage. In this section, more detail is provided.

Let’s divide this into high level operational doctrines and build a proposed architecture from that.

First Doctrine: Data Sovereignty

Organizational data should be stored on-premises using dedicated storage systems, rather than in a SaaS platform such as Microsoft 365 or Google Workspace.

Second Doctrine: Bias Towards Hybrid

By ‘hybrid’ I am referring to system architectures that utilize a combination of on-premises and ‘cloud’ assets.

Third Doctrine: Bias Towards System Diversity

This might also be called the right tool for the right job doctrine. After consideration of relevant factors (cost, technical ability, etc.), an organization may decide to use Microsoft 365 (for example) to provide some services, but other options should be explored in the areas of:

  • Document management and related real time collaboration tooling
  • Online Meeting Platforms
  • Database platforms
  • Email platforms

Commercial platforms offer integration methods that make it possible to create an aggregated solution from disparate tools, as the sketch below illustrates.
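
As one hedged sketch of such an aggregation, consistent with the data sovereignty doctrine: a script that pulls mail from a SaaS email platform via its API (here, Microsoft’s Graph API) and archives it to on-premises storage. The token acquisition and archive path are hypothetical, and this is an illustration of the pattern rather than a production tool.

```python
import json
import pathlib

import requests

# Microsoft Graph endpoint for the signed-in user's messages; the OAuth
# access token must be obtained separately (not shown, many flows exist).
GRAPH_MESSAGES = "https://graph.microsoft.com/v1.0/me/messages"

# Hypothetical on-premises path serving as the organization's system of record.
ARCHIVE_DIR = pathlib.Path("/srv/mail-archive")

def archive_messages(access_token: str) -> int:
    ARCHIVE_DIR.mkdir(parents=True, exist_ok=True)
    resp = requests.get(
        GRAPH_MESSAGES,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    messages = resp.json().get("value", [])
    for msg in messages:
        # The SaaS delivers the mail; the durable copy lives on-premises.
        (ARCHIVE_DIR / f"{msg['id']}.json").write_text(json.dumps(msg, indent=2))
    return len(messages)
```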

These doctrines can be applied as guidelines for designing an organizational system architecture.

Any such architecture is only one option. More are possible depending on the aforementioned factors of:

  • Cost
  • Operational complexity
  • Maintenance complexity
  • Security and exposure to vulnerabilities
  • Availability of skilled workers (related to the ability to effectively manage all of the above)

I invite others to add to this document to improve its content and sharpen the argument.


Activist Documents and Resources Regarding Alternative Methods

Counter Cloud Action Plan – The Institute for Technology In the Public Interest

https://titipi.org/pub/Counter_Cloud_Action_Plan.pdf

Measurement Network

“measurement.network provides non-profit network measurement support to academic researchers”

https://measurement.network

Crisis, Ethics, Reliability & a measurement.network – Tobias Fiebig, Max-Planck-Institut für Informatik, Saarbrücken, Germany

https://dl.acm.org/doi/pdf/10.1145/3606464.3606483

Tobias Fiebig, Max-Planck-Institut für Informatik, and Doris Aschenbrenner, Aalen University

https://dl.acm.org/doi/pdf/10.1145/3538395.3545312

Decentralized Internet Infrastructure Research Group Session Video

“Oh yes! over-preparing for meetings is my jam :)”: The Gendered Experiences of System Administrators

https://dl.acm.org/doi/pdf/10.1145/3579617

Revolutionary Technology: The Political Economy of Left-Wing Digital Infrastructure by Michael Nolan

https://osf.io/hva2y/


References in the Post

RFC

https://en.wikipedia.org/wiki/Request_for_Comments

Openstack

https://en.wikipedia.org/wiki/OpenStack

Self Hosted Document Management Systems

https://noted.lol/self-hosted-dms-applications/

Teedy

https://teedy.io/?ref=noted.lol#!/

Only Office

https://www.onlyoffice.com/desktop.aspx

Digital Ocean

https://www.digitalocean.com/

IBM 360 Architecture

https://www.researchgate.net/figure/BM-System-360-architectural-layers_fig2_228974972

Client Server Model

https://en.wikipedia.org/wiki/Client–server_model

Mainframe

https://en.wikipedia.org/wiki/Mainframe_computer

Virtual Machine

https://en.wikipedia.org/wiki/Virtual_machine

Server Colocation

https://www.techopedia.com/definition/29868/server-colocation

What is server virtualization

https://www.techtarget.com/searchitoperations/definition/What-is-server-virtualization-The-ultimate-guide

Letter to an AI Researcher

[In this post, I imagine that I’m writing to a researcher who, disappointed and perhaps confused by the seemingly unstoppable corporate direction their field is taking, needs a bit of, well, not cheering up precisely, but something to help them understand what it all means and how to resist]

My friend,

Listen, I know you’ve been thrown by the way things have been going for the past few years – really, the past decade: a step by step privatization of the field you love, an education pursued at significant financial cost (you’re not a trust funder) because of your desire to understand cognition and, just maybe, build systems that, through their cognitive dexterity, aid humanity (vainglorious, but why not aim high?). You thought of people such as McCarthy, Weizenbaum, Minsky and Shannon and hoped to blaze trails, as they did.


When OpenAI hit the scene in 2015, with the promise – in its very name – to be an open home for advanced research, you celebrated. Over wine, we argued (that’s too strong, more like warmly debated with increasing heat as the wine flowed) about the participation of sinister figures such as Musk and Thiel. At the time, Musk was something of a hero to you, and Thiel? Well, he was just a quirky VC with deep pockets and an overlooked penchant for ideas that are a bit Goebbels-esque. “Form follows function,” I said, “and the function of these people is to find ways to generate profit and pretend they’re gods.” But we let that drop over glasses of chardonnay.

Here we are, in 2023… which for you, or more pointedly your dreams, has become an annus horribilis, a horrible year. OpenAI is now married to Microsoft and the much anticipated release of GPT-4 is, in its operational and environmental impact details, shrouded in deliberate mystery. AI ethics teams are discarded like used tissues – there is an air of defeat as the idea of the field you thought you had joined dies the death of a thousand cuts.

Now is the time to look around and remember what I told you all those years ago: science and engineering (and your field contains both) do not exist outside of the world but are very much in it, and are subject to a reality described by the phrase you’ve heard me say a million times: political economy. Our political economy – or, I should say, the political economy (the interrelations of law, production, custom and more) we’re subject to – is capitalist. What does this mean for your field?

It means that the marriage between OpenAI and Microsoft, the integration of large language models with the Azure cloud and the M365 SaaS platforms, the elimination of ethics teams whose work might challenge or impede marketing efforts, the reckless proliferation of algorithmically enacted harms – all of it happens because the real goal is profit, which is at the heart of capitalist political economy.

And we needn’t stop with Microsoft; there is no island to run to, no place that is outside of this political economy. No, not even if your team and leadership are quite lovely. This is a totalitarian (or, if you’re uncomfortable with that word, hegemonic) system which covers the globe in its harsh logic.

Oh, but now you’re inclined to debate again and it’s too early for wine. I can hear you saying, ‘We can create an ethical AI; it’s possible. We can return to the research effort of years past.’ I won’t say it’s impossible – stranger things have presumably happened in the winding history of humanity – but taking the whole fetid situation into account – yes, the relationship between access to computation and socio-technical power, the political economy – it’s not probable. So long as you continue believing in something that the structure of the society we live in does not support, you will continue to be disappointed.

Unless, that is, that structure is changed.


What is to be done?

I don’t expect you to become a Marxist (though it would be nice, we could compare obscure notes about historical materialism), but what I’m encouraging you to consider is that the world we grew up in and, quite naturally, take for granted as immutable – the world of capitalist social relations, the world which, among other less than fragrant things, has all but completely absorbed your field into its profit engine – is not the only way to organize human society.

Once you accept that, we can begin to talk about what might come next.

ChatGPT: Super Rentier

I have avoided writing about ChatGPT as one might hurriedly walk past a group of co-workers gathered around a box of donuts, talking about a popular movie or show, to avoid being drawn into the inevitable.

In some circles, certainly the circles I travel in, ChatGPT is the relentless talk of the town. Everyone from LinkedIn hucksters who claimed to be making millions from the platform, only moments after it was released, to the usual ‘AI’ enthusiasts who take any opportunity to sweatily declare a new era of machine intelligence upon us – and of course, a scattering of people carefully analyzing the actually existing nuts and bolts – everyone seems to be promoting, debating and shouting about ChatGPT.

You can imagine me, dear reader, in the midst of this drama, quietly sitting in a timeworn leather chair, slowly sipping a glass of wine while a stream of text, video and audio, all about ChatGPT, that silicon would-be Golem, washes over me.

What roused me from my torpor was the news that Microsoft was investing 10 billion dollars in OpenAI, the organization behind ChatGPT and other ballyhooed large language model systems (see: “Microsoft’s $10bn bet on ChatGPT developer marks new era of AI”). Even for Microsoft, that’s a lot of money. Behind all this is Microsoft’s significant investment in what it calls purpose built AI supercomputers, such as VOYAGER-EUS2, to train and host platforms such as ChatGPT. Although tender-minded naifs believe corporations are using large scale computation to advance humanity, more sober minds are inclined to ask fundamental questions, such as: why?

The answer came from the Microsoft article, “General availability of Azure OpenAI Service expands access to large, advanced AI models with added enterprise benefits.” Note that phrase: ‘enterprise benefits.’ The audience for this article is surely techie and techie adjacent (and here, I must raise my hand), but even if neither of these categories describes you, I suggest giving it a read. There’s also an introductory video providing a walkthrough of the OpenAI tooling that’s mediated via the Microsoft Azure cloud platform.

Microsoft Video on OpenAI Platforms, Integrated with Azure

As I watched this video, the purpose of all those billions and the hardware they bought became clear to me; Microsoft and its chief competitors, Amazon and an apparently panicked Google (plus less well known organizations), are seeking to extend the rentier model of cloud computing – which turns computation, storage and database services into a rented utility and recurring revenue source for the cloud firm that maintains the hardware, even for the largest corporate customers – into the ‘AI’ space, creating super rentier platforms which will spawn subordinate, sub-rentier platforms:

Imagine the following…

A San Francisco based startup – let’s give it a terrible name, Talkist – announces it has developed a remarkable, groundbreaking chat application (and by the way, ‘groundbreaking’ is required alongside ‘next generation’) which will enable companies around the world to replace customer service personnel with Talkist’s ‘intelligent’, ‘ethical’ system. Talkist, which consists of only a few people (mostly men) and a stereotypical ‘visionary’ leader, probably wearing a thousand dollar t-shirt, doesn’t have the capital, or the desire, to build the computational infrastructure required to host such a system.

This is where the Azure/OpenAI complex of systems comes to the rescue of our plucky band of well-funded San Franciscans. Instead of diverting precious venture capital into purchasing data center space and the computers to fill it, that money can be poured into creating applications which utilize Microsoft/OpenAI cloud services. Microsoft/OpenAI rent ‘AI’ capabilities to Talkist who, in turn, rent ‘AI’ capabilities to other companies who think they can replace people with text generating, pattern matching systems (ironically, OpenAI itself is dependent on exploited labor, as the Time Magazine article “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic” shows).

What a time to be alive.

Of course, the uses (and from the perspective of profit-driven organizations, cost savings) don’t end with chatty software. We can imagine magazines and other publications, weary of having to employ troublesome human beings with their demands for salaries, health care and decent lives (The gall! Are there no workhouses? Are there no prisons?), rushing to use these systems to ‘write’ – or perhaps we should say, mechanistically assemble – articles and news stories, reducing the need for writers, who are an annoying class (I wink at you, dear reader, for I am the opposite of annoying – being a delightful mixture of cologne, Bordeaux and dialectical analysis). Unsurprisingly, and let’s indulge our desire for a bit of the old schadenfreude, there are problems, such as those detailed in the articles “CNET Is Reviewing the Accuracy of All Its AI-Written Articles After Multiple Major Corrections” and “CNET’s AI Journalist Appears to Have Committed Extensive Plagiarism.”

Of all the empires that have stalked the Earth, the tech imperium is, perhaps, the bullshitiest. The Romans derived their power from myths, yes, but also roads, aqueducts and organized violence – real things in a real world. The US empire has its own set of myths, such as the belief that sitting in a car, in traffic, is the pinnacle of freedom, and in meritocracy (a notion wielded by the most mediocre minds to explain their comforts). Once again, however, real things, such as possessing the world’s reserve currency and the capacity for ultra-violence, lurk behind the curtain.

The tech empire, by contrast, is built, using the Monorail maneuver detailed in this Simpsons episode, on false claims prettily presented. It has inserted itself between us and the things we need – information, memories, creativity. The tech industry has hijacked a variety of commons and then rents us access to what should be open. In its ‘AI’ incarnation, the tech industry attempts to replace human reason with computer power – a fool’s errand, which computer scientist Joseph Weizenbaum dissected almost 50 years ago, but a goal motivated by a desire to increase the rate of profit in an era of creeping stagnation by reducing the need for labor.

Rather than being a refutation of Marx and Engels’s analysis, as some, such as Yanis Varoufakis with his ‘cloudalist’ hypothesis, bafflingly claim, we are indeed still very much dealing with the human grinding workings of capitalist logic, wearing a prop science fiction film costume and claiming to have come in peace.

ChatGPT isn’t a research platform or the herald of a new age of computation; it is the embodiment of the revenue stream dreams of the tech industry, the super-rentier.

Magic is an Industrial Process, Belching Smoke and Fire: On GPUs

AT THE END of ‘The Wizard of Oz’, Metro-Goldwyn-Mayer’s surrealist musical fantasy released in 1939, our heroine Dorothy and her loyal comrades complete a long, arduous (but song filled) journey, finally reaching the fabled city of Oz. In Oz, according to a tunefully stated legend, there’s a wizard who possesses the power to grant any wish, no matter how outlandish. Dorothy, marooned in Oz, only wishes to return home and for her friends to receive their various hearts’ desires.

Who Dares Approach Silicon Valley!

As they cautiously approach the Wizard’s chamber, Dorothy and her friends are met with a display of light, flame and sound; “who dares!?” a deafening voice demands. It’s quite a show of apparent fury, but the illusion crumbles when it’s revealed (by Dorothy’s dog, Toto) that behind it all is a rather ordinary man, hidden on the other side of a velvet curtain, frantically pulling levers and spinning dials to keep the machinery powering the illusion going while shouting, “pay no attention to that man behind the curtain!”

Behind the appearance of magic, there was a noisy industrial process, belching smoke. Instead of following the Wizard’s advice to pay no attention, let’s pay very close attention indeed to what lies behind appearances.


THERE’S AN INESCAPABLE MATERIALITY behind what’s called ‘AI’, deliberately obscured under a mountain of hype, flashy images and claims of impending ‘artificial general intelligence’ – or ‘AGI’, as it’s known in sales brochures disguised as scientific papers.

At the heart of the success of techniques such as large language models, starting in the latter 2010s, is the graphics processing unit or GPU (in this essay about Meta’s OPT-175B, I provide an example of how GPUs are used). These devices use a parallel architecture, which enables far greater performance on parallelizable workloads than the general purpose processors in your laptop; this vastly greater capability is the reason GPUs are commonly used for demanding applications such as games and, now, the hyper-scale pattern matching behind so-called ‘AI’ systems.

Typical GPU Architecture – ResearchGate
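
As a loose analogy for the data-parallel style GPUs are built around (this sketch runs on an ordinary CPU via numpy; a real GPU applies the same one-operation-over-many-elements idea across thousands of cores):

```python
import numpy as np

# Dense linear algebra - one operation applied across a large block of
# data at once - is the style of computation GPUs accelerate and the
# workload that dominates 'AI' training and inference.
weights = np.random.rand(4096, 4096)
inputs = np.random.rand(4096)

# A single vectorized matrix-vector product, rather than an explicit
# element-by-element loop.
activations = weights @ inputs
print(activations.shape)  # (4096,)
```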

All of the celebrated feats of ‘AI’ – platforms such as Dall-E, GPT-3 and so on – are completely dependent on some form of GPU, most likely provided by NVIDIA, the leading company in this space. OpenAI, a Microsoft partner, uses that company’s Azure cloud, but within those ‘cloud’ data centers there are thousands upon thousands of GPUs, consuming power and requiring near constant monitoring to replace failed units.

GPUs are constructed as the result of a long and complex supply chain involving resource extraction, manufacturing, shipping and distribution; even a sales team. ‘AI’ luminaries and their camp followers, the army of bloggers, podcasters and researchers who promote the field, routinely and self-indulgently debate a variety of esoteric topics (if you follow the ‘AI’ topic on Twitter, for example, odds are you have observed, and perhaps participated in, discussions about vague topics such as ‘the nature of intelligence’) but it’s GPUs and their dependencies all the way down.

GPU raw and processed material inputs include aluminum, copper, clad laminates, glass fibers, thermal silica gel, tantalum and tungsten. Every time an industry partisan tries to ‘AI’-splain the field, declaring it to be a form of magic, ignore their over-determination and their confusion of feedback loops with cognition, and think of those raw materials, ripped from the ground.

Aluminum mining

The ‘AI’ industrial complex is beset by two self-serving fantasies:

1. We are building intelligence.

2. The supply chain feeding the industry is infinite and can ‘scale is all you need’ its way forever to a brave new world.

For now, this industry has been able to keep the levers and dials moving, but the amount of effort required will only grow as the uses to which this technology is put expand (Amazon alone seems determined to find as many ways to consume computational infrastructure as possible, with a devil take the hindmost disregard for consequences), the need for processors grows, and global supply chains are stressed by factors such as climate change and geopolitical fragmentation.

The Wizards, out of tricks, curtains pulled, will be revealed as the ordinary (mostly) men they are. What comes next, will be up to us.

Some Key References:

Wizard of Oz

Dall-E

GPT-3

GPU Supply Chain

NVIDIA

A Materialist Approach to the Tech Industry

[In this post, Monroe thinks aloud about his approach to analyzing the tech industry, a term which, annoyingly, is almost exclusively used to describe Silicon Valley based companies that use software to create rentier platforms and not, say, aerospace and materials science firms. The key concept is materialism.]


Few industries are as shrouded by mystification as the tech sector, defined here as that segment of the industrial and economic system whose wealth and power have been built by acting as the unavoidable foundation of all other activity – by building rentier, software-based platforms, shielded by copyright, that are difficult, indeed often impossible, to circumvent (an early example is the method Microsoft used to extract, via its monopoly position in corporate desktop software, what was called the ‘Microsoft or Windows tax’).

Consider, as a contrasting example, a paper clip company: if it were named something self-consciously clever, such as Phase Metallics, it wouldn’t take long for most of us to see through this vainglory and say: ‘calm down, you make paper clips’.

An instinctual grounding of opinion, shaped and informed by the irrefutable physicality of things like paper clips, is lacking when we assess the claims of ‘tech’ companies. The reason is that the industry has successfully obscured, with a great deal of help from the tech press and media generally, the material basis of its activities. We use computers but do not see the supply chains that enable their production. We use software but are encouraged to view software developers (or ‘engineers’, or ‘coders’) as akin to wizards, not people creating instruction sets.

Computers and software development are complex artifacts and tasks but not more complex than physics or civil engineering. We admire the architects, engineers and construction workers who design and build towering structures but, even though most of us don’t understand the details, we know these achievements have a physical, material basis and face limitations imposed by nature and our ability to work within natural constraints.

The tech sector presents itself as being outside of these limitations and most people, intimidated by insider jargon, the glamour of wealth and the twin delusions of techno-determinism (which posits a technological development as inevitable) and techno-optimism (which asserts there’s no limit to what can be achieved) are unable to effectively counter the dominant narrative.

Lithium Mine – extracting a key element used in computing

The tech industry effectively deploys a degraded form of Platonic idealism (which places greater emphasis on our ideas of the world than the actually existing structure of the world itself). This idealism prevents us from thinking clearly about the industry’s activities and its role in, and impact on, global political economy (the interrelation of economic activity with social custom, legal frameworks, government, and power relations). One of the consequences of this idealist preoccupation is that, when we’re analyzing a press account of tech activities, for example, stories about autonomous cars, instead of interrogating the assumption that driverless vehicles are possible and inevitable, we base our analysis on an idealist claim, thereby going astray and inadvertently allowing our class adversaries to define the boundaries of discussion.

The answer to this idealism, and the propaganda crafted using it, is a materialist approach to tech industry analysis.

Materialism (also known as physicalism)

Let’s take a quote from the Stanford Encyclopedia of Philosophy:

Physicalism is, in slogan form, the thesis that everything is physical. The thesis is usually intended as a metaphysical thesis, parallel to the thesis attributed to the ancient Greek philosopher Thales, that everything is water, or the idealism of the 18th Century philosopher Berkeley, that everything is mental. The general idea is that the nature of the actual world (i.e. the universe and everything in it) conforms to a certain condition, the condition of being physical. Of course, physicalists don’t deny that the world might contain many items that at first glance don’t seem physical — items of a biological, or psychological, or moral, or social, or mathematical nature. But they insist nevertheless that at the end of the day such items are physical, or at least bear an important relation to the physical.

Stanford Encyclopedia of Philosophy – https://plato.stanford.edu/entries/physicalism/

This blog is dedicated to ruthlessly rejecting tech industry idealism in favor of tracking the hard physicality and real-world impacts of computation in all of its flavors. In this sense, the focus is materialist. Key concerns include:

  • Investigating the functional, computational foundation of platforms, such as Apple’s walled garden and Facebook
  • Exploring the physical inputs into the computational layer and the associated costs (in ecological, political economy and societal impact terms)
  • Asking who, and what factors shape the creation and deployment of software at-scale – i.e., what is the relationship between software and power

This blog’s analytical foundation is unequivocally Marxist and seeks to employ Marx and Engels’s grounding of Hegelian dialectics (an ongoing project, subject to endless refinement as understanding improves):

Marx’s criticism of Hegel asserts that Hegel’s dialectics go astray by dealing with ideas, with the human mind. Hegel’s dialectic, Marx says, inappropriately concerns “the process of the human brain”; it focuses on ideas. Hegel’s thought is in fact sometimes called dialectical idealism, and Hegel himself is counted among a number of other philosophers known as the German idealists. Marx, on the contrary, believed that dialectics should deal not with the mental world of ideas but with “the material world”, the world of production and other economic activity.[19] For Marx, a contradiction can be solved by a desperate struggle to change the social world. This was a very important transformation because it allowed him to move dialectics out of the contextual subject of philosophy and into the study of social relations based on the material world.

Wikipedia “Dialectical Materialism” – https://en.wikipedia.org/wiki/Dialectical_materialism

This blog is, therefore, dedicated to finding ways to apply the Marx/Engels conceptualization of materialism to the tech industry.

Conclusion

When I started my technology career, almost 20 years ago, like most of my colleagues, I was an excited idealist (in both the gee whiz and philosophical senses of the term) who viewed this burgeoning industry as breaking old power structures and creating newer, freer relationships (many of us, for example, really thought Linux was going to shatter corporate power just as some today think ‘AI’ is a liberatory research program).

This was an understandable delusion, the result of youthful enthusiasm but also the hegemonic ideas of that time. These ideas – of freedom, ‘innovation’ and creativity – are still deployed today but, like crumbling Roman ruins, are only a shadow of their former glory.

The loss of dreams can lead to despair but, to paraphrase Einstein, if we look deeply into the structures of things as they are, instead of as we want them to be, we can feel, instead of despair, a new type of invigoration: the falling away of childlike notions and a proper identification of enemies and friends.

A materialist approach to the tech industry removes the blinders from one’s eyes and reveals the full landscape.

Cloud Technology: A Quick(ish) Guide for the Left

[I’m writing a longer piece about cloud computing from a personal and political economy perspective – because I was there at the start and remain in the thick of it as a ‘cloud architect’, I have both thoughts and experience. In this short piece, I use the recent Amazon Web Services disruption to discuss the role of power in ‘cloud’, or what should more precisely be called utility computing]


The 7 December 2021 Amazon Web Services (or AWS) ‘outage’ has brought the use of cloud computing generally, and the role of Amazon in the cloud computing market specifically, to the attention of a general, non-technical audience [by the way, outage is in single quotes to appease the techies who’ll shout: it’s a global platform, it didn’t go down, there was a regional issue! and so on]

Outage in the total sense or not, the event impacted a large number of companies, many of which are global content providers, such as Disney and Netflix, services such as Ring, and even Amazon’s internal processes that utilize the same computational infrastructure.

Before the cloud era, each of these companies might have made large investments in maintaining their own data centers to house the computers, storage and networking equipment required to run a Disney+ or HBO Max platform. In the second decade of the 2000s (really gaining momentum around 2016), the use of, at first, Amazon Web Services, and then Microsoft’s Azure and Google’s Cloud Platform, offered companies the ability to reduce – or even eliminate – the need to support a large technological infrastructure to fulfill the command and control functions computation provides for capitalist enterprises.

Computation, storage and database – the three building blocks of all complex platforms – are now available as a utility, consumable in a way, not entirely different from the consumption of electricity or water (an imperfect analogy since, depending on the type of cloud service used, more or less technical effort is required to tailor the utility portfolio to an organization’s needs).


What is Cloud Computing? What is its Political Economy? What are the Power Dynamics?

Popular Critical Meme from Earlier in the Cloud Era

A full consideration of the technical aspects of cloud computing would make this piece go from short(ish) to a full position paper (something I’ll address in that bigger essay I’m working on which I mentioned at the top). So, let’s answer the ‘what’ question by referring to what’s considered the urtext within the industry: the NIST definition of cloud computing

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models.

https://csrc.nist.gov/publications/detail/sp/800-145/final

The NIST document goes on to define the foundational service types and behaviors (a brief self-service sketch follows the list):

  • SaaS – Software as a Service (think Microsoft 365 or any of the other web-based, subscription services that stop working if your credit card is rejected)
  • PaaS – Platform as a Service (popular industry examples are databases such as Amazon’s DynamoDB, Azure SQL or Google Cloud SQL)
  • IaaS – Infrastructure as a Service (commonly used to create what are called virtual machines – servers – on a cloud platform instead of within a system hosted by a company in their own data center)
  • On-demand Self Service (which means, instead of having to get on the phone to Amazon saying, ‘hey, can you create a database for me?’, you can do it yourself using the tools available on the platform)
  • Resource Pooling (basically, there are always resources available for you to use – this is a big deal because running out of available resources is a common problem for companies that roll their own systems)
  • Rapid Elasticity (have you ever connected to a website, maybe for a bank, and had it slow to a crawl or become unresponsive? That system is probably stressed by demand beyond its ability to respond. Elasticity is designed to solve this problem and it’s one of the key advantages of cloud platforms)
  • Measured Service (usage determines cost, which is a new development in information technology. Finance geeks – and moi! – call this OPEX or operational expense and you better believe that beyond providing a link I’m not getting into that now)
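
To ground ‘on-demand self service’ and ‘measured service’, here is a hedged sketch using Amazon’s boto3 library: a few lines that provision a managed database table with no human on the provider’s side, billed by usage. The table name and schema are hypothetical examples.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# On-demand self service: the 'hey, can you create a database for me?'
# phone call replaced by an API call. 'OrgContacts' is a hypothetical table.
dynamodb.create_table(
    TableName="OrgContacts",
    AttributeDefinitions=[{"AttributeName": "email", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "email", "KeyType": "HASH"}],
    # Measured service: billed per read/write rather than as a flat fee.
    BillingMode="PAY_PER_REQUEST",
)
```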

To provide a nice picture, which I’m happy to describe in detail if you want (hit me up on Twitter), here’s what a cloud architecture looks like (from the AWS reference architecture library):

AWS Content Analysis Reference Architecture

There are a lot of icons and technical terms in that visual which we don’t need to get into now (if you’re curious, here’s a link to the service catalog). The main takeaway is that, with a cloud platform – in this case AWS, but this is equally true of its competitors – it’s possible to assemble service elements into an architecture that performs a function (or many functions). Before the cloud era, this would have required ordering servers, installing them in data centers, keeping those systems cool and various other maintenance tasks that still occasionally give me nightmares from my glorious past.

Check out this picture of a data center from Wikipedia. I know these spaces very well indeed:

Data Center (from Wikipedia)

And to be clear, just because these reference architectures exist (and can be deployed – or, installed) that does not mean an organization is restricted to specific designs. There’s a toolbox from which you can pull what you need, designing custom solutions.

So, perhaps now you can understand why Disney, for example, when deciding to build a content delivery platform, chose to create it using a cloud platform – which enables rapid deployment and elastic response – instead of creating their own infrastructure, which they’d have to manage.

Of course, this comes with a price (and I’m not just talking about cash money).

Computer Power is Power and the Concentration of that Power is Hyper Power

Now we get to the meat of the argument which I’ll bullet point for clarity:

  • Computer power is power (indeed, it is one of the critical command and control elements of modern capitalist activity)
  • The concentration of computer power into fewer hands has both operational and political consequences (the operational consequences were on display during the 7 December AWS outage – yeah, I’m calling it an outage, cloud partisans, deal)
  • The political consequence of the concentration of computer power is the creation of critical infrastructure in private hands – a superstructure of technical capability that surrounds the power of other elements of capitalist relationships.

To illustrate what I mean, consider this simple diagram which shows how computer capacity has traditionally been distributed:

Note how every company, with its own data center, is a self-contained world of computing power. The cloud era introduces this situation:

Note the common dependency on a service provider. The cloud savvy in the audience will now shout, in near unison: ‘but if organizations follow good architectural principles and distribute their workloads across regions within the same cloud provider for resiliency and fault tolerance (yes, we talk this way) there wouldn’t be an outage!’

What they’re referring to is this:

AWS Global Infrastructure Map Showing (approximate) Data Center Locations

From a purely technical perspective, the possibility of minimizing (or perhaps even avoiding) service disruption by designing an application – for example, a streaming service – to come from a variety of infrastructural locations, while true, entirely misses the point…

Which is that the cloud era represents the shift of a key element of power from a broadly distributed collection of organizations to, increasingly, a few North American cloud providers.

This has broader implications which I’ll explore in greater detail in my upcoming piece.

UPDATE 11 Dec

Amazon has posted an explanation of the outage (known in the industry as a root cause analysis). I’ll be digging into this in detail soon.