Microsoft: A Materialist Approach

When you think about the tech industry, images of smoothly functioning machines, moving the world inexorably towards a brilliant future, may dance across your mind. This is no accident; the industry, since its birth in its present form in the 1990s (deriving profits from software and the proliferation of software methods as broadly as possible), has cultivated and encouraged this view with the help of an uncritical tech press.

What’s lacking is a consideration and acknowledgement of the materialist aspects of the industry. By ‘materialist’ I’m referring to the nuts and bolts of how things work: the actual business of software and its place within political economy. Although the tech industry, with its flair for presentation and compliant press coverage, has successfully sold itself as fundamentally different from other economic sectors (say, coal mining), what it shares with all other forms of business activity within capitalism is an emphasis on profit as the only true goal. Once we re-center an understanding of profit as the objective, things that seem inexplicable or at odds with a corporation’s ‘culture’ come into focus.

Which brings me to Microsoft and my new podcast.

For decades – almost since the company hit its near-monopoly stride as an arbiter of desktop software used by companies large and small, as well as consumers – I have worked with Microsoft technologies at what the industry calls ‘at-scale’ for multinational companies across the globe. This has given me an understanding of two sides of a coin: how Microsoft works and how its software and other products are used by its corporate customers. From SQL Server databases for banks to Azure cloud-hosted machine learning APIs used by so-called AI start-ups, I have seen, and continue to see, if not all of it, a very broad swath.

This is the basis for an analysis of Microsoft from a materialist perspective. Capitalism, from this view, is not taken as a given but understood as a system which developed over time and was imposed upon the world. In this podcast, we will use Microsoft as the focal point for a review of the software aspect of this system in its present form. I hope you come along.


Spotify

RSS

Soundcloud

Website

Escape from Silicon Valley (alternative visions of computation)

Several years ago, there was a mini-trend of soft documentaries depicting what would happen to the built environment if humans somehow disappeared from the Earth. How long, for example, would untended skyscrapers punch against the sky before they collapsed in spectacular, downward cascading showers of steel and glass onto abandoned streets? These are the sorts of questions posed in these films.

As I watched these soothing depictions of a quieter world, I sometimes imagined a massive orbital tombstone, perhaps launched by the final rocket engineers, onto which was etched: Wasted Potential.


As I type these words, billions of dollars, along with barely tabulated amounts of electrical power, water and human labor (barely tabulated because deliberately obscured), have been devoted to large language model (LLM) systems such as ChatGPT. If you follow the AI-critical space, you’re familiar with the many problems produced by the use and promotion of these systems – including, on the hype end, the most recent gyration: a declaration of “existential risk” by a collection of tech luminaries (a category which, in a Venn diagram, overlaps with carnival barker). This use of mountains of resources to advance the profit objectives of Microsoft, Amazon and Google, among other firms not occupying their Olympian perches, is wasted potential in frenetic action.

But what of alternative visions? They exist; all is not despair. The dangerous nonsense relentlessly spewing from the AI industry is overwhelming, and countering it is a full-time pursuit. But we can’t stay stuck, as if in amber, in a state of debunking and critique. There must be more. I recommend the DAIR Institute and Logic(s) magazine as starting points for exploring other ways of thinking about applied computation.

Ideologically, AI doomerism is fueled in large measure by dystopian pop sci-fi such as Terminator. You know the story, a tale as old as the age of digital computers: a malevolent supercomputer – Skynet (a name that sounds like a product) – launches, for some reason, a war on humanity, resulting in near extinction. The tech industry seems to love ripping dystopian yarns. Judging by the now almost completely forgotten metaverse push (a year ago, almost as distant as the Pleistocene in hype-cycle time), inspired by the less-than-sunny sci-fi novel Snow Crash, we can even say that dystopian storylines are a part of business plans (what is the idea of sitting for hours wearing VR goggles if not darkly funny?).

There are also less terrible, even hopeful, fictional visions, presented via pop science fiction such as Star Trek’s Library Computer Access/Retrieval System – LCARS.


In the Star Trek: The Next Generation episode “Booby Trap,” the starship Enterprise is caught in a trap, composed of energy-sapping fields, that prevents it from using its most powerful mode of propulsion, warp drive. The ship’s chief engineer, Geordi La Forge, is given the urgent task of finding a solution. La Forge realizes that escaping this trap requires a re-configuration, perhaps even a new understanding, of the ship’s propulsion system. That’s the plot, but what most intrigues me is the way La Forge goes about trying to find a solution.

The engineer uses the ship’s computer – the LCARS system – to retrieve and rapidly parse the text of research and engineering papers going back centuries. He interacts with the computer via a combination of audio and keyboard/monitor. Eventually, La Forge resorts to a synthetic, holographic mockup of the designer of the ship’s engines, Dr. Leah Brahms, raising all manner of ethical issues, but we needn’t bother with that plot element.

I’ve created a high-level visualisation of how this fictional system is portrayed in the episode:

The ability to identify text via search, to summarize and read contents (with just enough contextual capability to be useful) and to output relevant results is rather close, conceptually, to the potential of language models. The difference between what we actually have – competing and discrete systems owned by corporations – and LCARS (besides the many orders of magnitude greater sophistication of the fictional system) is that LCARS is presented as an integrated, holistic and scoped system. LCARS is designed to be a library: it enables access to knowledge and retrieves results based on queried criteria.
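To make the comparison concrete, here is a toy sketch, in Python, of that ‘scoped library’ idea: one integrated index over a corpus that retrieves documents for a query and condenses the best matches. Everything in it is a hypothetical illustration – the Library and Document classes, the sample corpus, and the crude extractive condense() stand-in (where a real system would hand off to a language model) – not a description of LCARS or of any existing product.

# Toy sketch: an LCARS-like "scoped library" that retrieves documents
# for a query and condenses the best matches. All names and data here
# (Library, Document, the sample corpus) are hypothetical illustrations.
import math
import re
from collections import Counter
from dataclasses import dataclass


@dataclass
class Document:
    title: str
    text: str


def tokenize(text: str) -> list[str]:
    # Lowercase word tokens; skip very short, stopword-like tokens.
    return re.findall(r"[a-z]{3,}", text.lower())


class Library:
    """One integrated index over a scoped corpus: the 'holistic library'
    framing, as opposed to competing, siloed corporate systems."""

    def __init__(self, docs: list[Document]):
        self.docs = docs
        self.doc_tokens = [Counter(tokenize(d.text)) for d in docs]
        self.df = Counter()  # document frequency, for crude TF-IDF weights
        for tokens in self.doc_tokens:
            self.df.update(set(tokens))

    def retrieve(self, query: str, k: int = 3) -> list[Document]:
        """Rank documents by summed TF-IDF weight of the query terms."""
        q_terms = tokenize(query)
        n = len(self.docs)
        scores = [
            sum(tokens[t] * math.log(n / (1 + self.df[t])) for t in q_terms)
            for tokens in self.doc_tokens
        ]
        ranked = sorted(zip(scores, self.docs), key=lambda p: p[0], reverse=True)
        return [doc for score, doc in ranked[:k] if score > 0]


def condense(doc: Document, query: str, limit: int = 2) -> str:
    """Stand-in for the 'read and summarize' step: keep the sentences that
    overlap the query most. A real system would use a language model here."""
    q_terms = set(tokenize(query))
    sentences = re.split(r"(?<=[.!?])\s+", doc.text)
    ranked = sorted(sentences,
                    key=lambda s: len(q_terms & set(tokenize(s))),
                    reverse=True)
    return " ".join(ranked[:limit])


if __name__ == "__main__":
    corpus = [
        Document("Warp field decay",
                 "Energy-draining fields weaken a warp bubble. "
                 "Sustained drain can trap a vessel."),
        Document("Impulse engine maintenance",
                 "Impulse engines require regular plasma conduit checks. "
                 "They are unrelated to warp drive."),
        Document("Field geometry papers",
                 "Early papers proposed reshaping the warp field to reduce "
                 "energy loss. The geometry matters."),
    ]
    library = Library(corpus)
    query = "escaping an energy-draining trap with the warp field"
    for doc in library.retrieve(query, k=2):
        print(f"{doc.title}: {condense(doc, query)}")

The point of the sketch is architectural rather than technical: retrieval, reading and summarization operate as parts of a single, scoped library, not as competing products.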

There is a potential, latent within language models and hybrid systems – indeed, within almost the entire menagerie of machine learning methods – to create a unified computational model for a universally useful platform. This potential is being wasted, indeed suppressed, as oceans of capital, talent and hardware are poured into privately owned things such as ChatGPT. There are hints of this potential even within corporate spaces; Meta’s LLaMA, which leaked online, shows one avenue. There are surely others.


Among a dizzying collection of falsehoods, the tech industry’s greatest lie is that it is building the future. Or perhaps I should sharpen my description: the industry may indeed be building the future, but, contrary to its claims, it is not a future with human needs centered. It is possible, however, to imagine and build a different computation, and we needn’t turn to Silicon Valley’s well-thumbed library of dystopian novels to find it. Science fiction such as Star Trek (I’m sure there are other examples) provides more productive visions.