The Future Circles the Drain

There’s a story we tell ourselves, a lullaby really, which is that science fiction is a predictor of the terrain of that magical land, always just over the horizon: ‘the future.’ This story is deeply embedded in the consciousness of US’ians (no, I’m not calling people from the US alone ‘Americans,’ as if the rest of the Americas were in another hemisphere), even among people who don’t care for stories about spacecraft, robots and malevolent AI (always malevolent, for some reason, a sign of some aspect of US thinking requiring psychoanalytic investigation).

The evidence for this tendency is all around us; every ‘Black Mirror’ episode, for example, is treated as if it were a prognostication from Nostradamus, the same tired tales of out-of-control AI, murderous machines and derelict space colonies cycled again and again, each time treated like a bold revelation of Things to Come.

Of course, there is real technological change; we have mobile, computerized radio phones with glass screens, and ICBMs, things our great-grandparents would have found miraculous for a little while, before the phone bills came due and the nuclear missiles, patiently waiting in their silos, were forgotten to aid sleep. It’s undeniable that we live in a world shaped by applied scientific inquiry and technological modification. These things have a social impact and fashion our political economy, driven by profit motivations. That’s the reality; the idea that there’s a feedback loop between science fiction and what someone will breathlessly shout to be ‘science fact!’ is not entirely bankrupt, but there’s a mustiness to it: it smells like mouldy bread, slathered in butter and presented as still fresh.

All of which brings me to an essay published in The Atlantic, “When Sci-Fi Anticipates Reality.” There’s a laziness to this piece which may not be the fault of its author, Lora Kelley; after all, the topic itself is weary.

Here’s an excerpt:

Reading about this news, [Meta adding legs to avatars] I told my editor—mostly as a joke—that the metaverse users interested in accessing alternative realities and stepping into other lives should consider simply reading a novel. I stand by that cranky opinion, but it also got me thinking about the fact that the metaverse actually owes a lot to the novel. The term metaverse was coined in a 1992 science-fiction novel titled Snow Crash. (The book also helped popularize the term avatar, to refer to digital selves.) And when you start to look for them, you can find links between science fiction and real-world tech all over.

https://www.theatlantic.com/newsletters/archive/2023/08/science-fiction-technology/675206/

The word “cranky” is used, and I admit to feeling a bit cranky myself after reading this attempt to link a product Meta is struggling to make viable (using actual computers requiring power and labor) with a term from a novel as old as someone with credit problems. There’s about as much of a connection between the ‘metaverse’ nightmarishly imagined in Snow Crash and what Meta is capable of as between a piece of paper upon which someone has written the word ‘laser’ and an actual laser.

A bit later in the piece, another favorite of the science-fiction-to-fact genre gets its time in the sun: ‘anticipation’ –

Ross Andersen, an Atlantic writer who covers science and technology, also told me he suspects that “a messy feedback loop” operates between sci-fi and real-world tech. Both technologists and writers who have come up with fresh ideas, he said, “might have simply been responding to the same preexisting human desires: to explore the deep ocean and outer space, or to connect with anyone on Earth instantaneously.” Citing examples such as Jules Verne’s novels and Isaac Asimov’s stories, Ross added that “whether or not science fiction influenced technology, it certainly anticipated a lot of it.”

https://www.theatlantic.com/newsletters/archive/2023/08/science-fiction-technology/675206/

Leaving aside the question of whether there is indeed a “preexisting human desire” to explore outer space (thus far, almost all of our examples of ‘exploration’ have been for exploitation, so one wonders if other desires were being met), there’s an ironic assertion that ‘fresh ideas’ are what’s on offer. Fresh ideas, like a warmed-over Second Life platform based, in name if not experienced reality, on a decades-old novel.

2023 is not the year of bold new visions, brought to life by intrepid scientists and technologists inspired by science fiction (it’s always warmed-over cyberpunk and Asimov, never Stanislaw Lem, I note). It’s the year in which the industry runs, like a rat in flames, from one thing to another: crypto, web3, the metaverse, AI, generative AI and chatbots for every task. This isn’t evidence of a ‘messy feedback loop’ but of an emptiness, a void. The bag of tricks is almost empty. Where will the new profits come from?

Perhaps there is a feedback loop after all, from stale idea to stale implementation, all wrapped in a marketing bow and sold as new when it’s as old as a Jules Verne novel. 

Escape from Silicon Valley (alternative visions of computation)

Several years ago, there was a mini-trend of soft documentaries depicting what would happen to the built environment if humans somehow disappeared from the Earth. How long, for example, would untended skyscrapers punch against the sky before they collapsed in spectacular, downward cascading showers of steel and glass onto abandoned streets? These are the sorts of questions posed in these films.

As I watched these soothing depictions of a quieter world, I sometimes imagined a massive orbital tombstone, perhaps launched by the final rocket engineers, onto which was etched: Wasted Potential.


While I type these words, billions of dollars, along with barely tabulated amounts of electrical power, water and human labor (barely tabulated because deliberately obscured), have been devoted to large language model (LLM) systems such as ChatGPT. If you follow the AI-critical space you’re familiar with the many problems produced by the use and promotion of these systems – including, on the hype end, the most recent gyration, a declaration of “existential risk” by a collection of tech luminaries (a category which, in a Venn diagram, overlaps with carnival barker). This use of mountains of resources to enhance the profit objectives of Microsoft, Amazon and Google, among other firms not occupying their Olympian perches, is wasted potential in frenetic action.

But what of alternative visions? They exist; all is not despair. The dangerous nonsense relentlessly spewing from the AI industry is overwhelming, and countering it is a full-time pursuit. But we can’t stay stuck, as if in amber, in a state of debunking and critique. There must be more. I recommend the DAIR Institute and Logic(s) magazine as starting points for exploring other ways of thinking about applied computation.

Ideologically, AI doomerism is fueled in large measure by dystopian pop sci-fi such as Terminator. You know the story, which is a tale as old as the age of digital computers: a malevolent supercomputer – Skynet (a name that sounds like a product) – launches, for some reason, a war on humanity, resulting in near extinction. The tech industry seems to love ripping dystopian yarns. Judging by the now almost completely forgotten metaverse push (a year ago, almost as distant as the Pleistocene in hype cycle time), inspired by the less than sunny sci-fi novel Snow Crash, we can even say that dystopian storylines are a part of business plans (what is the idea of sitting for hours wearing VR goggles if not darkly funny?).

There are also less terrible, even hopeful, fictional visions, presented via pop science fiction such as Star Trek’s Library Computer Access/Retrieval System – LCARS.


In the Star Trek: The Next Generation episode “Booby Trap,” the starship Enterprise is caught in a trap, composed of energy-sapping fields, that prevents it from using its most powerful mode of propulsion, warp drive. The ship’s chief engineer, Geordi La Forge, is given the urgent task of finding a solution. La Forge realizes that escaping this trap requires a reconfiguration, perhaps even a new understanding, of the ship’s propulsion system. That’s the plot, but most intriguing to me is the way La Forge goes about trying to find a solution.

The engineer uses the ship’s computer – the LCARS system – to retrieve and rapidly parse the text of research and engineering papers going back centuries. He interacts with the computer via a combination of audio and keyboard/monitor. Eventually, La Forge resorts to a synthetic, holographic mockup of the designer of the ship’s engines, Dr. Leah Brahms, raising all manner of ethical issues, but we needn’t bother with that plot element.

I’ve created a high level visualisation of how this fictional system is portrayed in the episode:

The ability to identify text via search, to summarize and read contents (with just enough contextual capability to be useful) and to output relevant results is rather close, conceptually, to the potential of language models. The difference between what we actually have – competing, discrete systems owned by corporations – and LCARS (besides the many orders of magnitude of greater sophistication in the fictional system) is that LCARS is presented as an integrated, holistic and scoped system. LCARS is designed as a library: it enables access to knowledge and retrieves results based on queried criteria.
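To make the conceptual parallel concrete, here is a minimal sketch of that retrieve-then-summarize loop in plain Python. Everything in it – the toy corpus, the term-overlap scoring, the first-sentence “summary” – is invented for illustration; it is not how LCARS is depicted working, nor how any production language model system operates.

```python
# Toy sketch of a retrieve-then-summarize loop: score a small corpus of
# "papers" against a query by term overlap, then return the first sentence
# of each top-scoring match as a crude extractive summary.

CORPUS = {
    "warp-field-theory": "Warp field geometry constrains propulsion output. Later sections cover nacelle design.",
    "dilithium-survey": "Dilithium crystal alignment affects reactor stability. Mining notes follow.",
    "hull-materials": "Tritanium alloys resist structural stress. Fabrication methods are listed below.",
}

def score(query: str, text: str) -> int:
    """Count distinct query terms that appear in the document (case-insensitive)."""
    terms = {t.rstrip(".,") for t in query.lower().split()}
    words = {w.rstrip(".,") for w in text.lower().split()}
    return len(terms & words)

def retrieve_and_summarize(query: str, top_n: int = 2) -> list[tuple[str, str]]:
    """Return (doc_id, first sentence) for the top_n best-matching documents."""
    ranked = sorted(CORPUS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    results = []
    for doc_id, text in ranked[:top_n]:
        if score(query, text) == 0:
            continue  # ignore documents with no overlap at all
        first_sentence = text.split(". ")[0] + "."
        results.append((doc_id, first_sentence))
    return results
```

The point of the sketch is the shape of the loop, not the mechanics: a real system would use semantic rather than literal matching and abstractive rather than extractive summaries, but the query-retrieve-condense cycle is the same one the episode dramatizes.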

There is a potential, latent within language models and hybrid systems – indeed, within almost the entire menagerie of machine learning methods – to create a unified computational model for a universally useful platform. This potential is being wasted, indeed suppressed, as oceans of capital, talent and hardware are poured into privately owned things such as ChatGPT. There are hints of this potential found within corporate spaces; Meta’s LLaMA, which leaked online, shows one avenue. There are surely others.


Among a dizzying collection of falsehoods, the tech industry’s greatest lie is that it is building the future. Or perhaps I should sharpen my description: the industry may indeed be building the future but, contrary to its claims, it is not a future with human needs centered. It is possible, however, to imagine and build a different computation, and we needn’t turn to Silicon Valley’s well-thumbed library of dystopian novels to find it. Science fiction such as Star Trek (I’m sure there are other examples) provides more productive visions.

Star Trek’s Concept of AI is Better Than Ours

Introduction

The fictional world of Star Trek, which depicts fanciful technologies such as warp drive, replicators and transporters, presents a surprisingly more realistic view of the potential uses for, and evolution of, advanced computation than the press releases of Google and its peers, and the supportively breathless media accounts that follow them.

I say more realistic, because, with notable exceptions (typically used to prove a larger point or create dramatic tension), computers in Star Trek are understood by in-world characters to be mindless, despite exhibiting capabilities which, by our standards, would be considered astounding achievements and irrefutable signs of intelligence and intent.

Artificial Intelligence, an aspirational term that does not describe any existing technology or collection of technologies, is, as a business endeavor, riddled with hype. Consider the article ‘A robot wrote this entire article. Are you scared yet, human?’, published in the Guardian on 8 September 2020. The article, assembled by cherry-picking output from GPT-3, was, at the time of its publication, promoted as evidence that GPT-3 was a significant step up the ladder towards what’s sometimes called Artificial General Intelligence, or AGI. After pushback and critique, the Guardian’s editors added a bit more context, admitting that an AI did not, in fact, write the article: “We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.” (The bit of face-saving at the end is hilarious.)

This hype requires, indeed, demands, a variety of counterpoint arguments. Hopefully this essay and the ones to follow will make a contribution.

In a series of three posts, I’ll present three in-show situations (from both the Original Series and The Next Generation):

  • The Original Series episode “The Ultimate Computer”
  • The Next Generation episode “The Measure of a Man”
  • The Next Generation episode “Booby Trap”

I’ll use these episodes to illustrate Star Trek’s thematic treatment of computer power – as a tool, not to be confused with the complexity and nuance of living minds. Furthermore, I’ll argue that Star Trek posits that the power of minds comes, perhaps paradoxically, from incompleteness (about which, more later).

This may seem trivial or of only academic interest. My argument is that the presentation of computational systems as possessing intelligence is a propaganda project, intended to demobilize workers and obscure the true sources of harm. Each of us who knows better has a responsibility to shine a light on this propaganda in a variety of ways.

This is a part of that effort.

The Ultimate Computer

Dr. Daystrom explains M5

“The Ultimate Computer” is the twenty-fourth episode of the second season of the American science fiction television series Star Trek. Written by D.C. Fontana (based on a story by Laurence N. Wolfe) and directed by John Meredyth Lucas, it was first broadcast on March 8, 1968.


In “The Ultimate Computer” the viewer is presented with a clear line of separation between the starship Enterprise’s sophisticated library computer system (known as LCARS in the Next Generation series) – which possesses interactive voice response, large language and text synthesis capacities and extensive command and control capabilities – and a thinking machine, the M5, created by Dr. Richard Daystrom (the scientist who designed standard starship computer systems). The M5, patterned after Daystrom’s mind, is able to reason and indeed exhibits the ability to think in basic ethical terms during a critical scene, when it’s forced to confront the fact that its actions resulted in death. Despite these remarkable capabilities, the machine lacks nuance and could be said to operate on the level of an extraordinarily well-informed child.

For me, however, the remarkable thing about this episode is that in-world characters such as Spock, Kirk and McCoy collectively express astonishment that the machine is able to think at all.

In their experience, there’s a common understanding of what thinking beings do and what sophisticated computers are capable of. There is, in other words, no confusion between the act of rapid, statistical pattern matching, text parsing and data synthesis via sensors and what they, as people, do from moment to moment.

Consider this scene, when Kirk and Spock debate Dr. Daystrom about just what M5 is:

Spock (to Daystrom, while examining the M5): I am not familiar with these instruments, Doctor. You are using an entirely new type of control mechanism. However, it appears to me this unit is drawing more power than before.

Daystrom: Quite right! As the unit is called upon to do more work, it pulls more power to enable it to do what is required of it just as a human body draws more energy to run than to stand still.

Spock: Doctor, this unit is not a human body. A computer can process information, but only the information that is fed into it.

Kirk (to Daystrom): Granted, it can work a thousand…a million times faster than the human brain but it can’t make a value judgement, it hasn’t intuition, it can’t think.

Daystrom (smiling like a Cheshire Cat – then, waxing poetic) : Can’t you understand? The Multitronic unit is a revolution in computer science. I designed the duotronic elements you use in your ship right now and I know they are as archaic as dinosaurs compared to the M5…a whole…new approach!

[…]

Later, in a tense scene, after M5 has fired weapons on unprotected starships (misinterpreting an exercise for real combat), wounding and killing many, Daystrom tries to reason with it to stop:

Daystrom reasons with M5

Daystrom (to M5 via audio interface): M5 tie-in

M5 (to Daystrom, via ship audio): M5

Daystrom (stressed, trying to calm his voice): This is…this is Daystrom

M5: Daystrom, acknowledged

Daystrom: M5, do you know me?

M5: Daystrom, Richard, originator of comtronic/duotronic systems born…

Daystrom: Stop. M5, your attack on the starships is wrong. You must break it off.

McCoy (to Kirk): I don’t like the sound of him, Jim.

Kirk: You’d better pray the M5 listens to the sound of him.

M5 (still responding to Daystrom): Programming includes protection against attack. Enemy vessels must be neutralized

Daystrom: But these are not enemy vessels! These are federation starships. You’re killing…we’re killing…murdering…human beings, beings of our own kind. You were not…created for that purpose. You’re my greatest creation. The ‘unit to save men’ – you must not destroy men.

M5: This unit must survive.

Daystrom: Survive! Yes! Protect yourself! But, not murder. You must not die, men must not die. To kill, is a breaking of civil and moral laws we’ve lived by for thousands of years. You’ve murdered hundreds of people…we’ve murdered…how can we repay that?

M5: They attacked this unit…

Kirk (whispering to Spock while M5 is still replying to Daystrom): The M5 is not responding to him, it’s talking to him.

Spock: I am most impressed with the technology Captain. Dr. Daystrom has created a mirror image of his own mind.

“It’s talking to him,” Kirk observes. For him, and everyone else in this world, a clear distinction is made between programmatic response and actual conversation. This profound difference is purposely obscured by the current discourse, which encourages us to view audio response technologies such as Amazon Alexa, Siri and GPT-3 as being capable of conversation.

M5 in motion

In the end, M5, built to create a new class of autonomous computers, intended to replace crewed space vessels, is shown to be deeply inadequate for the task. 


This episode establishes what I’ll describe as the pop sci-fi epistemological framework of Star Trek on the question of what Joseph Weizenbaum defined as “Computer Power and Human Reason” (the difference between judgement and calculation). In Star Trek, computers, as a rule, are unable to reason and incapable of judgement. Outliers and exceptions, such as M5, illustrate this principle by their very existence as outliers (which can’t be productionized).

In the next post, I’ll explore how the question of computer power and human reason is addressed in the Next Generation episode, “The Measure of a Man.”