Techno-Skepticism: A Tactical Skill

Techno-skepticism is a vital and necessary response to a world awash in self-promoting boosterism and the capital-serving ideologies of techno-optimism and techno-determinism.

To define terms: techno-optimism is the belief that any proposed technology is both possible and good. Optimists look to past examples of things that were once impossible and became possible – such as machine flight – and infer that this tendency is universal.

Techno-determinism (which can be considered a species of determinism) builds on techno-optimism’s ideological framework by asserting not just possibility, but inevitability.

For example, a techno-optimist views a development such as ‘robot’ kitchens as being both positive and possible as presented; determinists assert there’s nothing to stop such a development – it’s inevitable and beyond resistance, like gravity.

Robotic Chef Marketing Video

Skepticism, correctly practiced, isn’t the denial of technological change or of the reality of, or potential for, benefits from such change. Skepticism is remembering to ask three questions:

  • How does this work? A technical inspection
  • Who benefits, and why are they promoting this? A cui bono analysis
  • Is it possible as described? A feasibility interrogation

Consider, for example, Amazon’s failed drone delivery service, which Cory Doctorow analyzed here. As Doctorow describes, the idea was inexplicably taken seriously:

When Amazon announced “Prime Air,” a forthcoming drone delivery service, in 2016, there was a curious willingness on the part of the press – even the tech press – to take the promise of a sky full of delivery drones at face value.

This despite the obvious problems with such a scheme: the consequences of midair collisions, short battery life, overhead congestion, regulatory hurdles and more. Also despite the fact that delivery drones, like jetpacks, are really only practical as sfx in an sf movie.

At the time this service was announced, I read detailed analyses and excited Tweet threads about the supposed meaning of a bold new age of drone delivery. I noticed, however, that simple questions regarding feasibility were rarely asked – optimism and determinism (with a good amount of self-interested boosterism in the mix) prevented a skeptical response.

When you read about a technological system, such as delivery via drone, remembering to ask questions about function (the how), benefit (who’s promoting this and why) and feasibility (can this be done at all, or as the promoters describe?) is a reliable way to avoid being fooled and knocked from delusion to delusion.

The Metaverse: A Brief Inquiry

Facebook’s plan to become a ‘Metaverse company’ (and indeed, to completely rebrand the company around this concept) has attracted a lot of comment in tech media and social media spaces.

This is unsurprising, both because the idea seems futuristic (being based on a science fiction confection introduced in Neal Stephenson’s dystopian 1992 novel ‘Snow Crash’) and because the tech media space reports anything announced by a so-called FAANG company as if it’s marvelous and inevitable.

Let’s apply a bit of realness and use a materialist analysis to interrogate the idea of the ‘Metaverse’ (this is similar in theme to my inquiry into Boston Dynamics).


Light Detective Work and Logical Inference

Tech companies create an air of secrecy around projects such as FB’s Metaverse effort for competitive reasons but also, I’d argue, to obfuscate what is often merely the assembly of already existing elements into platforms. Mariana Mazzucato analyzes this tendency, using the iPhone as her example, in her book ‘The Entrepreneurial State’.

Here’s how the iPhone’s elements are dissected in Mazzucato’s book:

A similar method can be applied to an analysis of FB’s Metaverse.

The Oculus platform and Facebook’s Ray-Ban Stories glasses provide sufficient information for some light detective work. No matter how secretive a company tries to be, its job postings, properly interpreted and supported by experience, provide a rich source of evidence about what an organization is doing.

Working on the assumption that the Metaverse will primarily consist of repurposed elements (and the fact everything depends on, and leads to data centers), I examined Oculus job postings and dissected their contents.
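The sort of dissection involved can be sketched in a few lines of Python. To be clear, the posting titles and keyword taxonomy below are invented for illustration; they are not actual Oculus listings:

```python
from collections import Counter

# Hypothetical job posting titles (illustrative only, not real listings)
postings = [
    "Optical Engineer, Display Systems",
    "Haptics Research Scientist",
    "Computer Vision Engineer, Tracking",
    "Audio Software Engineer",
    "Data Center Network Engineer",
]

# A made-up theme taxonomy to tally against
themes = ["optic", "haptic", "tracking", "display",
          "vision", "audio", "network", "data center"]

counts = Counter()
for title in postings:
    lowered = title.lower()
    for theme in themes:
        if theme in lowered:
            counts[theme] += 1  # a posting can match several themes

for theme, n in counts.most_common():
    print(theme, n)
```

Even a crude tally like this, run over a few hundred real postings, makes an organization’s priorities visible.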

The main technical themes were:

  • Optics
  • Haptics
  • Tracking
  • Display
  • Computer vision
  • User experience
  • Audio
  • Perceptual psychology
  • Research science
  • Mechanical engineering
  • Electrical engineering
  • Software engineering
  • Networking
  • Server operations

Let’s visualize this:

Now let’s place these elements in a context:

Of course, it’s impossible to know the precise details of FB’s system topology without a reference architecture, but experience leads me to think this is a solid approximation (and the data center dependency is an absolute certainty, no matter what else may be going on).

What can we infer from this?


How Sustainable and Realizable Is the Metaverse Concept?

Although the tech press treats every industry pronouncement as an irrefutable prediction, there’s precedent for lots of smoke but little to no fire (recall Amazon’s supposedly brilliant drone delivery service). According to some estimates, Facebook has over 2 billion active users. An effort to move all, or even a substantial portion, of this user base to a platform that generates a virtual reality environment for, and ingests audio/visual data from, hundreds of millions of people means a massive investment in physical infrastructure: computers, network equipment, cooling systems and the real estate to host this and other relevant gear. (To get a sense of the industrial and extractive elements of what’s called ‘the cloud’, I suggest Nathan Ensmenger’s essay ‘The Cloud is a Factory’.)

It also means an increase in demand for data transfer over the Internet. It’s easy to project system crashes, bad connections and other problems caused by scalability challenges. It’s fair to ask whether, despite the hype, any of this is actually possible as described and, if so, how reliable it will be.
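A back-of-envelope calculation makes the scale concrete. The figures below are purely illustrative assumptions on my part (the concurrent user count and per-session bitrate are invented, not published numbers):

```python
# Rough aggregate-bandwidth estimate for a mass-market VR platform.
# Both inputs are assumptions for illustration, not measured values.
concurrent_users = 100_000_000   # assumed simultaneous VR users
per_user_mbps = 50               # assumed per-session bitrate (Mbps),
                                 # covering bidirectional A/V and telemetry

aggregate_mbps = concurrent_users * per_user_mbps
aggregate_tbps = aggregate_mbps / 1_000_000  # Mbps -> Tbps

print(f"Aggregate demand: {aggregate_tbps:,.0f} Tbps")
```

Under these assumptions the aggregate runs to thousands of terabits per second, sustained, which gives a feel for why the infrastructure question matters.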

Conclusion

There’s abundant evidence that Facebook (or whatever it’ll call itself in a week) is a problem. The company’s role in a variety of destructive activities is well documented. For that reason alone, the ‘Metaverse’ push is immediately suspect. We can also conclude, however, that it might not be achievable as advertised and may turn out to be, like so much else that emerges from Silicon Valley, an elaborate grift dressed up as a bold vision of the future.

We should recall that in the novel that gave the project its name, the ‘Metaverse’ is the last refuge for people living in a collapsed world. In this case, we might get the collapse without even the warped comforts a virtual world is supposed to offer.

UPDATE (29 Oct)

On 28 October, Facebook announced it was rebranding as ‘Meta’ to reflect its focus on being a ‘metaverse’ company.

The keynote video presented a vision (such as it is) for what the ‘metaverse’ is supposed to be…eventually. Zuckerberg walks within a fully virtual environment, uses a virtual pop-up menu and zooms (virtually) into an environment creatively named “Space Room”.

Rebranding the company formerly known as Facebook as Meta is surely intended, in part, to breathe new life into a moribund platform and to distract attention from the many negative associations Facebook has earned. Even so, we can predict that within the company there will be efforts to make as much of this notion real as possible – despite the fact that promoted elements (such as an environment you can walk through as if it were real) are thoroughly impossible and likely to remain so for quite some time; indeed, some would require a multitude of breakthroughs in foundational sciences such as physics.

This means that the situation for Meta workers will become more difficult as they’re pushed to do things that simply cannot be achieved.


UPDATE (16 DEC)

On 14 December, Intel’s senior vice president and general manager of the Accelerated Computing Systems and Graphics Group, Raja Koduri, published this editorial, which supports my assertion that the ‘Metaverse’ (it pains me to use that term, which describes nothing and is made of hype) will require orders of magnitude more computing capacity than is currently available.

Here’s a key quote:

Consider what is required to put two individuals in a social setting in an entirely virtual environment: convincing and detailed avatars with realistic clothing, hair and skin tones – all rendered in real time and based on sensor data capturing real world 3D objects, gestures, audio and much more; data transfer at super high bandwidths and extremely low latencies; and a persistent model of the environment, which may contain both real and simulated elements. Now, imagine solving this problem at scale – for hundreds of millions of users simultaneously – and you will quickly realize that our computing, storage and networking infrastructure today is simply not enough to enable this vision.

We need several orders of magnitude more powerful computing capability, accessible at much lower latencies across a multitude of device form factors. To enable these capabilities at scale, the entire plumbing of the internet will need major upgrades. Intel’s building blocks for metaverses can be summarized into three layers and we have been hard at work in several critical areas.

Intel: https://www.intel.com/content/www/us/en/newsroom/opinion/powering-metaverse.html#gs.iywlla

Of course, this can be interpreted as self-serving for Intel, which stands to benefit (to say the least) from a massive investment in new computing gear. That doesn’t negate the insight, which is based on hard material reality.

What’s Behind the Explosion of AI?

Synopsis

The spread of AI (algorithmic) harms, such as automated recidivism scoring and benefits determination systems, has been accelerated by the cloud era, which has made the proliferation of algorithmic automation possible; indeed, the companies providing cloud services promote their role as accelerators.

Background 

We are witnessing a significant change in the way computing power is used and engineered by public and private organizations. The material basis of this change is the availability of utility services such as on-demand compute, storage and databases, offered primarily by Amazon (with its Amazon Web Services platform), Microsoft (Azure) and Google (Google Cloud Platform). There are other platforms, such as Alibaba, based in the PRC, but those three Silicon Valley giants dominate the space. This has come to be known as ‘public cloud’ to distinguish it as a category from private data centers. The term is misleading; ‘public cloud’ is a privately owned service, sold to customers via the public Internet.

‘Public cloud’ services make it possible for government agencies and businesses to reduce – or eliminate – the work of hosting and maintaining their own computational infrastructure within expensive data centers. Although the advantages seem obvious (for example, reduced overhead and the ability to focus on the use of computing power for business and government goals rather than the costly, complex, time-consuming and often error-prone work of systems engineering), there are also serious new challenges, which are having an impact on the US, and global, political economy.

Impact

The rise of unregulated ‘public cloud’ has made the broad and rapid spread of algorithmic harms possible – via, for example, platform machine learning services such as Amazon SageMaker and Microsoft Cognitive Services.

The relationship can be visualized:

There’s a potent combination of: 

  • The lack of regulation 
  • The lowered barrier to entry made possible by ‘public cloud’ algorithmic utility services 
  • The marketing value (supported by AI hype) of creating and promoting a product and/or service as based on ‘AI’ (as labor reducing, or even eliminating, automation) 

This combination is producing an explosion of algorithmic platforms which are having a direct, negative impact on the lives of millions – notably the poor and people of color but rapidly spreading to all sectors of the population. My position is that this expansion is materially supported by cloud platforms and a lack of public oversight. 

Pointillistic But Useful: A Machine Learning Object Lesson

I devote a lot of time to understanding, critiquing and criticizing the AI Industrial Complex. Although much – perhaps most – of this sector’s output is absurd or dangerous (AI that claims to read emotions and automated benefits fraud determination being two such examples), there are uses that are neither, and we can learn from them.

This post briefly reviews one such case.

During dinner with friends a few weeks ago, the topic of AI came up. No, it wasn’t shoehorned into an otherwise tech-free situation; one of the guests works with large-scale engineering systems and had some intriguing things to say about solid, real world, non-harmful uses for algorithmic ‘learning’ methods.

Specifically, he mentioned Siemens’ use of machine vision to automate the inspection of wind turbine blades via a platform called Hermes. This was a project he was significantly involved in and is justifiably proud of. It provides an object lesson in the types of applications that can benefit people, rather than making life more difficult through algorithms.

You can view a (fluffy, but still informative) video about the system below:

Hermes System Promotional Video

A Productive Use of Machine Learning

The solution Siemens employed has several features which make it an ideal object lesson:

1.) It applies a ‘learning’ algorithm to a bounded problem

Siemens engineers know what a safely operating blade looks like; this provides a baseline against which variances can be found.
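The baseline-and-variance idea can be sketched minimally. To be clear, this is not Siemens’ actual Hermes implementation; the inspection scores and the threshold below are invented for illustration:

```python
import statistics

# Invented inspection scores for known-healthy blades; in a real
# system the baseline would come from labeled imagery, not a list.
baseline_scores = [0.98, 0.97, 0.99, 0.96, 0.98]
mean = statistics.mean(baseline_scores)
stdev = statistics.stdev(baseline_scores)

def flag_variance(score, k=3.0):
    """Flag a score that deviates more than k standard deviations
    from the healthy baseline (k chosen arbitrarily here)."""
    return abs(score - mean) > k * stdev

print(flag_variance(0.97))  # within the baseline band
print(flag_variance(0.60))  # large deviation from baseline
```

The point of the sketch is the structure: a known-good reference plus a deviation test, which is only possible because the problem is bounded.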

2.) It applies algorithms to a bounded problem area that generates a stream of dynamic, inbound data

The type of problem is within the narrow limits of what an algorithmic system can reasonably and safely handle, and it benefits from a robust stream of training data that can improve performance.

3.) It’s modest in its goal but nonetheless important

Blade inspection is a critical task, and it is time-consuming and tedious. Using automation to increase accuracy and offload repeatable tasks is a perfect scenario.


How Is This Different from AI Hype?

AI hype is used to convince customers – and society as a whole – that algorithmic systems match, or exceed, the capabilities of humans and other animals. Attempts to proctor students via machine vision to flag cheating, predict emotions or fully automate driving are examples of overreach (and of the use of ‘AI’ as a behavioral control tool). I use ‘overreach’ because current systems are, to quote Gary Marcus in his paper ‘The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence’, “pointillistic” – often quite good in narrow or ‘bounded’ situations (such as playing chess) but brittle and untrustworthy when applied to completely unbounded, real world circumstances such as driving, which is a series of ‘edge cases’.

Visualization of Marcus’ Critique of Current AI Systems

The Siemens example provides some of the building blocks of a solid doctrine for evaluating ‘AI’ systems (and claims about those systems), and a lesson that can be transferred to non-corporate uses.