Leaving the Lyceum

Can large language models – known by the acronym LLM – reason? 

This is a hotly debated topic in so-called ‘tech’ circles and in the academic and media groups that orbit that world like one of Jupiter’s radiation-blasted moons. I dropped the phrase ‘can large language models reason’ into Google (that rusting machine) and got this result:

This is only a small sample. According to Google, there are “About 352.000.000 results.” We can safely conclude from this, and from the back-and-forth that endlessly repeats in Twitter groups that discuss ‘AI’, that there is a lot of interest in arguing the matter, pro and con. Is this debate, if indeed it can be called that, the least bit important? What is at stake?

***

According to ‘AI’ industry enthusiasts, nearly everything is at stake; a bold new world of thinking machines is upon us. What could be more important? To answer this question, let’s do another Google search, this time for the phrase ‘Project Nimbus’:

The first result returned was a Wikipedia article, which starts with this:

Project Nimbus (Hebrew: פרויקט נימבוס) is a cloud computing project of the Israeli government and its military. The Israeli Finance Ministry announced in April 2021, that the contract is to provide “the government, the defense establishment, and others with an all-encompassing cloud solution.” Under the contract, the companies will establish local cloud sites that will “keep information within Israel’s borders under strict security guidelines.”

Wikipedia: https://en.wikipedia.org/wiki/Project_Nimbus

What sorts of things does Israel do with the system described above? We don’t have precise details, but there are clues, such as this excerpt from the +972 Magazine article ‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza –

According to the [+972 Magazine] investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”

+972: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

***

History and legend tell us that in ancient Athens there was a place called the Lyceum, founded by Aristotle, where the techniques of the Peripatetic school were practiced. Peripatetic means, more or less, ‘walking about’, which reflects the method: philosophers and students, mingling freely, discussing ideas. There are centuries of accumulated hagiography about this school. No doubt it was nice for those not subject to the slave system of ancient Greece.

Similarly, debates about whether or not LLMs can reason are nice for those of us not subject to Hellfire missiles fired by Apache helicopters sent on their errands by targeting algorithms. But I am aware of the pain of people who are subject to those missiles. I can’t unsee the death facilitated by computation.

This is why I have to leave the debating square, the social-media-crafted lyceum. Do large language models reason? No. But even spending time debating the question offends me now. A more pressing question is what the people building the systems killing our fellow human beings are thinking. What is their reasoning?

For My Sins, The Gods Made Me A Technology Consultant

Cutting to the chase: if your activist organization needs technical advice, I’m offering my expertise, built over decades and still in play. The Internet is enemy territory, so I won’t post an email address in the wild, so to speak, for every poorly adjusted fool to use, but if you follow me on Twitter, Bluesky or Mastodon, reach out, or direct your friends and colleagues to this post.

What’s being offered?

In a previous essay, I thought aloud – worked through, perhaps we could say – how an activist organization that lacks the deep pockets of NGOs (and certainly of a multinational), and that wants to minimize the vulnerabilities and ethical issues that arise from using the usual corporate platforms (hyperscalers such as AWS and Azure, and ‘productivity’ platforms like Microsoft 365), might navigate available options and create a method for the effective use of computation.

This received some notice, but I think the plot was lost; the point wasn’t Yet Another Debate but an offer to contribute.

This is a variation, I’m imagining, of what I’ve done for massive corporations for many years to pay the bills, but tailored to the needs and requirements of activist organizations.

That’s enough preamble, let’s discuss specifics.

Consultation

To corporate technology departments, consultation is marketed as a way to achieve a goal (let’s say ‘cloud modernization’, a popular buzz term before ‘AI’ was ushered onstage half-dressed and without a script) using the skills of specialists. There are other forms of consulting, such as the management advisory work of McKinsey, a firm so sinister that Lucifer himself might think twice about hiring it. Technical consultation, though as full of politics and prejudices as any other aspect of this life, is usually centered on getting something done.

The consultation I’m offering (I think of it as an open statement of work, to use another term of art from the field) is to help your organization sort through options and, hopefully, make the best possible technology choices in a world of artificially constrained possibilities (certainly fewer than existed a decade or so ago). Do you have questions about email systems, collaboration tools, databases, storage, the ins and outs of the so-called ‘cloud’, and how to coherently knit this and more together? I’m your guy; maybe. Let’s get into the maybe part next.

Who Will I Help?

Sure, I moved to Europe, drink scotch, wear cool boots and smoke the occasional cigar like a Bond villain, but I’m from Philadelphia and, like most of my city kin, believe in speaking directly and plainly; this is why the language and point of view of Film Noir appeal to me. I’m not interested in helping left-media types who bloviate on YouTube (a plague of opinions) or groups of leftoids who argue about obscure aspects of the 18th Brumaire. Dante, were he resurrected, would include all this in a level of Hades.

I’m making myself available to publishers and organizations that are focused on, and peopled by, marginalized and indigenous folk. We are at war, and you need a tech-savvy wartime consigliere.

Closer

Well, that’s it. I’m here, the door is open. Reach out via the means I mentioned above if you have the need and fit the profile. Of course, I’ll share email and Discord server details with any serious takers. Ciao.

Kinetic Harm

I write about the information technology industry.

I’ve written about other topics, such as the copaganda of Young Turks’ host Ana Kasparian and the work of Žižek, which, to quote John Bellamy Foster, has become “a carnival of irrationalism.” In the main, however, my topics are the technology industry generally and its so-called ‘AI’ sub-category specifically. This isn’t random; I’ve worked in this industry for decades and know its dark heart. Honest tech journalism (rather than the boosterism we mostly get) and scholarly examinations are important, but who better to tell a war story than someone in the trenches?

Because I focus on harm and not the fantasy of progress, this isn’t a pursuit that brings wealth or notoriety. There have been a few podcast appearances (a type of sub-micro celebrity, as fleeting as a lightning flash) and opportunities to be published in respected magazines. That’s nice, as far as it goes. It’s important, however, to see clearly and be honest with yourself; it’s a Sisyphean task with few rewards, and motivation must be found within and from a community of like-minded people.

Originally, my motivation was to pierce the curtain. If you’ve seen the 1939 MGM film ‘The Wizard of Oz’, you know my meaning: there’s a moment when the supposed wizard, granter of dreams, is revealed to be a sweaty, nervous man, hidden by a curtain, frantically pulling levers and spinning dials to keep the machinery of delusion functioning. This was my guiding metaphor for the tech industry, which claims its products defy the limits of material reality and surpass human thought.

As you learn more, your understanding should change. Parting the curtain – or debunking – was an acceptable way to start, but it’s insufficient; the promotion of so-called ‘AI’ is producing real-world harms, from automated recidivism decision systems to facial-recognition-based arrests and innumerable other intrusions. A technology sold as bringing about a bright future is being deployed to limit possibilities. Digital computation began as a means of enacting a command-and-control methodology on the world for various purposes (military applications being among the first) and is, in our age, reaching its apotheosis.

Kinetic Harm

Reporting on these harms, as deadly as they often are, fails to tell the entire story of computation in this era of growing instability. The same technologies and methods used to, for example, automate actuarial decision making in the insurance industry can also be used for other, more directly violent aims. The US military, which is known for applying euphemisms to terrible things like a thin coat of paint over rust, calls warfare – that is, killing – ‘kinetic military action’. We can call forms of applied computation deliberately intended to produce death and destruction ‘kinetic harm’.

Consider the IDF’s Habsora system, described in the +972 Magazine article ‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza –

In one case discussed by the sources, the Israeli military command knowingly approved the killing of hundreds of Palestinian civilians in an attempt to assassinate a single top Hamas military commander. “The numbers increased from dozens of civilian deaths [permitted] as collateral damage as part of an attack on a senior official in previous operations, to hundreds of civilian deaths as collateral damage,” said one source.

“Nothing happens by accident,” said another source. “When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed — that it was a price worth paying in order to hit [another] target. We are not Hamas. These are not random rockets. Everything is intentional. We know exactly how much collateral damage there is in every home.”

According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”

+972 Magazine – https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

The popular phrase ‘artificial intelligence’ – a marketing term, really, since no such thing exists – is used to describe the Habsora system. This creates an exotic distance, as if a glowing black cube floats in space, deciding who dies and how many deaths will occur.

The reality is more mundane, more familiar, even banal; the components of this machine are constantly in use around us. Here is a graphic that shows some of the likely elements:

As we use our phones, register our locations, fill in online forms for business and government services, interact on social media and do so many other things, we unknowingly create threads and weave patterns, stored in databases. The same type of system that enables a credit card fraud detection algorithm to block your card if in-person store transactions are registered in two geographically distant locations on the same day can be used to build a map of your activities and relations to find and kill you and those you know and love. This is what the IDF has done with Habsora. The distance separating the intrusive methods of Meta, Google and fellow travelers from this killing machine is not as great as it seems.
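To make the fraud-detection comparison concrete, here is a minimal sketch of the kind of rule such a system applies: flag a card when two in-person transactions imply impossible travel between them. Everything in it – the names, the 800 km/h threshold, the toy data – is a hypothetical illustration of the general technique, not any vendor’s actual system.

# Minimal sketch: flag a card when two in-person transactions imply
# impossible travel. All names and thresholds are hypothetical.
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Transaction:
    card_id: str
    timestamp: datetime
    lat: float   # latitude of the point-of-sale terminal
    lon: float   # longitude of the point-of-sale terminal

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def is_implausible(prev: Transaction, curr: Transaction, max_speed_kmh: float = 800.0) -> bool:
    """True if reaching the second terminal from the first would require
    travelling faster than max_speed_kmh (roughly airliner speed)."""
    hours = abs((curr.timestamp - prev.timestamp).total_seconds()) / 3600
    distance_km = haversine_km(prev.lat, prev.lon, curr.lat, curr.lon)
    if hours == 0:
        return distance_km > 0
    return distance_km / hours > max_speed_kmh

# Usage: compare each new in-person transaction to the previous one on the card.
a = Transaction("card-123", datetime(2024, 2, 23, 9, 0), 39.95, -75.16)  # Philadelphia
b = Transaction("card-123", datetime(2024, 2, 23, 11, 0), 48.85, 2.35)   # Paris, two hours later
print(is_implausible(a, b))  # True: the implied travel speed is impossible

Swap the point-of-sale terminals for phone location pings and the card numbers for people, and the same joining of timestamps and coordinates yields the map of movements and relations described above.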

Before being driven from their homes by the IDF – homes that were destroyed under the most intensive bombing campaign of this and perhaps even the previous, hyper-violent century – Palestinians in Gaza were subject to a program of surveillance and control which put them completely at the mercy of the Israeli government. All data about their movements and activities passed through electronic infrastructure owned and controlled by Israeli entities. This infrastructure, and the data processing and analysis built upon it, have been assembled into a factory whose product is death – whether targeted or en masse.

The Thin Curtain

Surveillance. Control. Punishment. This is what the age of digital computation has brought on an unprecedented scale. For those of us who live in places where the bombs don’t yet fall, there are things like the following, excerpted from the Forbes article (Feb 23, 2024) ‘Dozens Of KFC, Taco Bell And Dairy Queen Franchises Are Using AI To Track Workers’ –

Like many restaurant owners, Andrew Valkanoff hands out bonuses to employees who’ve done a good job. But at five of his Dairy Queen franchises across North Carolina, those bonuses are determined by AI.

The AI system, called Riley, collects streams of video and audio data to assess workers’ performance, and then assigns bonuses to those who are able to sell more. Valkanoff installed the system, which is developed by Rochester-based surveillance company Hoptix, less than a year ago with the hopes that it would help increase sales at a time when margins were shrinking and food and labor costs were skyrocketing.

Forbes – https://www.forbes.com/sites/rashishrivastava/2024/02/23/dozens-of-kfc-taco-bell-and-dairy-queen-franchises-are-using-ai-to-track-workers/

Inside the zone of comparative safety – but of deprivation for many and control imposed on all – there are systems like the IDF’s Habsora in service, employing the same computational techniques, which, instead of directing sniper-rifle-armed quadcopters and F-16s on deadly errands, deprive people of jobs, medical care and freedom. Just as a rocket’s payload can be changed from peaceful to fatal ends, the intended outcomes of such systems can be altered to fit the goals of the states that employ them.

The Shadow

As I write this, approximately 1.4 million Palestinians have been violently pushed to Rafah, a city in the southern Gaza Strip. There, they are facing starvation and incomprehensible cruelty. Meanwhile, southwest of the ruins of Gaza City, in what has come to be known as the Al Nabulsi massacre, over one hundred Palestinians were killed by IDF fire while desperately trying to get flour. These horrors were accelerated by the use of computationally driven killing systems. In the wake of Habsora’s use in what journalist Antony Loewenstein calls the Palestine Laboratory, we should expect similar techniques to be used elsewhere and to become a standard part of the arsenal of states (yes, even those we call democratic) in their efforts to impose their will on an ever more restless world that struggles for freedom.


References

Artificial intelligence and insurance, part 1: AI’s impact on the insurance value chain

https://www.milliman.com/en/insight/critical-point-50-artificial-intelligence-insurance-value-chain

Kinetic Military Action

https://en.wikipedia.org/wiki/Kinetic_military_action

‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza

https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza

Report: Israel’s Gaza Bombing Campaign is the Most Destructive of this Century

https://english.aawsat.com/features/4760791-report-israels-gaza-bombing-campaign-most-destructive-century

‘Massacre’: Dozens killed by Israeli fire in Gaza while collecting food aid

https://www.aljazeera.com/news/2024/2/29/dozens-killed-injured-by-israeli-fire-in-gaza-while-collecting-food-aid

Dozens Of KFC, Taco Bell And Dairy Queen Franchises Are Using AI To Track Workers

https://www.forbes.com/sites/rashishrivastava/2024/02/23/dozens-of-kfc-taco-bell-and-dairy-queen-franchises-are-using-ai-to-track-workers

The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World

Examples of Other Algorithm-Directed Targeting Systems

Project Maven

https://www.engadget.com/the-pentagon-used-project-maven-developed-ai-to-identify-air-strike-targets-103940709.html

Generative AI for Defence (marketing material from C3)

https://c3.ai/generative-ai-for-defense