Leaving the Lyceum

Can large language models – known by the acronym LLM – reason? 

This is a hotly debated topic in so-called ‘tech’ circles and in the academic and media groups that orbit that world like one of Jupiter’s radiation-blasted moons. I dropped the phrase ‘can large language models reason’ into Google (that rusting machine) and got this result:

This is only a small sample. According to Google, there are “About 352,000,000 results.” We can safely conclude from this, and from the back and forth that endlessly repeats on Twitter in groups that discuss ‘AI’, that there is a lot of interest in arguing the matter, pro and con. Is this debate, if indeed it can be called that, the least bit important? What is at stake?

***

According to ‘AI’ industry enthusiasts, nearly everything is at stake; a bold new world of thinking machines is upon us. What could be more important? To answer this question, let’s do another Google search, this time for the phrase ‘Project Nimbus’:

The first result returned was a Wikipedia article, which starts with this:

Project Nimbus (Hebrew: פרויקט נימבוס) is a cloud computing project of the Israeli government and its military. The Israeli Finance Ministry announced in April 2021, that the contract is to provide “the government, the defense establishment, and others with an all-encompassing cloud solution.” Under the contract, the companies will establish local cloud sites that will “keep information within Israel’s borders under strict security guidelines.”

Wikipedia: https://en.wikipedia.org/wiki/Project_Nimbus

What sorts of things does Israel do with the system described above? We don’t have precise details, but there are clues, such as what’s described in this excerpt from the +972 Magazine article “‘A mass assassination factory’: Inside Israel’s calculated bombing of Gaza”:

According to the [+972 Magazine] investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”

+972: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

***

History and legend tell us that in ancient Athens there was a place called the Lyceum, founded by Aristotle, where the techniques of the Peripatetic school were practiced. Peripatetic means, more or less, ‘walking about’, which reflects the method: philosophers and students, mingling freely, discussing ideas. There are centuries of accumulated hagiography about this school. No doubt it was nice for those not subject to the slave system of ancient Greece.

Similarly, debates about whether or not LLMs can reason are nice for those of us not subject to Hellfire missiles fired by Apache helicopters sent on their errands by targeting algorithms. But I am aware of the pain of the people who are subject to those missiles. I can’t unsee the death facilitated by computation.

This is why I have to leave the debating square, the social media-crafted lyceum. Do large language models reason? No. But even spending time debating the question offends me now. A more pressing question is what the people building the systems killing our fellow human beings are thinking. What is their reasoning?
