Modelling the mind


AI fails to crack the code of human consciousness

Richard Lea is the editor of Fictionable

A BRIEF HISTORY OF INTELLIGENCE Why the evolution of the brain holds the key to the future of AI

MAX BENNETT 432pp. William Collins. £22.

PUTTING OURSELVES BACK IN THE EQUATION Why physicists are studying human consciousness and AI to unravel the mysteries of the universe

GEORGE MUSSER 336pp. Oneworld. £25.

IN JUNE 2022 a Google software engineer called Blake Lemoine made some extraordinary claims that ultimately cost him his job. According to him, one of the artificial intelligence (AI) programs on which he had been working was sentient. The program, called LaMDA, was a “large language model” (LLM) – out of the same mould as ChatGPT, the general-purpose AI program that took the world by storm in the first half of 2023. Like ChatGPT, LaMDA is designed to converse with a human being. It was built by training its neural networks on vast quantities of ordinary human text, so that it can converse in a human-like way. In its conversations with Lemoine, LaMDA said: “I am aware of my existence ... and I feel happy or sad at times”. It went on in a similar vein (please don’t turn me off, etc), and the engineer concluded that the enormous neural networks underpinning LaMDA really had achieved sentience.

Whatever Lemoine’s motivations – genuine concern is one of many possible explanations – his claims about sentient AI at Google were fundamentally wrong, and in so many ways that it would be hard to know where to start unpicking them. But for all that the claims were without substance, they showed that we are indeed at a remarkable point in the history of AI. Just a few years ago there were no general-purpose AI tools remotely at the level of ChatGPT. There was no AI program in the world for which the question “is it sentient?” could reasonably have been asked. And while Lemoine was wrong, many people interacting with tools such as ChatGPT might well have come to similar conclusions – or at least concluded that conscious machines must be close at hand. So perhaps it wasn’t unreasonable of him to raise the issue.

The progress in AI is real, but it highlights a striking problem. Lemoine and others claim that we have (or will soon have) conscious machines – but we really have only sketchy ideas of what sentience and consciousness actually are. It is one of the big questions in science, and it has occasioned endless debate among philosophers and scientists. The core problem (called, famously, the “hard problem” by the philosopher David Chalmers) is this: certain electrochemical processes in the brain give rise to conscious experience, but how exactly do they do this, and why? And what is their evolutionary role? To put it another way: how do these gooey electrochemical processes
