“Are AIs doing something like human reasoning?”


As ChatGPT disrupts our lives from every angle, it seems like a new kind of machine intelligence may be upon us. Melanie Mitchell talks to Daniel Cossins about how to understand this moment

ARTIFICIAL intelligence has been all over the news in the past few years. Even so, in recent months the drumbeat has reached a crescendo, largely because an AI-powered chatbot called ChatGPT has taken the world by storm with its ability to generate fluent text and confidently answer all manner of questions. All of which has people wondering whether AIs have reached a turning point.

The current system behind ChatGPT is a large language model called GPT-3.5, which consists of an artificial neural network: a web of interlinked processing units that enables a program to learn. Nothing unusual there. What surprised many, however, is the extent of the abilities of the latest version, GPT-4. In March, Microsoft researchers, who were given access to the system by OpenAI, which makes it, argued that GPT-4 displays “sparks” of artificial general intelligence, because it shows prowess on tasks beyond those it was trained on, as well as producing convincing language. That is a long-held goal for AI research, often thought of as the ability to do anything that humans can do. Many experts pushed back, arguing that the system is a long way from human-like intelligence.

So just how intelligent are these AIs, and what does their rise mean for us? Few are better placed to answer that than Melanie Mitchell, a professor at the Santa Fe Institute in New Mexico and author of the book Artificial Intelligence: A guide for thinking humans. Mitchell spoke to New Scientist about the wave of attention AI is getting, the challenges in evaluating how smart GPT-4 really is, and why AI is constantly forcing us to rethink intelligence.

Daniel Cossins: There is a groundswell of interest in AI at the moment. Why is it happening now?

Melanie Mitchell: The first thing is that these systems are now available to the public. Anyone can easily play with ChatGPT, so people are discovering these systems and what they can do. More broadly, we are seeing an era of astounding progress in linguistic abilities. Over the past five years or so, we’ve seen the emergence of these large language models, trained on enormous amounts of human-generated language, and they’re able to generate fluent, human-sounding text. Their fluency gives the appearance of human-like intelligence. That has caught people’s imagination; there’s this feeling that the AIs we’ve seen in movies and read about in science fiction are finally here. I think people are feeling part wonder and part fear at what these AIs might do.

You mention “human-like intelligence”. Just how intelligent are today’s generative AIs, like those that generate text, and how do we assess that?

This is the subject of enormous