Facing AI extinction



Why do many of today’s artificial intelligence researchers dismiss the potential risks to humanity, asks David Krueger

IN A recent White House press conference, press secretary Karine Jean-Pierre couldn’t suppress her laughter at the question: Is it “crazy” to worry that “literally everyone on Earth will die” due to artificial intelligence? Unfortunately, the answer is no.

While AI pioneers such as Alan Turing cautioned that we should expect “machines to take control”, many contemporary researchers downplay this concern. In an era of unprecedented growth in AI abilities, why aren’t more experts weighing in?

Before the deep-learning revolution in 2012, I didn’t think human-level AI would emerge in my lifetime. I was familiar with arguments that AI systems would insatiably seek power and resist shutdown – an obvious threat to humanity if such behaviour were to occur. But I also figured researchers must have good reasons not to be worried about human extinction risk (x-risk) from AI.

Yet after 10 years in the field, I believe the main reasons are actually cultural and historical. By 2012, after several hype cycles that didn’t pan out, most AI researchers had stopped asking “what if we succeed at replicating human intelligence?”, narrowing their ambitions to specific tasks like autonomous driving.

When concerns resurfaced outside their community, researchers were too quick to dismiss outsiders as ignorant and their worries as science fiction. But in my experience, AI researchers are themselves often ignorant of arguments for AI x-risk.

One basic argument is by analogy: humans’ cognitive abilities allowed us to outcompete other species for resources, leading to many extinctions. AI systems could likewise deprive us of the resources we need for our survival. Less abstractly, AI could displace humans economically and, through its powers of manipulation, politically.

But wouldn’t it be humans wielding AIs as tools who end up in control? Not necessarily. Many people might choose to deploy a system with a 99 per cent chance of making them phenomenally rich and powerful, even if it had a 1 per cent chance of escaping their control and killing everyone.

Because no safe experiment can definitively tell us whether an AI system will actually kill everyone, such concerns are often dismissed as unscientific.