
The AI Forecast: What Humanity Predicts for Artificial Intelligence.

What do prediction markets and pollsters say about AI's future?

Intro: The Power of AI   

Artificial intelligence (AI) has gone mainstream — if you haven't heard. I know this because my dad references ChatGPT in everyday conversation, my personal barometer for what's mainstream (love you, Dad). With each passing day, the nascent technology transforms industries and captures the public's imagination, like this AI-generated picture of Trump and Obama playing pickup basketball:

Yet, the future of artificial intelligence remains shrouded in uncertainty. Modeling potential outcomes is manageable with a dataset of past results, but AI's consequences will likely prove unprecedented. So what to do? Perhaps we possess insufficient information to make predictions, but we do know what others are predicting — which is a dataset in and of itself.    

So if pollsters and prediction markets offer humanity's best guess — what do we collectively believe about AI's future? Do forecasters and survey respondents envision AI systems taking our jobs, inciting political discord, or wiping out humanity? Let's see what the people have to say.

But First, a Quick Primer on Prediction Markets:  

Prediction markets (a.k.a. betting markets) function like stock markets for uncertain outcomes. Take Polymarket's prediction market for Volodymyr Zelenskyy winning "2022 Time Person of the Year" as an example:

In December, I could have bought $100 worth of "yes" shares at 68¢ each, since the market believed this outcome was 68% likely. When Zelenskyy won, each of my 68¢ "yes" shares would redeem for $1, and my $100 investment would become roughly $147. Congrats to Zelenskyy (though he probably doesn't care), and congrats to me.
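If it helps to see the arithmetic, here is a minimal sketch of that payout math in Python (the function name and numbers are purely illustrative, not anything Polymarket actually exposes):

```python
def prediction_market_payout(stake: float, yes_price: float) -> dict:
    """Payout math for a prediction-market bet that resolves "yes".

    stake:     dollars wagered on the "yes" outcome
    yes_price: price of one "yes" share in dollars (the market's implied probability)
    """
    shares = stake / yes_price      # each share costs `yes_price` dollars
    payout = shares * 1.00          # winning shares redeem for $1 each
    return {
        "implied_probability": yes_price,
        "shares_bought": round(shares, 2),
        "payout_if_yes": round(payout, 2),
        "profit_if_yes": round(payout - stake, 2),
    }

# The Zelenskyy example above: $100 of "yes" shares at 68 cents each
print(prediction_market_payout(stake=100, yes_price=0.68))
# -> ~147 shares, ~$147 payout, ~$47 profit if the market resolves "yes"
```

The share price doubles as the market's implied probability, which is why a 68¢ price is read as a 68% chance.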

Prediction markets have proven relatively accurate in forecasting uncertain events (elections, sports, economics, etc.), given their incentive structures and scale: 

These markets are imperfect, though they are as effective as any other high-quality forecasting apparatus (FiveThirtyEight models, Wall Street analysts, Paul the Octopus, etc.).  

I selected markets and surveys to cover a wide variety of salient AI concerns while also providing entertainment value.

Predicting AI's Political Future: 

In the early days of the internet, the US government adopted a largely hands-off approach to regulating the world wide web. This light-touch stance was exemplified by The Telecommunications Act of 1996 and Section 230, which encouraged rapid growth and provided legal immunity for internet platforms hosting user-generated content.     

But this hands-off approach had consequences, as monopolies, privacy concerns, and widespread misinformation define today's internet. Moreover, US lawmakers now struggle to regulate gigantic tech firms that reached massive scale partly due to that lack of regulation. That said, Amazon Prime exists — so it's not all bad.

Will regulators learn from prior mistakes and respond aggressively to this new wave of innovation? And how will the masses react to technological disruption — will there be unrest?  

Prediction markets foresee more national bans for ChatGPT and laws regulating generative art. Forecasters also predict artificial intelligence will be a focal point of the 2024 presidential debates. I struggle to imagine these candidates debating AI in a manner that isn't highly cringe-inducing. However, if you're clamoring to hear Joe Biden and Donald Trump/Ron DeSantis disagree over large language models, this election season may be a dream come true.

On the other hand, market participants forecast a lower likelihood of an AI company being attacked (by modern-day Luddites?) or the US limiting computing capacity (which would stop lone wolves from single-handedly building, say, Skynet). 

Forecasters find it unlikely that artificial intelligence will become embroiled in America's culture wars — at least this year. I have to say I love the wording of this particular question — as if the culture wars are a sentient chaos monkey intent on invading every facet of culture.

Perhaps the culture wars will pass over AI (like Passover), or maybe AI will supplant social media as the thing everyone is mad about. Let's hope not.                               

Predicting AI's Impact on the Human Race:

Artificial intelligence has not been depicted favorably in popular culture. From the menacing HAL 9000 of 2001: A Space Odyssey to the post-apocalyptic hellscapes of The Matrix and Terminator movies, fictional representations of AI have fascinated and alarmed audiences with visions of war-like relations between humans and artificial general intelligence.  

These prospective futures are colorful — after all, they're crafted to entertain — but are they inevitable? 

Markets are surprisingly optimistic about humanity avoiding extinction — which is always good — assigning AI a 25% chance of wiping out humanity before 2100. Hooray.

Even if AI doesn't obliterate the human race, forecasters overwhelmingly believe machine intelligence will catch up to human intelligence by 2040. Furthermore, markets predict a significant chance of these superior AI systems directly causing more than 100 deaths or $1B in economic damages.  

And finally, markets are slightly optimistic (but only slightly) that there will be a "positive transition" to a world of superior artificial general intelligence. This question is decidedly vague — it reads more like a vibe check on human flourishing. And that vibe is somewhat upbeat. 

Predicting AI's Disruption of the Job Market:

Why do we like a particular picture or article? What connection do we feel with a Picasso painting or a New York Times opinion piece? Sure, the content of their work may delight us, but there is an element of personal connection between a producer and consumer — or at least there is today.

With every pope-centric deepfake or successful bar exam result, ChatGPT, DALL-E, and Midjourney further demonstrate the fallibility of human ingenuity. Our own creations may very well outpace us — or so AI doomers evangelize. So do the masses also forecast wide-scale job loss and economic disruption?

Looking at YouGov opinion polling, we see a bizarre and somewhat contradictory finding on perceived future job loss:

Respondents assume AI will spur job loss, but it won't be their job. This question was crafted to measure sentiment, so some optimism bias isn't a bad thing. 

And which jobs do participants project as positively or negatively impacted by advances in artificial intelligence?

YouGov survey results indicate that manufacturing workers, retail sales workers, and customer service agents face the most significant risk of job loss. Conversely, this poll shows computer programmers, data scientists, and engineers with the highest likelihood of job gains and (paradoxically) an elevated chance of job losses. How can both be possible?

Well, imagine a world where these knowledge workers are quite literally programming themselves out of a job. Building AI agents that can code more AI agents could eventually backfire on the human agents who tipped the first domino.

Final Thoughts: To Fret or Not to Fret?

In the final acts of Martin Scorsese's Shutter Island and Christopher Nolan's Memento, each film's protagonist discovers his life is an elaborate delusion. Rather than escape their fictions, both protagonists commit to a life of self-deception — continuing to live in a fantasy for the benefit of emotional self-preservation. These characters choose purpose over enlightenment.

Why do I bring up these movies? Because I sympathize with these characters and their acts of willful ignorance. Deep down, I believe AI advances pose dire consequences for humanity; clearly, I'm more pessimistic than many prediction markets and survey respondents.  

There is an interplay between sentiment and precision when it comes to forecasting, with the former often impeding the latter. When optimizing for precision, my best guess is that AI will be bad for us. However, my personal inclination is to avoid cynicism and despair. So much like the characters of Shutter Island and Memento, I pick the illogical route (by that, I mean hoping for a brighter AI future) because living in anxiety is less than ideal.

Is ignoring your instinct self-deception? Maybe. But I'd rather choose optimism and be wrong than choose pessimism and be right.  

Plus, I have it on good authority that the AI apocalypse won't happen before 2100 — so let's enjoy the time we have left. 


Want to chat about data and statistics? Have an interesting data project? Just want to say hi? Email [email protected]