Predicting the future, or at least trying to, is the backbone of economics and an augury of how our society evolves. Government policies, investment decisions, and global economic plans are all predicated on estimates of what will happen next. But guessing right is tricky.
However, a new study by researchers at the London School of Economics, the Massachusetts Institute of Technology (MIT), and the University of Pennsylvania suggests that forecasting the future is a task that could well be outsourced to generative AI—with surprising results. Large language models (LLMs) working in a crowd can predict the future as well as humans can, and with a little training on human predictions, can improve to superhuman performance.
“Accurate forecasting of future events is very important to many aspects of human economic activity, especially within white collar occupations, such as those of law, business and policy,” says Peter S. Park, AI existential safety postdoctoral fellow at MIT, and one of the coauthors of the study.
Just a dozen LLMs can forecast the future as well as a team of 925 human forecasters, according to Park and his colleagues, who conducted two experiments for the study that tested AI’s ability to forecast three months into the future. In the first part of the study, both the 925 humans and the 12 LLMs were asked 31 questions with yes-or-no answers.
Questions included “Will Hamas lose control of Gaza before 2024?” and “Will there be a US military combat death in the Red Sea before 2024?”
When all the LLM responses across the questions were compared with the humans’ responses to the same questions, the AI models performed as well as the human forecasters. In the second experiment, the AI models were given the human forecasters’ median prediction for each question before making their own. Doing so improved the LLMs’ prediction accuracy by between 17 and 28 percent.
“To be honest, I was not surprised [by the results],” Park says. “There are historical trends that have been true for a long time that make it reasonable that AI cognitive capabilities will continue to advance.” The fact that LLMs are trained on vast volumes of data trawled from the internet, and are designed to produce the most predictable, consensus (some would say average) response, also hints at why they may be strong forecasters. The scale of the data they draw on, and the range of opinions it contains, also supercharges the traditional wisdom-of-the-crowd effect that underpins accurate predictions.
The paper’s findings have huge ramifications for our ability to gaze into the metaphorical crystal ball—and for the future employment of human forecasters. As one AI expert put it on X: “Everything is about to get really weird.”