Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
This week, I’m dedicating the newsletter to a conversation I had recently with the futurist Zack Kass about some of the risks and myths that will come with the advent of AI across business and society. Kass, who was head of go-to-market at OpenAI, is the author of the upcoming book, The Next Renaissance: AI and the Expansion of Human Potential.
What types of risks do you think we might be facing from AI in the next decade?
I give it a very low probability, but I think there is a real chance that we build systems so smart that we start to devalue critical thinking and decline cognitively. This seems very unlikely, since every generation is smarter than the last, but it’s worth calling out.
More likely is that, at some point, a percentage of the population will be more interested in virtual reality than the physical world, and that percentage may grow and eventually become dominant, which would obviously be catastrophic for population growth and quality of life. You can already see this trend with Gen Z, the anxious generation: the attachment to devices, the addiction that defines the device era.
Do you think that job losses to AI and automation are more of a near-term problem that we’ll have to deal with, along with the effects on the economy?
This is the thing I would love people to spend more time talking about: The risk is not an economic one. I think in a world where we actually automate all our work, something profoundly positive will happen economically. If we figure out how to automate everything and the cost of everything declines so far that you can live freely, the risk is more that people may not know what their purpose is in a world where their work changes so frequently and so much. I think the future is incredibly optional in all sorts of interesting ways, and I really do caution that the risk in all this is simply that people will lack purpose, at least for a couple of generations.
It will be our generation, and maybe the next, that bears the burden of figuring out what we do in a world where our work is so dynamic, and maybe relatively less meaningful because the world is so much more robust. That being said, there are also incredible new opportunities. For every job that goes away, there will probably be a new job created in some interesting way that we just cannot imagine. And I caution people to consider how they would have imagined the economy before the internet, or before electricity, for that matter. How could you have fathomed the economy of 1900, or of 1800?
What about other things like the use of AI to flood the information space with misinformation and disinformation?
I don’t even list it as one of my primary concerns, because misinformation is one of those things that will have an incredible counterbalance: for every article and every photo generated by AI, we will have a system to actually determine its validity.
And we will have much more robust truth-telling in the future. This has been true forever. And by the way, I remember going to the grocery store with my mom and looking at magazine covers of women and my mom saying, “Oh God, Cindy Crawford is so beautiful,” because for a long time they were Photoshopping photos and just not telling us. Now, of course, we all know that every photo is Photoshopped; we have that lens through which we view the world. I think—and this is what I say to publicists—we will have a return to traditional media if we do it right. We need the institutions to recapture trust. Otherwise it will be very hard for people to know what to believe: in a world where people are more interested in Reddit and Quora than in traditional media, and where trust in that media has collapsed (and it has; the institutions have lost so much trust), this could go a little strangely.
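Kass doesn’t specify what such a validity system would look like, but one commonly discussed mechanism is cryptographic provenance: a camera or publisher signs content at creation, and anyone can later verify that it hasn’t been altered. Here is a minimal sketch in Python, assuming the third-party `cryptography` package; the `is_authentic` helper and the key handling are illustrative, not any particular product’s API.

```python
# Illustrative content-provenance check: the publisher signs the
# content bytes once; any later modification invalidates the signature.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

publisher_key = Ed25519PrivateKey.generate()  # kept secret by the publisher
public_key = publisher_key.public_key()       # distributed to readers

photo_bytes = b"...raw image bytes..."
signature = publisher_key.sign(photo_bytes)   # shipped as metadata

def is_authentic(content: bytes, sig: bytes) -> bool:
    """Return True only if the content matches the publisher's signature."""
    try:
        public_key.verify(sig, content)  # raises on any mismatch
        return True
    except InvalidSignature:
        return False

print(is_authentic(photo_bytes, signature))        # True
print(is_authentic(b"tampered bytes", signature))  # False
```

Industry efforts like C2PA take roughly this approach at scale, embedding signed provenance metadata directly in media files.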
And we didn’t even really need AI for that to happen.
That’s exactly right. So I think now presents an opportunity for us to find ways forward, and there’s a lot of historical precedent. The printing press introduced all sorts of incredible ways for people to behave as charlatans, and you don’t have to go back that far. We studied a bunch of the people who sold early Ponzi schemes: there was an incredible amount of financial fraud in the late 19th and early 20th centuries, because people could just print fake securities and sell them, and there was no way to actually validate things. And obviously, there’s an incredible new way now that we can actually score things. I basically never talk about blockchain, but I do think blockchain will serve as a means to keep an official record of lots of things, a place that cannot be tampered with.
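Kass doesn’t elaborate on the mechanism, but the tamper-evident property he describes can be shown with a toy hash chain, the core data structure behind a blockchain: each entry commits to the hash of the one before it, so altering any past record breaks every hash after it. A minimal sketch in Python (illustrative only; a real blockchain adds consensus and replication across many peers):

```python
# Toy append-only ledger: each entry stores the previous entry's hash,
# so editing history is detectable by re-walking the chain.
import hashlib
import json

def entry_hash(body: dict) -> str:
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(ledger: list, record: str) -> None:
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    body = {"record": record, "prev": prev}
    ledger.append({**body, "hash": entry_hash(body)})

def verify(ledger: list) -> bool:
    prev = "0" * 64
    for entry in ledger:
        body = {"record": entry["record"], "prev": entry["prev"]}
        if entry["prev"] != prev or entry["hash"] != entry_hash(body):
            return False
        prev = entry["hash"]
    return True

ledger: list = []
append(ledger, "security issued: bond #1842")
append(ledger, "security transferred: bond #1842")
print(verify(ledger))            # True
ledger[0]["record"] = "forged"   # rewrite history...
print(verify(ledger))            # ...and verification fails: False
```

Whether the record lives on a public blockchain or a conventional audited database, the point is the same: forging history becomes detectable rather than silent.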
What are your thoughts about longer-term AI risks, the existential risks people like Geoff Hinton and Eliezer Yudkowsky talk about?
The existential risk has two parts. The first is: is this machine going to unwittingly do something untoward—are we building something that is going to do something really bad on its own? That is the alignment problem. The real risk in all this is not that the machine wakes up one day and says, “I’m going to kill us all.” The theory of the alignment problem basically says we need to make sure the machine cares about its unintended consequences, because we [humans] may not fully appreciate what we’re doing.
And then there’s the bad actor. And this I think is also misunderstood because the real concern around bad actors in my opinion is not high-resource bad actors. I don’t spend time worrying about North Korea with AI. They already have plenty of tools at their disposal to be bad actors, and the reality is we get better at managing high-resource bad actors all the time. The low-resource bad actor problem is a risk. In a world where we embolden anyone to do interesting things with this technology, we should create very punitive measures to police bad acting with it. We should make bad actors terrified to use AI to do bad things—financial crime, deepfakes, etc.
And this is something that we could do really easily, like we did with mail theft. We said, hey, we built a system that’s really fragile, and if we let people steal mail, domestic commerce will collapse, so we need to make it a felony offense.
What needs to be done to address these risks over the next five years?
We should figure out how to come up with international standards by which all models are measured, and penalize companies that use models that don’t meet them. We should make sure that everyone honors alignment standards.
The second is explainability standards. The expectation that a model can be perfectly explainable is inherently dangerous, because there’s plenty in a model that cannot be explained. But we should set standards requiring that tasks that demand explainability actually get it. For example, if you’re going to use a model to write an insurance policy, it should meet an explainability standard.
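Kass leaves “explainability standard” abstract. One way to make the idea concrete is to require that every automated decision decompose into named, auditable per-factor contributions that fully account for the output. The insurance-pricing sketch below is hypothetical; the factors, weights, and function names are invented for illustration.

```python
# Hypothetical explainability check for a toy insurance-pricing model:
# every quote must break down into per-factor dollar contributions
# that sum exactly to the final premium.
BASE_PREMIUM = 500.0
WEIGHTS = {  # dollars added per unit of each (invented) risk factor
    "driver_age_under_25": 320.0,
    "prior_claims": 180.0,
    "annual_miles_10k": 45.0,
}

def quote_with_explanation(features: dict) -> tuple[float, dict]:
    """Return a premium plus the per-factor breakdown behind it."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    return BASE_PREMIUM + sum(contributions.values()), contributions

def meets_explainability_standard(premium: float, contributions: dict) -> bool:
    # The toy standard: the explanation must fully account for the price.
    return abs(premium - (BASE_PREMIUM + sum(contributions.values()))) < 1e-9

premium, why = quote_with_explanation(
    {"driver_age_under_25": 1, "prior_claims": 2, "annual_miles_10k": 1.2}
)
print(premium)                                      # 1234.0
print(why)                                          # per-factor dollar amounts
print(meets_explainability_standard(premium, why))  # True
```

A black-box model pricing the same policy would fail this check unless paired with an attribution method, which is roughly the trade-off such a standard would force.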
And then the third thing is bad acting: We just have to make it scary for low-resource bad actors to use this stuff. The market will figure itself out. Europe, I think, is headed for some really serious economic suffocation pretty soon, because it has passed a bunch of really strange policies that may not protect consumers so much as give policymakers a reason to celebrate. If we can get these things right, the market will behave in a way that serves us, the constituents.
Was Biden’s executive order on AI constructive?
It was passed at a time when basically no one working on it knew much about what they were talking about. So it’s less that it was lip service and more that it didn’t actually change behavior. It really is one of those things where you have to ask: Are you just doing this to appease voters?
A lot of people in Congress have the perspective that “we missed the boat on social media and we sure don’t want to miss it again on AI.”
All progress has a cost . . . the cost of social media, the cost of the internet, is pretty great. The cost of social media on young children’s minds is terrible. It is also now something that we as individuals are identifying and working through. Passing policy on these things has potentially very dangerous consequences that you cannot unwind—economic consequences, massive learning and development consequences. It’s not that the government “missed the boat” on social media, so to speak. They just weren’t even paying attention. No one went into this thing with eyes wide open, because there was no one in Congress, if you recall, who knew anything about what the internet was. So you basically had Mark Zuckerberg testifying in front of a bunch of people who were like, “I don’t know.”
I’ve written about California’s AI bill that was vetoed by the governor. What are your thoughts on that approach?
I fully support the regulation of AI. I’m not asking for this to be the Wild West. This is the most important technology we will build in our lifetime, except maybe for quantum computing. It’s really scary when people celebrate policy for the sake of policy, especially when it comes at the cost of what could be truly society-improving progress. Massive amounts of progress are probably going to be found on the other side of this, and that’s not a hot take, because that’s what technology does for the world. People spend so much time fixated on what the government will do to solve their problems that they’ve forgotten that technology is basically doing all the things that have been promised to us. The utopias that we build in our minds may actually come to pass. I think they will, for what it’s worth, and not because of government intervention but because of technological progress; because what one person can do today will pale in comparison to what one person can do [tomorrow].
More AI coverage from Fast Company:
- We used Google’s AI to analyze 188 predictions of what’s in store for tech in 2025
- Andrew Ng is betting big on agentic AI
- We called 1-800-ChatGPT to see if OpenAI would ruin Christmas
- As Bible sales boom, so does Christian tech
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.