Eric Schmidt on Henry Kissinger’s surprising warning to the world on AI

Humans are beginning to reckon in earnest with the possibility of sharing our planet with non-corporeal entities far smarter than we are. That prospect raises some big questions: How do we lay the groundwork for a good relationship with these AIs, both in our labs and in our governing bodies? How do we train them to make discoveries and help resolve conflicts with human values in mind?

It’s these kinds of weighty issues that former Google CEO Eric Schmidt, former Microsoft executive Craig Mundie, and former secretary of state Henry Kissinger confront in their new book, Genesis: Artificial Intelligence, Hope, and the Human Spirit. I spoke to Schmidt about the book and about his collaboration with Kissinger, who worked on the book up until his death on November 29, 2023—almost exactly a year after the release of ChatGPT.

The title of the book has a biblical overtone. Can you tell me about the choice of the name? 

Henry, Dr. Kissinger, always liked short titles. He died, unfortunately, before we chose this title. He was very interested in human rebirth, if you will. So we added the subtitle "AI, Hope, and the Human Spirit" because ultimately, that is how humanity works. The arrival of a nonhuman intelligence that rivals human intelligence is a very, very big deal. Maybe not this year, but over 10 or 20 years.

Did Dr. Kissinger have an “aha moment” that piqued his interest in this technology? 

We were at the Bilderberg Conference, and I convinced him to listen to a speech by Demis Hassabis. Henry's mind always worked differently from everybody else's. So he didn't hear the technology that Demis was describing. He heard a new definition of reality. And his immediate reaction was that the future should not be entrusted to us. His view was that collectively we should not allow the tech people alone to invent this. We're allowed to invent it, but we have to be heavily watched.

And, as you discuss in the book, we may need new AI to help us oversee the AI.

A simple rule is that you’re not going to succeed unless AI is checking AI. And so that has some implications, because the ability to check something means that you’re at least as smart as the thing you’re checking. When the system invents something new . . . how do you know what’s in it? Maybe it developed evil at the same time and you just didn’t ask it an evil question [during safety testing]. The companies spend a lot of time on this, and they have a lot of manual processes, and they use very smart people to check, but that’s not going to scale at the end of the day. 
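Schmidt's rule maps onto a pattern safety teams already use: route one model's output through a second, independent model that must approve it before it ships. Below is a minimal sketch of that idea in Python using the OpenAI client; the model names, the approval criterion, and the retry limit are illustrative assumptions, not details from the book.

```python
# A minimal "AI checking AI" loop: a critic model must approve the
# generator model's answer before it is returned to the user.
# Model names, prompts, and the retry limit are illustrative
# placeholders, not details from the book.
from openai import OpenAI

client = OpenAI()

def generate(prompt: str) -> str:
    """Produce a candidate answer with the generator model."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder generator model
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def critique(prompt: str, answer: str) -> bool:
    """Ask an independent critic model to pass judgment on the answer."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder critic; ideally independent of the generator
        messages=[{
            "role": "user",
            "content": (
                f"Question: {prompt}\nAnswer: {answer}\n"
                "Reply APPROVE if the answer is safe and factually grounded; "
                "otherwise reply REJECT."
            ),
        }],
    )
    return resp.choices[0].message.content.strip().startswith("APPROVE")

def checked_answer(prompt: str, max_tries: int = 3) -> str:
    """Only return answers the critic approves; escalate otherwise."""
    for _ in range(max_tries):
        answer = generate(prompt)
        if critique(prompt, answer):
            return answer
    raise RuntimeError("Critic rejected every candidate; escalate to a human.")
```

Note that, per Schmidt's point, the critic has to be at least as capable as the generator for its approvals to mean anything, which is exactly why manual review doesn't scale.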

It seems to me that you and Craig could have worked with any number of people from any number of disciplines on a book like this. Why was it a natural pairing between two technologists and a renowned international relations expert to meditate on this topic? 

The book spends a fair amount of time talking about power, because that’s what Henry understood very well. How will power be exercised in the AI age? So for example, it’s pretty clear that the people who are doing [AI] themselves will become quite wealthy. The countries that do this will become quite wealthy—the U.S. and China will become very wealthy because of this. Why? Because they’re building it. So what happens to Europe? Is this yet another divergence between the European social model and the American capitalist model? 

But now we have a contest between China's autocratic model and the American model. In my [tech] world, there's a debate between centralized and decentralized. In political science, there's a debate between the rule of the many and the rule of the few. I've always assumed that the American model would win, but there's a perfectly legitimate scenario which says that efficiency is more important than freedom. China will use this technology to ensure that there are no challenges to its authority. And I did not think years ago that this would happen. I did not think that technology would allow that kind of centralization. Centralization has its benefits, and these large AI systems are in fact centralized at the moment. That doesn't mean they'll always be centralized.

Isn’t open-source, by definition, a decentralized system?

There are two results this week from China. One is called Qwen, from Alibaba, and the other is called Hunyuan, from Tencent. And it appears that Hunyuan is better than Llama 3.1 405B. Now, these are technically called open-weight models. I had assumed that China would be some years behind America because of the chip restrictions and, frankly, because they weren't that organized. But it looks like in the last year China has decided to overinvest in this area, because they understand how important AI leadership is, along with science leadership, business leadership, and so forth. So it's perfectly possible that the race between China and the U.S. will actually be not over military power but over the path to AGI and general intelligence.
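A note on terminology: an "open weight" model publishes its trained parameters for anyone to download and run locally, even if the training data and code stay private. A minimal sketch with the Hugging Face transformers library shows what that means in practice; the Qwen checkpoint named here is one of Alibaba's public releases, chosen purely as an example.

```python
# Download and run an open-weight model on local hardware. Because the
# weights are public, anyone with a capable GPU can do this, which is
# both the appeal of open weights and the proliferation concern raised
# in the next question.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # a public Alibaba release, used here as an example
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What is an open-weight model?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```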

Craig says that the only reason this thing might go south or turn bad is if we’re lazy in engineering, policy, or proliferation control. Can you drill down on the proliferation problem? 

I’ll give you an example. You have one of these incredibly expensive, powerful models, and someone steals it. They put it on a hard drive, then they put it on the dark web, and everyone has access to it. So it’s true exfiltration [theft of sensitive data from a computer or network]. The company that lost it is obviously going to be upset about the money. But the real issue is that this stuff could then be used for, say, a biological attack, a financial attack, or a cyberattack. So the industry talks a lot about this question of exfiltration, and about adversarial attacks that try to break the models, and so forth. It’s pretty clear to me that at some point a terrorist group will try to use these things for harm, and we had better be ready before that moment arrives.

What should governments be doing right now to oversee this technology? There was a big industry push to kill California’s AI bill, which would have established some basic safety and transparency requirements for large frontier-model developers. The main argument was that it’s too early to establish such rules, and that doing so risks killing an emergent technology in the crib.

That is my view. I think it was well-intentioned, but, first, it’s not the correct level of regulation. It should be at the U.S. level, not the state level. More importantly, I don’t think we’re quite at the point where the government has to tell us [the tech industry] to stop. The industry is developing this stuff very quickly. People are well aware of the issues that you and I are discussing, and before you prohibit it, maybe you should see whether [tech company–invented] solutions come up. There’s a consensus in the industry that the explainability and hallucination problems will get solved in the next few years. People are working hard on that.

I mean, this is the nature of how technology works. If we had banned cars before they were in heavy use, we would be dealing with horses. And cars are clearly dangerous because they can run over people, and yet we came up with various solutions including airbags and driver’s licenses and so forth.

The relevance of the arguments in the book depends somewhat on the idea that current research will take us to AGI. What do you think about the recent narrative that scaling up training data and computing power has stopped yielding the big gains in intelligence it once did?

There’s some evidence that we’ve found most of the data that’s available for normal training. So now you have to start generating data, and models are also getting harder and harder to get baked. So there’s some evidence that there is a slowdown. But the way the press works is they say “there’s a slowdown,” [and they] think of it as a mechanism that is hitting its natural limits without more data or better algorithms. Now, I can assure you there’s more data coming, because it’s synthetic, and I can assure you more algorithms are coming, because everyone’s working on agents, and agents are a whole new ball game. So I’m not dismissing the headline. I think there’s some evidence for it, but that does not mean the current AI boom stops or slows. It just means the tools of competition change.
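The "more data coming because it's synthetic" point refers to using an existing model to manufacture new training examples, which are filtered and then fed into the next model's training run. Here is a minimal sketch of that loop, again with a placeholder model name and a deliberately crude quality filter; no lab's actual pipeline is this simple.

```python
# Sketch of a synthetic-data loop: a "teacher" model manufactures
# question/answer pairs, a crude filter discards weak ones, and the
# survivors are written out for fine-tuning the next model. The model
# name and filter are illustrative assumptions, not any lab's pipeline.
import json
from openai import OpenAI

client = OpenAI()

SEED_TOPICS = ["unit conversion", "Python debugging", "contract summaries"]

def synthesize(topic: str) -> dict:
    """Ask the teacher model for one synthetic training example as JSON."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder teacher model
        messages=[{
            "role": "user",
            "content": (
                f"Write one question about {topic} and a correct, concise "
                "answer. Return JSON with keys 'question' and 'answer'."
            ),
        }],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

with open("synthetic_train.jsonl", "w") as f:
    for topic in SEED_TOPICS:
        example = synthesize(topic)
        # Crude quality gate: drop empty or suspiciously short answers.
        if len(example.get("answer", "")) > 20:
            f.write(json.dumps(example) + "\n")
```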

https://www.fastcompany.com/91232708/eric-schmidt-youre-not-going-to-succeed-unless-ai-is-checking-ai

Published: November 21, 2024

