For truly intelligent AI, we need to mimic the brain’s sensorimotor principles

In a recent essay titled “The Intelligence Age,” Sam Altman paints a picture of the future of AI. He states that with AI, “fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace.” On an individual level, he states, “We can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine.” The benefits of AI, according to Altman, will soon be available to everyone around the world.

These claims are absurd, and we shouldn’t let them pass without criticism. Subsistence farmers in Central Asia can imagine living in a villa on the Riviera, but no AI will make that happen. The “discovery of all of physics,” if it is even possible, will require decades or centuries of building sophisticated experiments, some of which will be located in space. The claim that AI will make this commonplace doesn’t even make sense.

Altman isn’t alone in claiming that we are on the cusp of creating super-intelligent machines that will solve most of the world’s problems. This is a view held by many of the people leading AI companies. For example, Dario Amodei, CEO of Anthropic, has proposed that AI will soon be able to accomplish in five to ten years what humans, unassisted by AI, would accomplish in fifty to one hundred years. Although it is not guaranteed, he thinks AI will likely eliminate most cancers, cure most infectious diseases, and double the human lifespan. These advances will occur because AI will be much smarter than humans. As he put it, we will have “a country of geniuses,” although they will be “geniuses in a datacenter.”

Is Today’s AI Intelligent?

Today’s AI is based on a technology called deep learning. Altman and other deep learning advocates believe that to create super-intelligent machines, all we have to do is scale this technology. If we feed ever-increasing amounts of data into these systems, using larger and larger computers, there will be no limit to what AI can do. This belief has led to a race to build hundreds of massive server farms around the world to run deep learning AI systems.

Today’s AI is impressive and useful. So far, it has improved when trained on more data using bigger computers. But is that it? Is the challenge of building truly intelligent, world-altering artificial intelligence already solved, so that all that remains is to scale today’s systems? Will the dominance and profits of this technology go to the tech companies that collect the most data and build the largest server farms before anyone else?

There are plenty of AI scientists who believe that deep learning systems are not intelligent, that they have fundamental limitations that won’t be overcome by making them bigger. I agree with these criticisms, but I am not going to review these arguments here. Instead, I want to talk about brains. Brains suggest an alternate way to build AI, one that I believe will replace deep learning as the central technology for creating artificial intelligence. This is not a widely held view today, but it will be in the future.

A Brain-Based Blueprint

Why should we care about how the brain works? The answer should be obvious: The human brain is the only thing that everyone agrees is intelligent. Up to this point in time, everything we have ever associated with intelligence was created by brains. It is therefore reasonable to suppose that understanding how the brain creates intelligence would be useful for creating intelligent machines. We may not need to replicate brains exactly, but to ignore how they work is foolish.

Deep learning and brains are both built using networks of neurons. However, the artificial neurons and networks used in deep learning are not at all like the real neurons and networks we see in brains. They are completely different. Most deep learning advocates are willing to ignore these differences. To them, the details of how the brain works are mostly irrelevant. The thing that matters most is that deep learning can solve problems that humans also solve and, in many cases, outperform the best humans. Given how well it does at many tasks, deep learning advocates believe that deep learning-based AI will eventually exceed human performance on all tasks. All we need to do is collect more data and build bigger computers and we will enter the “Intelligence Age.”

As a scientist who has spent decades studying how brains create intelligence, I can say with certainty that brains work on completely different principles than deep learning, and these differences matter. We cannot create truly intelligent AI using deep learning. To understand why, we need to look at how brains and AI systems work on the inside, not just what they do on the outside.

The Fundamentals of Learning

All learning systems share three basic components. First, there is the data that the system is trained on. We can ask, where does this data come from and how is it fed into the system? Second, the system learns a model of the training data. AI scientists call this a model of the world. But the model is an abstraction of the real world based on, and limited by, the training data. We can also ask: How does the model represent knowledge? This tells us what kind of things the model can and cannot learn in principle. And finally, the model is used to generate outputs, to do something.
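To make these three components concrete, here is a minimal sketch in Python. The toy learner and its names are my own illustration, not any particular AI system: it takes in training data, builds a (very limited) model of the world from that data, and uses the model to generate outputs.

```python
# 1. Training data: the toy system only ever sees these (input, target) pairs.
training_data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# 2. The model: an abstraction of the world, based on and limited by the data.
#    Here the "knowledge" is a single slope, so the model can only ever
#    represent linear relationships, no matter what the world is really like.
slope = sum(y / x for x, y in training_data) / len(training_data)

# 3. The output: the model is used to do something, in this case predict.
def predict(x):
    return slope * x

print(predict(5.0))  # 10.0, correct only if the world really is linear
```

The point of the sketch is the second component: what a model can and cannot learn is fixed by how it represents knowledge, before any data arrives.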

AI scientists have historically focused on the third component, the output. Progress in AI is judged by milestones, such as beating the best humans at chess or Go, labeling images, or translating text from one language to another. However, if we want to understand what an AI system can do, not just now but in the future, or why AI fails, or what risks it presents, then we need to look at the inputs and the models, not the outputs. The key things to understand are how a system learns and how the model represents the world. When we look at these two components, we can see that brains and deep learning are completely different.

How Deep Learning Learns

Let’s start with deep learning, using large language models such as ChatGPT as an example. In this case, the training data is text, lots of text. Before being fed to the model, the text is broken into parts called tokens. Tokens can be words, parts of words, letters, numbers, punctuation, etc. By training on huge amounts of text, the system learns what tokens are expected given other surrounding tokens. The resulting model is a model of language. The output of ChatGPT is also language (a string of tokens). Language in and language out. This works well if the topics you are interested in have been described in text that the system trained on. But there are two large problems with deep learning systems like this.
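Here is a minimal sketch of that idea in Python. It is a toy, not how ChatGPT actually works: real systems use subword tokens and billions of neural-network parameters, while this sketch uses whole words and simple counts. The principle, though, is the same: learn which tokens are expected given surrounding tokens.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat purred ."
tokens = corpus.split()  # crude tokenization: real tokenizers split subwords too

# "Training": count which token follows which in the text.
follows = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    # Return the most likely next token, given only the training text.
    return follows[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- language in, language out
```

Notice that everything the model “knows” comes from the statistics of the training text; nothing in it ever touches the world the text describes.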

The first problem is that the language model has no means of knowing whether its training data is true or false. It has no way to test its knowledge, to verify or falsify what it has learned. This is because deep learning models such as ChatGPT have no way to interact with the physical world. If I wrote that all dogs have purple fur, you could look at dogs and see that this is false. ChatGPT can’t do that; it only knows what was in the text it was trained on. This is why deep learning is inherently prone to false beliefs. Unlike you and me, it can’t check the real world to see what is true and what is false.

The second problem is that deep learning models can’t discover new knowledge. The discovery of new knowledge is the essence of intelligence and human progress. You could train ChatGPT on every scientific paper ever written and it couldn’t tell you whether there is life on Mars. To answer that question, we have to go to Mars and look. Even when deep learning systems make discoveries, they are limited by their training data. For example, AlphaFold is a deep learning system that predicts the three-dimensional shape of a protein from its amino acid sequence. But the system was trained on curated data from thousands of published experiments, experiments designed and run by humans. AlphaFold is able to extract answers that are, in some sense, already in the data. Most scientific discoveries are not like this. Usually, we don’t have the needed data. Something or someone has to go out and collect it.

How Brains Learn

Now let’s look at brains. Brains are trained on data from sensors. Our skin, ears, and eyes directly detect the physical world. When you pick up a cat, you see and feel the fine fibers of its fur, you feel the warmth of its skin, and you hear its purr. In contrast, ChatGPT’s knowledge of cats is limited to what people have written about cats and other animals. It may be able to answer questions about cats because it has been trained on text written by humans, but ChatGPT can’t imagine how a cat feels or sounds. It can’t imagine these sensations because it has no sensors or means of interacting with the physical world. ChatGPT only knows words. In contrast, brains directly sense the physical world.

Our bodies and our sensors are constantly moving. When we walk, turn our head, touch objects, or move our eyes, the input to the brain changes. The brain’s model predicts what it should sense after each movement. These predictions are the brain’s way of testing its knowledge of the world. If our predictions are incorrect, we know that our internal model is flawed and needs to be updated. By moving, the brain can also explore new objects and new places, and thus learn new knowledge.

Two types of input are required for brains to learn. One input comes from the sensors, for example, a tactile edge, an auditory tone, or a visual color. The other type of input represents how the sensors are moving. Internally, the brain keeps track of the location of its sensors relative to things in the world. The brain knows where things are, not just what they are. The brain learns models of the world by associating what is sensed with where it was sensed. In this way, the brain’s models represent the physical structure of the world.

The output of the brain is also movement. The brain uses movement to test the accuracy of its models, to explore and learn new knowledge, and to manipulate the world. Even when we create language, we do so via muscle contractions. The brain is a sensorimotor system. Sensor and movement data are the inputs, movement is the output. This is the biggest and most important difference between brains and deep learning.
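Here is a minimal sketch of this sensorimotor loop in Python. The toy world and model are my own illustration, not any published algorithm: the model stores what it sensed at each location, predicts before it moves, and updates itself whenever a prediction fails.

```python
# A toy "world": the feature present at each location on an object.
world = {(0, 0): "edge", (0, 1): "fur", (1, 1): "fur"}

model = {}          # learned model: location -> expected feature
location = (0, 0)   # the brain tracks where its sensor is

for step in [(0, 1), (1, 0), (0, -1)]:           # the output is movement
    location = (location[0] + step[0],           # input 2: how the sensor moved
                location[1] + step[1])
    predicted = model.get(location)              # predict what "here" should feel like
    sensed = world.get(location)                 # input 1: what the sensor detects
    if predicted != sensed:                      # a failed prediction means the
        model[location] = sensed                 # model is flawed: update it

print(model)  # what was sensed, tied to where it was sensed
```

Note how different this is from next-token prediction: the model is tested against the world after every movement, and its knowledge is organized by location, not by word order.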

Brains have several other important attributes. These include the ability to learn continuously and to use very little energy, two problems that still plague deep learning. Importantly, we understand how brains do this. These will be attributes of brain-based AI.

Building the Future of AI

Can we build AI systems that work on the same principles as the brain? Yes. My team has been working on an open source project where anyone can contribute to the effort of creating sensorimotor AI.

What will the future of AI look like? Sensorimotor AI and deep learning AI will coexist. Each has its own value. For example, our brains did not evolve for language. In evolutionary terms, language is a recent addition. Despite how good we think we are at language, we shouldn’t be surprised that systems specifically designed to model language can outperform humans, in the same way that calculators can outperform us at math. So deep learning systems will continue to be valuable for specific tasks. But the heart of AI, where the most exciting progress will be, will be sensorimotor AI, not deep learning.

Most of the things we want intelligent machines to do require interacting with the world. Scientific progress requires creating and using tools. Building colonies on Mars requires AI systems that do the work of engineers and construction workers in environments that are unsuitable for humans. AI-based personal assistants need to cook, clean, and run errands. We can try to apply deep learning to these tasks, but deep learning is not designed to interact with the world. If we want artificial intelligence that can interact with the world, directly test its knowledge, learn continuously, and use little energy, then we have to learn from the brain and develop sensorimotor AI.

I am excited about the future of AI and how it can help humans survive and grow. Where I differ from some of my colleagues is that I see deep learning as an important but limited technology. The early success of deep learning has fooled many people into believing that deep learning is all that is needed. This isn’t true. Truly intelligent machines need to learn structured models of the world. They need to move and interact with the world in order to learn, to expand knowledge, and to achieve goals. Fortunately, we have the brain as our guide for how to do this.

https://www.fastcompany.com/91228937/for-truly-intelligent-ai-we-need-to-mimic-the-brains-sensorimotor-principles
