From training dogs to intelligent machines: Here’s how reinforcement learning is teaching AI

Understanding intelligence and creating intelligent machines are grand scientific challenges of our times. The ability to learn from experience is a cornerstone of intelligence for machines and living beings alike.

In a remarkably prescient 1948 report, Alan Turing—the father of modern computer science—proposed the construction of machines that display intelligent behavior. He also discussed the “education” of such machines “by means of rewards and punishments.”

Turing’s ideas ultimately led to the development of reinforcement learning, a branch of artificial intelligence. Reinforcement learning designs intelligent agents by training them to maximize rewards as they interact with their environment.

As a machine learning researcher, I find it fitting that reinforcement learning pioneers Andrew Barto and Richard Sutton were awarded the 2024 ACM Turing Award.

What is reinforcement learning?

Animal trainers know that animal behavior can be influenced by rewarding desirable behaviors. A dog trainer gives the dog a treat when it does a trick correctly. This reinforces the behavior, and the dog is more likely to do the trick correctly the next time. Reinforcement learning borrowed this insight from animal psychology.

Reinforcement learning, however, is about training computational agents, not animals. The agent can be a software agent, like a chess-playing program, or an embodied entity, like a robot learning to do household chores. Similarly, the agent’s environment can be virtual, like a chessboard or the designed world of a video game, or physical, like the house where a robot works.

Just like animals, an agent can perceive aspects of its environment and take actions. A chess-playing agent can access the chessboard configuration and make moves. A robot can sense its surroundings with cameras and microphones. It can use its motors to move about in the physical world.

Agents also have goals that their human designers program into them. A chess-playing agent’s goal is to win the game. A robot’s goal might be to assist its human owner with household chores.

The reinforcement learning problem in AI is how to design agents that achieve their goals by perceiving and acting in their environments. Reinforcement learning makes a bold claim: All goals can be achieved by designing a numerical signal, called the reward, and having the agent maximize the total sum of rewards it receives.
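In code, that interaction is just a loop: the agent observes its environment, acts, receives a reward, and tries to make the running total of rewards as large as possible. The Python sketch below is a minimal illustration of that loop only; the coin-flip environment, its reset and step methods, and the random placeholder policy are all hypothetical stand-ins, not any real library’s API.

```python
import random


class CoinFlipEnvironment:
    """Toy environment: the agent guesses a coin flip and is rewarded when correct."""

    def reset(self):
        self.secret = random.choice(["heads", "tails"])
        return "new round"  # the observation the agent perceives

    def step(self, action):
        reward = 1.0 if action == self.secret else 0.0
        done = True  # each round is a one-step episode
        return "round over", reward, done


def random_policy(observation):
    """Placeholder agent: acts randomly rather than learning (kept short on purpose)."""
    return random.choice(["heads", "tails"])


# The agent interacts with its environment and accumulates reward.
# A learning agent would adjust its policy to make this total as large as possible.
env = CoinFlipEnvironment()
total_reward = 0.0
for episode in range(100):
    observation = env.reset()
    done = False
    while not done:
        action = random_policy(observation)
        observation, reward, done = env.step(action)
        total_reward += reward

print("Total reward over 100 rounds:", total_reward)
```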

Researchers do not know if this claim is actually true, because of the wide variety of possible goals. Therefore, it is often referred to as the reward hypothesis.

Sometimes it is easy to pick a reward signal corresponding to a goal. For a chess-playing agent, the reward can be +1 for a win, 0 for a draw, and -1 for a loss. It is less clear how to design a reward signal for a helpful household robotic assistant. Nevertheless, the list of applications where reinforcement learning researchers have been able to design good reward signals is growing.
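For the chess example, the reward signal really is that simple: nothing happens until the game ends, and then the agent receives +1, 0, or -1. Here is a hedged sketch, with a hypothetical representation of the game result; the difficulty in other domains is that no such crisp end-of-game score exists.

```python
def chess_reward(game_over, result):
    """Reward signal for a chess-playing agent, as described above.

    `result` is a hypothetical label for the finished game, from the
    agent's point of view: "win", "draw", or "loss".
    """
    if not game_over:
        return 0.0  # no reward until the game is decided
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[result]
```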

A big success of reinforcement learning was in the board game Go. Researchers thought that Go was much harder than chess for machines to master. The company DeepMind, now Google DeepMind, used reinforcement learning to create AlphaGo. AlphaGo defeated top Go player Lee Sedol in a five-game match in 2016.

A more recent example is the use of reinforcement learning to make chatbots such as ChatGPT more helpful. Reinforcement learning is also being used to improve the reasoning capabilities of chatbots.

Reinforcement learning’s origins

However, none of these successes could have been foreseen in the 1980s. That is when Barto and his then-PhD student Sutton proposed reinforcement learning as a general problem-solving framework. They drew inspiration not only from animal psychology but also from control theory, which uses feedback to influence a system’s behavior, and from optimization, the branch of mathematics that studies how to select the best choice from a range of available options. They provided the research community with mathematical foundations that have stood the test of time. They also created algorithms that have now become standard tools in the field.
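One family of algorithms from this line of work that has become a standard tool is temporal-difference learning, which Sutton introduced: the agent updates its estimate of a state’s long-run value using the gap between what it predicted and what it actually observed one step later. The sketch below is a minimal, hypothetical tabular TD(0) update in Python, written for illustration rather than taken from their book.

```python
from collections import defaultdict

# Minimal sketch of a tabular TD(0) value update (an illustrative assumption
# of what a basic temporal-difference implementation looks like).
value = defaultdict(float)   # estimated long-run value of each state
alpha = 0.1                  # learning rate: how far to move toward new evidence
gamma = 0.99                 # discount factor: how much future reward counts


def td_update(state, reward, next_state):
    """Nudge value[state] toward the one-step bootstrapped target."""
    td_error = reward + gamma * value[next_state] - value[state]
    value[state] += alpha * td_error
    return td_error


# Example: after moving from state "A" to state "B" and receiving reward 1.0,
# the estimate for "A" is pulled toward 1.0 + gamma * value["B"].
print(td_update("A", 1.0, "B"))
```

The td_error quantity in this sketch is the reward prediction error that, as discussed below, later proved useful for interpreting dopamine responses in the brain.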

It is a rare advantage for a field when pioneers take the time to write a textbook. Shining examples like The Nature of the Chemical Bond by Linus Pauling and The Art of Computer Programming by Donald E. Knuth are memorable because they are few and far between. Sutton and Barto’s Reinforcement Learning: An Introduction was first published in 1998. A second edition came out in 2018. Their book has influenced a generation of researchers and has been cited more than 75,000 times.

Reinforcement learning has also had an unexpected impact on neuroscience. The neurotransmitter dopamine plays a key role in reward-driven behaviors in humans and animals. Researchers have used specific algorithms developed in reinforcement learning to explain experimental findings about the dopamine systems of humans and animals.

Barto and Sutton’s foundational work, vision and advocacy have helped reinforcement learning grow. Their work has inspired a large body of research, made an impact on real-world applications, and attracted huge investments by tech companies. Reinforcement learning researchers, I’m sure, will continue to see further ahead by standing on their shoulders.

Ambuj Tewari is a professor of statistics at the University of Michigan.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
