AI safety summit kicks off this week in Seoul

South Korea is set to host a mini-summit this week on risks and regulation of artificial intelligence, following up on an inaugural AI safety meeting in Britain last year that drew a diverse crowd of tech luminaries, researchers and officials.

The gathering in Seoul aims to build on work started at the U.K. meeting on reining in threats posed by cutting-edge artificial intelligence systems.

Here is what you need to know about the AI Seoul Summit and AI safety issues.

What international efforts have been made on AI safety?

The Seoul summit is one of many global efforts to create guardrails for a rapidly advancing technology that promises to transform many aspects of society but has also raised concerns about new risks, ranging from everyday harms such as algorithmic bias that skews search results to potential existential threats to humanity.

At November’s U.K. summit, held at a former secret wartime codebreaking base in Bletchley north of London, researchers, government leaders, tech executives and members of civil society groups, many with opposing views on AI, huddled in closed-door talks. Tesla CEO Elon Musk and OpenAI CEO Sam Altman mingled with politicians like British Prime Minister Rishi Sunak.

Delegates from more than two dozen countries including the U.S. and China signed the Bletchley Declaration, agreeing to work together to contain the potentially “catastrophic” risks posed by galloping advances in artificial intelligence.

In March, the U.N. General Assembly approved its first resolution on artificial intelligence, lending support to an international effort to ensure the powerful new technology benefits all nations, respects human rights and is “safe, secure and trustworthy.”

Earlier this month, the U.S. and China held their first high-level talks on artificial intelligence in Geneva to discuss how to address the risks of the fast-evolving technology and set shared standards to manage it. There, U.S. officials raised concerns about China’s “misuse of AI” while Chinese representatives rebuked the U.S. over “restrictions and pressure” on artificial intelligence, according to their governments.

What will be discussed at the Seoul summit?

The May 21-22 meeting is co-hosted by the South Korean and U.K. governments.

On day one, Tuesday, South Korean President Yoon Suk Yeol and Sunak will meet leaders virtually. A few global industry leaders have been invited to provide updates on how they’ve been fulfilling the commitments made at the Bletchley summit to ensure the safety of their AI models.

On day two, digital ministers will gather for an in-person meeting hosted by South Korean Science Minister Lee Jong-ho and Britain’s Technology Secretary Michelle Donelan. Participants will share best practices and concrete action plans. They also will share ideas on how to protect society from potentially negative impacts of AI on areas such as energy use, workers and the proliferation of mis- and disinformation, according to the organizers.

The meeting has been dubbed a mini virtual summit, serving as an interim meeting until a full-fledged in-person edition that France has pledged to hold.

The digital ministers’ meeting is to include representatives from countries like the United States, China, Germany, France and Spain and companies including ChatGPT-maker OpenAI, Google, Microsoft and Anthropic.

What progress have AI safety efforts made?

The accord reached at the U.K. meeting was light on details and didn’t propose a way to regulate the development of AI.

“The United States and China came to the last summit. But when we look at some principles announced after the meeting, they were similar to what had already been announced after some U.N. and OECD meetings,” said Lee Seong-yeob, a professor at the Graduate School of Management of Technology at Seoul’s Korea University. “There was nothing new.”

It’s important to hold a global summit on AI safety issues, he said, but it will be “considerably difficult” for all participants to reach agreements since each country has different interests and different levels of domestic AI technologies and industries.

The gathering is being held as Meta, OpenAI and Google roll out the latest versions of their AI models.

The original AI Safety Summit was conceived as a venue for hashing out solutions for so-called existential risks posed by the most powerful “foundation models” that underpin general purpose AI systems like ChatGPT.

Pioneering computer scientist Yoshua Bengio, dubbed one of the “godfathers of AI,” was tapped at the U.K. meeting to lead an expert panel tasked with drafting a report on the state of AI safety. An interim version of the report released on Friday to inform discussions in Seoul identified a range of risks posed by general purpose AI, including its malicious use to increase the “scale and sophistication” of frauds and scams, supercharge the spread of disinformation, or create new bioweapons.

Malfunctioning AI systems could spread bias in areas like healthcare, job recruitment and financial lending, while the technology’s potential to automate a wide range of tasks also poses systemic risks to the labor market, the report said.

South Korea hopes to use the Seoul summit to take the initiative in formulating global governance and norms for AI. But some critics say the country lacks AI infrastructure advanced enough to play a leadership role in such governance issues.

—Hyung-Jin Kim and Kelvin Chan, Associated Press

