Biden still won’t hold AI companies’ feet to the fire

This might be the busiest and longest AI-related week to date: The Senate AI forum held two meetings in one day, there was a two-day AI Safety Summit in the U.K., G7 leaders released an AI governance code, and the Biden administration issued a long-awaited executive order on safe, secure, and trustworthy AI. The only thing longer than the week itself is President Joe Biden's 100-plus-page executive order, a document that looks good on paper but falls short on both expectations and enforcement.

Monday's presentation, for all its anticipation and fanfare, was well received by those in the room. Since then, though, many, including me, have questioned the noticeable holes and overreach in the nearly 20,000-word order, particularly its proposed mechanisms for enforcement and implementation. The intent is there, but this latest effort is all speed and no teeth.

Let's start with the topic on everyone's mind: the risks associated with AI. On Monday, Biden said that to realize the promise of AI and avoid its risks, we need to govern the technology. But he doesn't get it quite right. To truly address the risks associated with AI, we need to govern the technology's use cases.

Open-Source vs. Closed-Source AI: We need a combination of the two

By mandating government review of large language models before they can be made public, the executive order calls into question the future legality of open-source AI. Losing the ability to leverage open-source AI would be a missed opportunity.

The debate around open-source software and tools has gone on for decades, with two sides: those who favor open-source AI or software, which the public is free to use and modify without running into licensing issues, and those who prefer closed-source (or proprietary) software, whose source code is not available to the public and therefore can't be modified.

My fear is that the mandates set forth in the order will mean we lose out on the benefits of open-source AI, particularly its speed of innovation, the community best practices it builds, and the lessons it offers. Case in point: Open-source can help reduce bias, since developers can inspect the code and identify potential sources of it.
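To make that concrete, here is a minimal sketch of the kind of audit that open access makes possible: a simple demographic-parity check over a model's decisions. The column names, data, and threshold logic are all hypothetical, invented for illustration; this is one common first-pass fairness check, not a prescribed method.

```python
# A minimal, hypothetical bias audit of the sort open access enables: when the
# model and its outputs are inspectable, anyone can measure whether positive
# outcomes are distributed evenly across groups. All names are illustrative.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the gap between the highest and lowest positive-outcome rates
    across groups, a common first-pass fairness check."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical decisions from a hypothetical model.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions, "group", "approved")
print(f"Demographic parity gap: {gap:.2f}")  # a large gap flags the model for review
```

A check like this is only possible when outsiders can actually run the model and see what it does, which is precisely what a blanket pre-release review regime puts at risk.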

Coincidentally, preventing bias in AI algorithms was a major point raised on Monday. It's true that ungoverned AI algorithms can be harmful and come with many risks, but how will we learn if all AI is closed to the public? Perhaps even more important, what will the impact be on research if scientists and academics are unable to drive innovation and build cutting-edge tools and technologies? The truth is that open-source democratizes AI, and that is important to a secure AI future.

This isn’t to suggest that there should be no closed-source AI. We can and should have a combination of the two. It’s possible for some companies to have proprietary algorithms and datasets while other companies—like Hugging Face—help users build and train machine learning models. According to a report by Gartner, 70% of new and internally developed applications will incorporate AI- or ML-based models by 2025. All the more reason to embrace a combination of open-source and closed-source AI.

And we don’t need to accept the loss of open-source AI in order to properly control the worst implications of AI. Here’s how.

We need the right approach to AI governance: Start with the use case, focus on data

To govern AI, we need to focus enforcement efforts on the use of AI, not the underlying R&D. Why? The risk associated with AI fundamentally depends on what it is used for. AI governance is crucial in mitigating risks and ensuring AI initiatives are transparent, ethical, and trustworthy. Think of it like a system of checks and balances for AI models.

AI governance is a necessary framework that sets the right policies and establishes organizational accountability. Companies should reuse their existing data governance framework as a starting point, which subjects AI models to the necessary data quality, trust, and privacy standards. Using the same blueprint for both data and AI governance ensures that AI is used responsibly, with clear rules and accountability across its development and deployment.
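As a rough illustration of what reusing that blueprint might look like, here is a hypothetical sketch in which one governance record, with owner, quality, and privacy fields, covers both a dataset and the model trained on it. The field and asset names are assumptions made up for this example, not any vendor's actual schema.

```python
# A hypothetical sketch of one governance blueprint covering both data and AI
# assets: the same record of ownership, quality, and privacy standards applies
# whether the asset is a dataset or a model. All names are illustrative.
from dataclasses import dataclass

@dataclass
class GovernanceRecord:
    asset_name: str
    asset_type: str                # "dataset" or "model"
    owner: str                     # the accountable person or team
    quality_checked: bool          # passed the data-quality review
    privacy_reviewed: bool         # passed the privacy review
    approved_use_cases: list[str]  # the uses this asset is cleared for

# The same blueprint governs the training data and the model built on it.
records = [
    GovernanceRecord("customer_transactions", "dataset", "data-team",
                     quality_checked=True, privacy_reviewed=True,
                     approved_use_cases=["fraud-detection"]),
    GovernanceRecord("fraud_classifier_v2", "model", "ml-team",
                     quality_checked=True, privacy_reviewed=True,
                     approved_use_cases=["fraud-detection"]),
]

# Nothing ships unless both reviews have passed, dataset and model alike.
for record in records:
    assert record.quality_checked and record.privacy_reviewed, f"{record.asset_name} blocked"
```

The point of the sketch is the symmetry: the model inherits the same accountability structure the data already has, rather than acquiring a second, parallel bureaucracy.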

Just look at the order's mandates: the creation of new AI governance boards and the requirement that every federal agency appoint a chief AI officer. Also consider the requirement that developers share data and training information before publicly releasing future large AI models or updated versions of those models.

For the U.S. government to get AI governance right, regulators must first insist that organizations get AI governance right. That means defining use cases, identifying and understanding data, documenting models and results, and verifying and monitoring those models. The right approach to AI governance provides best practices to build on and learn from. With that approach, we will all win at AI.
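To ground the last of those four steps, here is a hypothetical sketch of what verifying and monitoring a model can look like in practice: a deployed model's live accuracy is compared against the baseline documented when it was approved, and the model is flagged for review when it drifts too far. The numbers and names are invented for illustration.

```python
# A hypothetical sketch of the "verify and monitor" step: compare a deployed
# model's live metric against the baseline documented at approval time, and
# flag the model for governance review when it drifts. Numbers are illustrative.

DOCUMENTED_BASELINE_ACCURACY = 0.91  # recorded when the model was approved
DRIFT_TOLERANCE = 0.05               # the maximum acceptable drop

def needs_review(live_accuracy: float) -> bool:
    """Flag the model for review if live accuracy has drifted more than the
    tolerated amount below its documented baseline."""
    return (DOCUMENTED_BASELINE_ACCURACY - live_accuracy) > DRIFT_TOLERANCE

print(needs_review(0.89))  # False: within tolerance
print(needs_review(0.84))  # True: flag for governance review
```

Simple as it is, a loop like this is only meaningful if the baseline was documented and the use case defined up front, which is why the four steps belong together.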


Felix Van de Maele is the CEO of Collibra.
