On Wednesday the European Parliament passed the AI Act by a wide majority, with 523 of its 705 members voting in favor. Hailed as the world’s first major effort to regulate AI, the legislation takes a risk-based approach to services or products that use AI. Content recommendation systems, for example, would be deemed low risk and thus subject to relatively lax oversight; medical devices, on the other hand, would face heightened scrutiny around issues like data and transparency.
Brando Benifei, the key legislator who shepherded the law through its passage, heralded the decision in parliament. “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency,” he said.
But others worry that the AI Act is too favorable to industry interests—and fear it sets a bad precedent for other countries.
“For those advocating for human rights, the AI Act is a mixed bag,” says Laura Lazaro Cabrera, counsel and director of the equity and data program at the Center for Democracy and Technology, a digital rights advocacy group. “Whilst we can rightly celebrate that privacy and other human rights are foregrounded in the law, there are too many exemptions that could lead to harmful AI systems posing serious risks to citizens, particularly those in vulnerable situations such as at borders.”
Work on the act began in 2021, before the arrival of ChatGPT, and the law was originally intended to set strict boundaries on how algorithms could be used safely and without discrimination. Following the November 2022 release of ChatGPT, legislators rushed to rewrite the law to encompass generative AI. “It was a piece of product safety legislation, a very particular type of EU regulation, with human rights protections kind of bolted on,” says Daniel Leufer, a senior policy analyst at the digital civil rights non-profit Access Now.
But tech companies, seeing the potential of the booming generative AI industry, quickly lobbied legislators to ensure the rules as outlined weren’t overly restrictive on their growth. Those efforts were largely successful: Time previously reported that OpenAI successfully managed to lobby to change the wording of a past draft of the act.
As regulators shifted their attention to focus on generative AI, Leufer says they took their eyes off the original purpose of the act: to limit the most dangerous uses of AI, including in biometric border control and policing. While there are addendums in the act discussing such usage, advocates believe the law as written falls far short. And simultaneously, a controversial clause inserted into the act—Article 6(3)—lets generative AI developers declare their products to be lower-risk, thereby skirting some of the oversight rules.
Leufer worries that the AI Act has set a new global standard, much like the General Data Protection Regulation (GDPR) did in 2018. But compared to the GDPR, which created a precedent for data protection rules, the AI Act is, in his view, far too lax. “The AI Act from its very inception was already a concession,” he says. “And what we’ve got at the end is really a victory for industry and for law enforcement.”
The process of actually implementing the law still has further stages to go, giving companies time to prepare. And for activists like the Center for Democracy and Technology’s Lazaro Cabrera, that means the fight is far from over. “There’s so much at stake in the implementation of the AI Act,” she says. “As the dust settles, we all face the difficult task of unpacking a complex, lengthy, and unprecedented law.”