An ethics group devoted to artificial intelligence has declared GPT-4 to be “a risk to public safety,” and is urging the U.S. government to investigate its maker OpenAI for endangering consumers.
The Center for AI and Digital Policy (CAIDP) filed its complaint on Thursday with the Federal Trade Commission (FTC), on the heels of an open letter earlier this week calling more broadly for a moratorium on generative AI. Some 1,200 researchers, tech executives, and others in the field signed that letter, including Apple cofounder Steve Wozniak and (somewhat more head-scratchingly) OpenAI cofounder Elon Musk. It argued for a pause of at least six months to give humans a chance to step back and do a cost-benefit analysis of a technology that's developing at breakneck pace and enjoying runaway success.
Marc Rotenberg, president of CAIDP, was among the letter’s signers. And now his own group has piled on by making the case that the FTC should take a hard look at OpenAI’s GPT-4—a product that presents a serious enough liability for OpenAI itself to have recognized its potential for abuse in such categories as “disinformation,” “proliferation of conventional and unconventional weapons,” and “cybersecurity.”
“The Federal Trade Commission has declared that the use of AI should be ‘transparent, explainable, fair, and empirically sound while fostering accountability,’” the complaint says. “OpenAI’s product GPT-4 satisfies none of those requirements,” it adds, before essentially calling the government to arms: “It is time for the FTC to act.”
GPT-4’s alleged risks in CAIDP’s complaint include the potential to produce malicious code, reinforce everything from racial stereotypes to gender discrimination, and expose users’ ChatGPT histories (which has happened once already) and even payment details. It argues that OpenAI has violated the FTC Act’s unfair and deceptive trade practices rules, and that the FTC should also look into GPT-4’s so-called hallucinations—when it falsely and often repeatedly insists a made-up fact is real—because they amount to “deceptive commercial statements and advertising.” CAIDP argues OpenAI released GPT-4 for commercial use “with full knowledge of these risks,” which is why a regulatory response is needed.
To resolve these issues, CAIDP asks the FTC to ban additional commercial deployment of the GPT model, and demand an independent assessment. It also wants the government to create a public reporting tool like the one consumers can use to file fraud complaints.
GPT-4 has attracted a near-messianic following in certain tech circles—a fervor that has probably amplified critics’ urgency to sound the alarm over generative AI’s growing ubiquity in culture. OpenAI’s own conduct has also given those critics ammunition. The company isn’t open source, so its models are a black box, some complain. Others note that where its practices are visible, it’s copying tech’s worst impulses, like using Kenyan laborers who earn less than $2 per hour to make ChatGPT less toxic, or seemingly hiding behind a “research lab” halo to ward off calls for greater scrutiny.
OpenAI seems to have understood these stakes, and even predicted this day would come. For a while now, CEO Sam Altman has been addressing broader fears of AI essentially being let off the leash, admitting that “current generation AI tools aren’t very scary,” but that we’re “not that far away from potentially scary ones.” He has acknowledged that “regulation will be critical.”
Meanwhile, Mira Murati, who as CTO leads the strategy behind how to test OpenAI’s tools in public, told Fast Company when asked about GPT-4 right before its launch: “I think less hype would be good.”