The internet loves ChatGPT, but there’s a dark side to the tech

Amid the usual doom and gloom that surrounds the internet these days, the world experienced an all-too-rare moment of joy over the past week: the arrival of a new artificial intelligence chatbot, ChatGPT.

The AI-powered chat tool, which takes pretty much any prompt a user throws at it and produces what they ask for, whether code or text, was launched by the team at AI development company OpenAI on November 30; by December 5, more than one million users had tested it out. The model comes hot on the heels of other generative AI tools that take text prompts and spit out polished work, a wave that has swept social media in recent months, but its jack-of-all-trades ability makes it stand out from the crowd.

The chatbot is free to use, though OpenAI CEO Sam Altman expects that will change in the future, and users have embraced the tech wholeheartedly. People have been using ChatGPT to run a virtual Linux machine, answer coding queries, develop business plans, write song lyrics, and even pen Shakespearean verse.

Yet for all the brouhaha, there are some important caveats to note. The system may seem too good to be true, in part because at times it is. While some have professed that there’s no need to learn to code because ChatGPT can do it for you, programming Q&A site Stack Overflow has temporarily banned answers generated by the chatbot because of their poor quality. “The posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking or looking for correct answers,” the site’s moderators say.

It’s also plagued by the same issues many chatbots have: It reflects society, including society’s biases. Computational scientist Steven T. Piantadosi, who heads the computation and language lab at UC Berkeley, has highlighted in a Twitter thread a number of issues with ChatGPT, where the AI turned up results suggesting that “good scientists” are white or Asian men, and that African American men’s lives should not be saved. Another query prompted ChatGPT to entertain the idea that people with different brain sizes are more or less valuable.

[Embedded tweet from Steven T. Piantadosi (@spiantado), December 4, 2022]

OpenAI did not respond to a request for comment for this story. Altman, in response to Piantadosi’s Twitter thread highlighting serious incidents of his chatbot promoting racist beliefs, asked the computational scientist to “please hit the thumbs down on these and help us improve!”

“With these kinds of chatbot models, if you search for certain toxic, offensive queries, you’re likely to get toxic responses,” says Yang Zhang, a faculty member at the CISPA Helmholtz Center for Information Security, who coauthored a September 2022 paper examining how chatbots (not including ChatGPT) turn nasty. “More importantly, if you search some innocent questions that aren’t that toxic, there’s still a chance that it will give a toxic response.”

The reason is the same one that nobbles every chatbot: The data it uses to generate its responses is sourced from the internet, and folks online are plenty hostile. Zhang says that chatbot developers ought to produce the worst-case scenario they can think of for their models as part of the development process, and then use that scenario to propose defense mechanisms to make the model safer. (A ChatGPT FAQ says: “We’ve made efforts to make the model refuse inappropriate requests.”) “We should also make the public aware that such models have a potential risk factor,” says Zhang.

The issue is that people often get caught up in incredulity at the prowess of the models’ output. ChatGPT appears to be streets ahead of its competitors, with some already saying that it spells the death not just of Google’s chat models, but of the search engine itself, so accurate are the model’s answers to some questions.

How the model has been trained is another conundrum, says Catalina Goanta, associate professor in private law and technology at Utrecht University. “Because of the very big computational power of these models, and the fact that they rely on all of this data that we cannot map, of course, a lot of ethical questions arise,” she says. The challenge is acknowledging the benefits that come from such powerful AI-powered chatbots while also ensuring there are sensible guardrails on their development.

That’s difficult to think about in the first flourish of social media hype, but it’s important to do so. “I think we need to do more research to understand what are the case studies where it should be fair game to use such very large language models, as is the case with ChatGPT,” says Goanta, “and then where we have certain types of industries or situations where it should be forbidden to have that.”

https://www.fastcompany.com/90820090/the-internet-loves-chatgpt-but-theres-a-dark-side-to-the-tech?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published December 8, 2022, 02:21:49
