The generative AI bill is coming due, and it’s not cheap

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.


As AI developers try to commercialize and monetize their models, their customers are coming to grips with the fact that the technology is expensive. The up-front costs of developing AI models are significantly higher than those associated with developing traditional software. Developing large AI models requires highly talented (and highly paid) researchers. Training the models requires lots of expensive (often Nvidia) servers. And, increasingly, AI developers will have to pay for the text, image, and knowledge base data used to train models. SemiAnalysis analyst Dylan Patel estimated that running ChatGPT costs OpenAI about $700,000 a day, for example. A recent Reuters report says that in the first few months of 2023, Microsoft was losing about $20 per user on GitHub Copilot, its AI coding assistant, for which users pay $10 per month.

As the developers try to commercialize their models, those high costs must eventually be passed on to customers. The prices of the first available AI products for enterprises are already getting attention. Both Microsoft and Google have announced that they will charge $30 per user for their respective AI assistants within their productivity suites. That’s on top of the license costs customers already pay. Enterprises can also access large language models from companies like OpenAI, Anthropic, and Cohere by calling them via an application programming interface (API). The cost per call, billed on both the input sent to the model and the output it generates, can add up quickly.
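To see how per-call costs compound, here is a back-of-envelope sketch. The function and the per-token prices are illustrative assumptions, not quotes from any vendor; LLM APIs typically bill prompt (input) and completion (output) tokens at separate rates.

```python
def monthly_api_cost(calls_per_day, input_tokens, output_tokens,
                     price_in_per_1k, price_out_per_1k, days=30):
    """Estimate monthly LLM API spend, billing input and output tokens separately."""
    per_call = (input_tokens / 1000) * price_in_per_1k \
             + (output_tokens / 1000) * price_out_per_1k
    return calls_per_day * per_call * days

# Hypothetical workload: 10,000 calls a day, each with a 500-token prompt
# and a 300-token response, at assumed rates of $0.0015 (input) and
# $0.002 (output) per 1,000 tokens.
cost = monthly_api_cost(10_000, 500, 300, 0.0015, 0.002)
print(f"${cost:,.2f} per month")  # → $405.00 per month
```

Even at fractions of a cent per call, a modest enterprise workload lands in the hundreds of dollars a month, and scales linearly with traffic.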

For its part, OpenAI seems to be making a successful business out of selling subscriptions to ChatGPT and selling API access to its GPT-3.5 Turbo and GPT-4 LLMs. Bloomberg reported in late August that the company is making $80 million per month, putting it on track for $1 billion in revenue for 2023. In 2022, the company lost $540 million during the development of ChatGPT and GPT-4, The Information reported.

But the economics described above apply to the commercialization of huge, general-purpose models that are designed to do everything from summarizing long emails to writing computer code to discovering new cancer drugs. OpenAI, for example, explains that it’s trying to offer enterprises a generalized “intelligence layer” that can be used across business functions and knowledge areas. But that’s not the only approach. Many in the open-source community believe that enterprises can build and use a number of smaller, more specialized models that are cheaper to train and operate.

Clem Delangue, CEO of the popular open-source model sharing platform Hugging Face, tweeted Tuesday: “My prediction: in 2024, most companies will realize that smaller, cheaper, more specialized models make more sense for 99% of AI use-cases. The current market & usage is fooled by companies sponsoring the cost of training and running big models (especially with cloud incentives).”

AI disinformation in 2024: New details about what U.S. voters might encounter next year 

Senator Mark Warner, one of the smartest members of Congress when it comes to AI, fears AI-generated disinformation could wreak havoc during election season next year.  “[Russia’s actions were] child’s play, compared to what either domestic or foreign AI tools could do to completely screw up our elections,” he told Axios.

A new study from Freedom House puts some facts behind the fear. The researchers found that generative AI has already been used in at least 16 countries to “sow doubt, smear opponents, or influence public debate.” Surprisingly, the two most recent examples of widely distributed AI-generated disinformation were audio. Politico notes that right-wing operatives released fake audio clips depicting the voice of a liberal candidate talking about plans to rig the election and raise the price of beer. And Poland’s centrist opposition party used AI-generated audio clips mimicking the country’s right-wing prime minister in a series of attack ads.

Generative AI tools, including image, text, audio, and even meme generators, have quickly become more available, more affordable, and easier to operate over the past few years. And social media platforms such as X and Facebook provide ready distribution networks that can reach millions of people very quickly. To make matters worse, the U.S. and many other countries have no binding regulations requiring that the developers and users of these tools make clear that their output is AI-generated.

Americans want the government to develop its own AI braintrust, not rely on big tech, consulting firms

New polling on AI policy from the Vanderbilt Policy Accelerator finds that most people want the government to develop its own braintrust for regulating AI, and for deciding how federal agencies should use the technology. The government has traditionally relied on tech companies and consulting firms for the technical expertise needed for these things, but much of the public seems to believe that the stakes of AI regulation are too high to allow tech companies to define regulation, or regulate themselves. More than three-quarters (77%) of the thousand-plus surveyed support the creation of a dedicated team of government AI experts to improve public services and advise regulators. But that number dropped to 62% when confronted with the argument that such a team of experts might amount to “big government.”



Published Oct. 11, 2023, 18:40:05


