Nvidia on Monday showed a new artificial intelligence model for generating music and audio that can modify voices and generate novel sounds — technology aimed at the producers of music, films and video games.
Nvidia, the world’s biggest supplier of chips and software used to create AI systems, said it does not have immediate plans to publicly release the technology, which it calls Fugatto, short for Foundational Generative Audio Transformer Opus 1.
It joins other technologies shown by startups such as Runway and larger players such as Meta Platforms that can generate audio or video from a text prompt.
Santa Clara, California-based Nvidia’s version generates sound effects and music from a text description, including novel sounds such as making a trumpet bark like a dog.
What makes it different from other AI technologies is its ability to take in and modify existing audio, for example by taking a line played on a piano and transforming it into a line sung by a human voice, or by taking a spoken word recording and changing the accent used and the mood expressed.
“If we think about synthetic audio over the past 50 years, music sounds different now because of computers, because of synthesizers,” said Bryan Catanzaro, vice president of applied deep learning research at Nvidia. “I think that generative AI is going to bring new capabilities to music, to video games and to ordinary folks that want to create things.”
While companies such as OpenAI are negotiating with Hollywood studios over whether and how the technology could be used in the entertainment industry, the relationship between the tech industry and Hollywood has grown tense, particularly after Hollywood star Scarlett Johansson accused OpenAI of imitating her voice.
Nvidia’s new model was trained on open-source data, and the company said it is still debating whether and how to release it publicly.
“Any generative technology always carries some risks, because people might use that to generate things that we would prefer they don’t,” Catanzaro said. “We need to be careful about that, which is why we don’t have immediate plans to release this.”
Creators of generative AI models have yet to determine how to prevent abuse of the technology, such as users generating misinformation or infringing copyright by producing protected characters.
OpenAI and Meta similarly have not said when they plan to publicly release their models that generate audio or video.
—Stephen Nellis, Reuters