Nvidia on Monday showed off a new artificial intelligence model for generating music and audio that can modify voices and generate novel sounds — technology aimed at producers of music, films, and video games.
Nvidia, the world’s biggest supplier of chips and software used to create AI systems, said it does not have immediate plans to publicly release the technology, which it calls Fugatto, short for Foundational Generative Audio Transformer Opus 1.
It joins other technologies shown by startups such as Runway and larger players such as Meta Platforms that can generate audio or video from a text prompt.
Santa Clara, California-based Nvidia’s version generates sound effects and music from a text description, including novel sounds such as making a trumpet bark like a dog.
What sets it apart from other AI technologies is its ability to take in and modify existing audio: for example, transforming a melody played on a piano into a line sung by a human voice, or changing the accent and mood of a spoken-word recording.
“If we think about synthetic audio over the past 50 years, music sounds different now because of computers, because of synthesizers,” said Bryan Catanzaro, vice president of applied deep learning research at Nvidia. “I think that generative AI is going to bring new capabilities to music, to video games and to ordinary folks that want to create things.”
While companies such as OpenAI are negotiating with Hollywood studios over whether and how AI could be used in the entertainment industry, the relationship between tech and Hollywood has grown tense, particularly after Hollywood star Scarlett Johansson accused OpenAI of imitating her voice.
Nvidia’s new model was trained on open-source data, and the company said it is still debating whether and how to release it publicly.
“Any generative technology always carries some risks, because people might use that to generate things that we would prefer they don’t,” Catanzaro said. “We need to be careful about that, which is why we don’t have immediate plans to release this.”
Creators of generative AI models have yet to determine how to prevent abuse of the technology, such as users generating misinformation or infringing copyrights by producing protected characters.
OpenAI and Meta similarly have not said when they plan to release their audio- and video-generating models to the public.
—Stephen Nellis, Reuters