Meta said Thursday it has figured out how to teach an AI to create original video content based on text input. The result is a still-in-development generative tool called “Make-A-Video.” You might type in “make a video of a dog riding a horse,” and voilà: the AI generates that video. The system can also create videos based on images or other videos fed into it.
That could be great, even revolutionary, for content creators—but also a boon for misinformation artists.
Already, deepfake videos present a potentially serious problem. While there are few reports of them influencing the public in a malicious way, the technology is ready and available. DARPA (Defense Advanced Research Projects Agency) even has a program dedicated to detecting them.
“What [Meta] did was put deepfakes using AI generated images on steroids,” says Wael Abd-Almageed, an engineering professor at the University of Southern California. “So they’re not just creating a deepfake that changes what somebody said or did; they are going to create a complete video of something that’s not grounded in reality at all.”
Abd-Almageed says that, in the worst-case scenario, text-to-video generators could get someone thrown in jail by creating a video showing them committing a crime or doing something pernicious. He says Meta, and anybody else who creates generative text-to-video AI tools, has an obligation to embed a highly visible watermark in the video, to make clear to any viewer that it was a creation of AI.
Abd-Almageed adds that AI video generation tools should be designed such that if someone removes the watermark or the metadata from a video, the video would self-destruct.
Meta spokesperson Natalie Hereth says that the videos created by the Meta tool will indeed show a watermark, but she concedes that the watermark could probably be removed. If Meta cannot find a way to retain the watermark, or have the video self-destruct when it’s removed, Abd-Almageed says the company has an obligation to never release the tool to the public.
Meta is quick to say that the technology is still in the research phase and “not anywhere close” to being available to the public. “It’s an exciting breakthrough and it’s a very hard technical challenge,” Hereth says, adding that it’s important to Meta to get the research into the AI community so it can be discussed openly and further developed.
But Meta went out to the media with the news of its breakthrough; CEO Mark Zuckerberg even made a statement about it. As to why Meta wants the public to know about the technology now, Hereth explains: “The vision eventually is that you can think of creators and artists who post a lot on our platforms using it.” She says Meta is interested in “unlocking that creative opportunity.”
It was inevitable that AI researchers somewhere would push the state of the art past still images to video. AI still-image generators like OpenAI’s DALL-E and Stability.ai’s Stable Diffusion emerged this year and are already available to the public. These tools are impressive because they can create images that are photorealistic and hard to distinguish from camera-shot images. It’s likely that Make-A-Video, or some future iteration of it, will create video that’s hard to distinguish from real life. Meta, which employs hundreds of AI researchers—including one of the godfathers of AI, Yann LeCun—saw it as important to be first with the breakthrough (just as OpenAI saw it important to be first with text-to-image generators).
But the company, which has been up to its ears in misinformation controversies, may be way out over its skis. Meta has struggled to get its AI systems up to the massive task of protecting its 3 billion users from dangerous misinformation and disinformation, which can show up anywhere, anytime on its platforms. For instance, its content-moderation AI has trouble recognizing memes that are designed, through a careful combination of image and word, to mislead.
Now the company is leading the charge to develop another potentially dangerous mode of communication: live-action video. Will its own content-moderation AI be able to detect an AI-created video (perhaps one with the watermark removed) on its platforms before it’s widely viewed and shared?