This new Adobe tool will help you spot manipulated images

In the photo, Beyoncé looks beatific, with a closed-lip Mona Lisa smile. But it’s easy enough to give her a toothy grin. Just dial up her “Happiness” to the maximum level using Adobe Photoshop’s Smart Portrait tool, and her face gets a Cheshire cat-like smile, white teeth appearing out of thin air. Smart Portrait, released in beta last year, is one of Adobe’s AI-powered “neural filters,” which can age faces, change expressions, and alter the background of a photo so it appears to have been taken at a different time of year.

These tools may seem innocuous, but they provide increasingly powerful ways to manipulate photos in an era when altered media spreads across social media in dangerous ways. For Adobe, this is both a big business and a big liability.

The company—which brought in $12.9 billion in 2020, with more than $7.8 billion tied to Creative Cloud products aimed at helping creators design, edit, and customize images and video—is committed to offering users the latest technologies, which keeps Adobe ahead of its competition. This includes both neural filters and older AI-powered tools, such as 2015’s Face-Aware Liquify, which lets people manually alter someone’s face.

Adobe executives are aware of the perils of such products at a time when fake information spreads on Twitter six times faster than the truth. But instead of limiting the development of its tools, Adobe is focused on the other side of the equation: giving people the ability to verify where images were taken and see how they’ve been edited. Step one: a new Photoshop tool and website that offer unprecedented transparency into how images are manipulated.

Adobe has been exploring the edge of acceptable media editing for a while now. During the company’s annual Max conference in 2016, it offered a sneak peek of a tool that allowed users to change words in a voice-over simply by typing new ones. It was a thrilling—and terrifying—demonstration of how artificial intelligence could literally put words into someone’s mouth. A backlash erupted around how it might embolden deepfakes, and the company shelved the tool.

Two years later, when Adobe again used Max to preview cutting-edge AI technologies—including a feature that turns still photos into videos and a host of tools for video editing—Dana Rao, its new general counsel, was watching closely. After the presentation, he sought out chief product officer Scott Belsky to discuss the repercussions of releasing these capabilities into the world. They decided to take action.

Rao, who now leads the company’s AI ethics committee, teamed up with Gavin Miller, the head of Adobe Research, to find a technical solution. Initially, they pursued ways to identify when one of Adobe’s AI tools had been used on an image, but they soon realized that these kinds of detection algorithms would never be able to catch up with the latest manipulation technologies. Instead, they sought out a way to show when and where images were taken—and turn editing history into metadata that could be attached to images.

The result is the new Content Credentials feature, which went into public beta this October. Users can turn on the feature and embed their images with their identification information and a simplified record of edits that notes which of the company’s tools have been used. Once an image is exported out of Photoshop, it maintains this metadata, all of which can be viewed by anyone online through a new Adobe website called Verify.
Simply upload any JPEG, and if it’s been edited with Content Credentials turned on, Verify will show you its metadata and editing history, as well as before-and-after images.

Content Credentials is part of a larger effort by both tech and media companies to combat the spread of fake information by providing more transparency around where images come from online. An industry consortium called the Coalition for Content Provenance and Authenticity (C2PA), which includes Adobe, Microsoft, Twitter, and the BBC, recently created a set of standards for how to establish content authenticity, which are reflected in Adobe’s new tool. Members of the group are also backing a bill in the U.S. Senate that would create a Deepfake Task Force under the purview of the Secretary of Homeland Security.

But while Adobe has thrown its weight behind this fledgling ecosystem of companies championing image provenance technologies, it also continues to release features that make it increasingly easy to alter reality. It’s the accessibility of such tools that troubles researchers. “Until recently . . . you needed to be someone like Steven Spielberg” to make convincing fake media, says University of Michigan assistant professor Andrew Owens, who has collaborated with Adobe on trying to detect fake images. “What’s most worrisome about recent advances in computer vision is that they’re commoditizing the process.”

For content provenance technologies to become widely accepted, they need buy-in from camera-app makers, editing-software companies, and social media platforms. For Hany Farid, a professor at the University of California, Berkeley, who has studied image manipulation for two decades, Adobe and its partners have taken the first steps, but now it’s up to platforms like Facebook to prioritize content that has C2PA-standardized metadata attached. “You don’t want to get in the business of [saying] ‘This is true or false,’” Farid says.
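The C2PA specification that Content Credentials follows stores provenance manifests inside a JPEG’s APP11 marker segments as JUMBF boxes, so a file’s credentials travel with it when it is exported or re-uploaded. As an illustrative sketch only—not Adobe’s implementation, and not a full manifest parser—the following Python function walks a JPEG’s marker segments and returns the raw payload of any APP11 segments, which is where a C2PA-aware tool would look before decoding the manifest itself:

```python
import struct

def find_c2pa_segments(jpeg_bytes: bytes):
    """Scan a JPEG's marker segments and return the payloads of any
    APP11 (0xFFEB) segments, where C2PA/JUMBF provenance data lives.

    This is a minimal sketch: it stops at the start-of-scan marker and
    does not reassemble multi-segment manifests or parse JUMBF boxes.
    """
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    payloads = []
    i = 2  # skip the two-byte SOI marker
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream; stop scanning
        marker = jpeg_bytes[i + 1]
        if marker in (0xD9, 0xDA):
            break  # EOI or start of entropy-coded scan data
        # Every remaining marker segment carries a big-endian length
        # that counts the length field itself plus the payload.
        length = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])[0]
        segment = jpeg_bytes[i + 4:i + 2 + length]
        if marker == 0xEB:  # APP11: JUMBF / C2PA manifest container
            payloads.append(segment)
        i += 2 + length
    return payloads
```

A real verifier such as Adobe’s Verify site would go further: reassembling the JUMBF boxes, decoding the manifest, and checking its cryptographic signatures against the claimed editing history.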
“The best [Adobe] can do is to arm people—the average citizen, investigators—with information. And we use that as a launchpad for what comes next: to regain some trust online.”

Three other efforts to authenticate images before they’re released into the wild:

Truepic: Content provenance company Truepic recently announced an SDK that will allow any app with a camera to embed images and videos with verified metadata.

Starling Lab: A project between Stanford and the USC Shoah Foundation, this lab uses cryptography and decentralized web protocols to capture, store, and verify images and video.

Project Origin: Microsoft and the BBC joined forces in 2020 to help people understand whether images and videos have been manipulated.

https://www.fastcompany.com/90686493/adobe-is-releasing-a-set-of-tools-to-identify-manipulated-images?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published Oct 28, 2021, 15:21:23


