How data scientist Rebecca Portnoff is pushing AI companies to tackle child abuse head on

As she was finishing up her undergraduate degree at Princeton University, Rebecca Portnoff was trying to figure out what came next: more school, or a job at a tech company?

She studied computer science, writing her dissertation on natural language processing. That was over a decade ago, when artificial intelligence wasn’t the buzzword it is today but still held plenty of promise and excitement for those working with it.

Around the same time, she picked up a copy of Half the Sky, a book by Nicholas Kristof and Sheryl WuDunn about human rights abuses against women across the world. The book, recommended by her sister, ended up leading her to the groundbreaking path she’s on today at the nonprofit Thorn.

“I decided that I wanted to make an impact in this space, but didn’t really know what that looked like as someone with a machine learning and computer science background, and figured I would have a better chance of answering that question as a graduate student than working full-time at a tech company,” Portnoff tells Fast Company.

Portnoff completed her PhD at UC Berkeley, where she spent her time learning about the impacts of child sexual abuse and the efforts in place to combat it.

Fast-forward to today: Portnoff is the vice president of data science at Thorn, a nonprofit cofounded by Demi Moore and Ashton Kutcher that uses tech to fight child exploitation. Her team uses machine learning and artificial intelligence to identify victims, stop revictimization, and prevent abuse from occurring in the first place.

For all the ways tech can fight child sexual abuse, it can also amplify it. Bad actors could, for example, use generative AI to create realistic child sexual abuse material. To prevent that kind of misuse, Portnoff is leading an initiative with the nonprofit All Tech Is Human that works with tech giants to put new safety measures in place. She also led Thorn and All Tech Is Human’s Safety by Design initiative last year, which encourages tech companies to build AI that combats child sexual abuse from the start, rather than retrofitting the technology once issues arise.

Amazon, Anthropic, Google, Meta, OpenAI, Microsoft, and a handful of other companies have pledged to adopt Safety by Design principles as part of the project. For example, OpenAI integrated part of the tech into its DALL-E 2 generative AI web app. 

“As far as where things need to go, or where things will be headed with the Safety by Design work and preventing the misuse of some of this, I know that there are days where I feel really hopeful with how the ecosystem has moved to try to mitigate this,” Portnoff says. “And there are also days where I feel it seems like we haven’t moved fast enough. At the end of the day there are going to be companies and developers that work to prevent this misuse, and there will be those that do not, and so there is going to need to be legislation that comes into play when it comes to bringing along that full ecosystem.”

This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.

