Most U.S.-based companies have no idea how to mitigate AI risk. Credo AI wants to change that

Companies are at a crossroads when it comes to AI adoption: Either embrace the technology—along with all of its flaws, unknowns, and alarming capability to spread disinformation—or risk falling into obsolescence. 

Navrina Singh, founder and CEO of AI governance platform Credo AI, told attendees of Fast Company’s Impact Council annual meeting earlier this month that we have entered a “state of reinvention.” Adopting and embracing the opportunities artificial intelligence promises is no longer optional for companies; it’s essential to their survival and success. It’s also crucial for businesses to understand the risks the technology poses to their organization.

“It’s really important to think about this lens of how is trust going to be built for responsible practices, rather than just trying to give into the sphere of regulations?” Singh said. 

[Photo: Alyssa Ringler for Fast Company]

Understanding the risks

Singh, who founded Credo AI in 2020, was working in the robotics industry around 2010 when machine learning began to hit its stride. While companies were understandably bullish about the technology’s capabilities, Singh was concerned by the lack of discussion around potential dangers.

“My daughter was born 10 years ago, and I was seeing these very, very powerful AI systems evolve, I would say, as quickly as human brains. And there was this realization that as engineers, we don’t take responsibility,” Singh said. “We are just excited by the thrill of innovation and we are excited by putting our technology out in the market and making a lot of money, which is great. But now, we can’t take [that] chance on AI.” 

Credo AI helps businesses understand what risks the technology poses to their organization, how to mitigate those risks, and how to ensure companies are in compliance with government standards. Singh said the company has partnered with both the European Commission, the politically independent executive arm of the European Union, and the Biden Administration to advise both institutions on rights-based and risk-based regulations. 

In Europe, where the EU AI Act passed in March, Singh said there’s an understanding that new technology allows for progress. At the same time, companies at the forefront of the AI revolution are not only ensuring compliance with current and future government standards, but also prioritizing the rights of users and cultivating a sense of trust. 

“In order to enable innovation in Europe, they’re going to put European citizens front and center, and the rights of those citizens front and center,” she said. In the U.S., the path to regulation has proven more complex because oversight has largely emerged at the state level rather than the federal one. 

[Photo: Alyssa Ringler for Fast Company]

Developing AI literacy

Although there’s been a lack of concrete federal regulation around AI in the U.S., the Biden Administration issued an executive order in October 2023, which included a mandate that agencies hire a chief artificial intelligence officer. Singh said that at this point, most officers have been hired or are in the process of being recruited. 

While it’s important to have a chief AI officer at the helm, Singh stressed the need for AI proficiency and literacy across job titles. 

“We really need a multi-stakeholder oversight mechanism for artificial intelligence,” she said. “What we are seeing is if you just put in AI experts as the stakeholders of managing oversight, they are going to be so far removed from the business outcomes like reputational damage, regulatory risk, impact, [and] mission.”

Acting, not reacting

According to Singh, the U.S. has fallen behind on AI literacy because of this lack of government oversight and the treatment of regulation as an afterthought. When companies that are less technologically advanced outsource their AI adoption to third-party providers, that’s where the risk comes in. 

Singh argued that when companies employ technology like ChatGPT, they need to ask themselves what the risk implications are, which could range from chatbots producing hallucinations to live agents lacking an understanding of how the adoption of AI will affect their roles. Without a standard approach to risk management, companies are forced into reactionary positions. 

“Governance needs to be front and center,” Singh said. “The organizations who are able to tackle that very proactively have a very good sense of where true artificial intelligence or generative AI is actually used in their organization.”

https://www.fastcompany.com/91137361/most-u-s-based-companies-have-no-idea-how-to-mitigate-ai-risk-credo-ai-wants-to-change-that?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 8mo | Jun 21, 2024, 11:20:04 AM
