Companies are at a crossroads when it comes to AI adoption: either embrace the technology—along with all of its flaws, unknowns, and alarming capacity to spread disinformation—or risk falling into obsolescence.
Navrina Singh, founder and CEO of AI governance platform Credo AI, told attendees of Fast Company’s Impact Council annual meeting earlier this month that we have entered a “state of reinvention.” Adopting and embracing the opportunities artificial intelligence promises is no longer optional for companies; it’s essential to their survival and success. It’s also crucial for businesses to understand the risks the technology poses to their organizations.
“It’s really important to think about this lens of how is trust going to be built for responsible practices, rather than just trying to give into the sphere of regulations?” Singh said.
Understanding the risks
Singh, who founded Credo AI in 2020, was working in the robotics industry around 2010 when machine learning began to hit its stride. While companies were understandably bullish about the technology’s capabilities, Singh was concerned by the lack of discussion around potential dangers.
“My daughter was born 10 years ago, and I was seeing these very, very powerful AI systems evolve, I would say, as quickly as human brains. And there was this realization that as engineers, we don’t take responsibility,” Singh said. “We are just excited by the thrill of innovation and we are excited by putting our technology out in the market and making a lot of money, which is great. But now, we can’t take [that] chance on AI.”
Credo AI helps businesses understand what risks the technology poses to their organization, how to mitigate those risks, and how to ensure companies are in compliance with government standards. Singh said the company has partnered with both the European Commission, the politically independent executive arm of the European Union, and the Biden Administration to advise both institutions on rights-based and risk-based regulations.
In Europe, where the EU AI Act passed in March, Singh said there’s an understanding that new technology allows for progress. At the same time, companies at the forefront of the AI revolution are not only ensuring compliance with current and future government standards, but also prioritizing the rights of users and cultivating a sense of trust.
“In order to enable innovation in Europe, they’re going to put European citizens front and center, and the rights of those citizens front and center,” she said. In the U.S., the path to regulation has proven more complex because the approach has been state-level rather than federal.
Developing AI literacy
Although there’s been a lack of concrete federal regulation around AI in the U.S., the Biden Administration issued an executive order in October 2023, which included a mandate that agencies hire a chief artificial intelligence officer. Singh said that at this point, most officers have been hired or are in the process of being recruited.
While it’s important to have a chief AI officer at the helm, Singh stressed the need for AI proficiency and literacy across job titles.
“We really need a multi-stakeholder oversight mechanism for artificial intelligence,” she said. “What we are seeing is if you just put in AI experts as the stakeholders of managing oversight, they are going to be so far removed from the business outcomes like reputational damage, regulatory risk, impact, [and] mission.”
Acting, not reacting
According to Singh, the U.S. has fallen behind on AI literacy because of this lack of government oversight and the treatment of regulation as an afterthought. The risk comes in when companies that are less technologically advanced outsource their AI adoption to third parties.
Singh argued that when companies employ technology like ChatGPT, they need to ask themselves what the risk implications are, which could range from chatbots producing hallucinations to live agents lacking an understanding of how the adoption of AI will impact their roles. Without a standard approach to risk management, companies are forced into reactionary positions.
“Governance needs to be front and center,” Singh said. “The organizations who are able to tackle that very proactively have a very good sense of where true artificial intelligence or generative AI is actually used in their organization.”