As companies rush to join the AI race and harness the technology’s potential to shape the future, a critical question looms large: Can ethics keep up with innovation?
While I’m particularly excited about generative AI’s power to transform creativity and productivity everywhere, I recognize that without appropriate oversight, this potentially revolutionary technology can also present real threats and hard challenges. As AI becomes a cornerstone of innovation across industries, it’s increasingly clear that companies have a duty to mitigate the risks it could pose. The journey toward responsible innovation must begin inside the companies driving that innovation, and it calls for deep and wide-ranging collaboration from diverse stakeholders across the business.
One highly promising development is the growing embrace of an AI ethics review process, helmed by an AI ethics review board. Many companies establish these boards to ensure new products and technologies are developed responsibly and to safeguard against potential harm. When done well, a robust AI ethics review board and process can serve as indispensable tools for companies aiming to take a comprehensive approach to responsible innovation. Still, work remains, especially regarding the efficacy of AI ethics review boards in guiding the development of AI-powered products and features.
Many companies wrongly center their AI ethics review process on a stagnant board, channeling the company’s responsible innovation decisions to a single entity. This approach creates bottlenecks that slow the pace of innovation and relies on perfunctory guardrails that serve only an abstract greater good. Other AI review boards act as strict regulatory bodies, setting rigid guidelines and boundaries around what is off limits for product and feature development, inhibiting discovery and stifling creativity from the ideation stage of the product lifecycle.
In the beginning, the AI review board should drive the design and execution of the company’s approach to an ethics review process. Once these norms are established, an AI ethics team can help operationalize the charter the board has set. At Adobe, we’ve seen that a diverse AI ethics review process, armed with shared principles and tactical approaches to thoughtful innovation, can be a powerful catalyst for innovation. Further, a dedicated AI ethics team can empower teams to explore new frontiers without compromising ethical standards.
The role of an AI ethics board: guiding with precision
With the rise of generative AI comes an immense opportunity to amplify human creativity, not replace or diminish it. We believe AI review boards can further this opportunity when they serve as guardians of innovation, not gatekeepers. The focus of an AI review board should not be on what companies can or cannot do with AI technology, but on figuring out how best to assess and mitigate risks so that internal teams can pursue their boldest dreams in the most ethical way.
To evaluate risk, Adobe created an AI Ethics Impact Assessment for new AI-powered features, led by our AI ethics team and designed to identify features and products that could perpetuate harmful biases and stereotypes. In most cases, products show no major ethical concern and meet our standards for approval. If the assessment shows a higher potential ethical impact, further technical review is needed, including a presentation to the AI review board, which can ask questions and suggest improvements to mitigate a product’s risks. With a robust assessment process in place to guide responsible innovation from the outset, presentations to the AI review board should be rare and reserved for the most challenging issues.
Championing a risk-based approach rather than a constrained one creates a playground for innovation. When teams are confident that an AI ethics team will support them in managing potential risks, it builds a trust-based culture that encourages stakeholders across the business—from senior leaders to product developers to marketers—to view ethics as a crucial component of their creative process. This allows boards to forgo granular reviews and concentrate on assessing the highest-risk technologies without slowing down the pace of innovation.
Diverse outcomes necessitate diverse perspectives
AI systems are only as inclusive as the data they are trained on, and the same principle applies to the diversity of the people overseeing the development of AI technologies. Diversity within AI review is paramount, as it ensures a wide array of perspectives in risk assessment and decision-making. As companies scale and introduce new products and features, adjustments to the AI review process become necessary. To keep pace, the board should periodically refresh its membership to bring in fresh perspectives aligned with the evolving needs of the business.
For instance, during the initial stages of forming your AI ethics review board and assessment process, prioritizing technical expertise is crucial for in-depth discussions on product advancement. This is not a nice-to-have; it’s a must-have for ensuring that ethics are baked into product development from the outset. As products move toward market readiness, it’s important to expand the pool of reviewers to encompass diverse viewpoints from across the company, including marketing, legal, and HR. This ensures comprehensive consideration of the implications for end users and employees alike.
Incorporating members who embody not only diversity of ethnicity, gender, and sexual orientation, but also diversity of thought and professional experience, is critical for identifying potential issues in a product or feature.
Establishing sustainable practices for innovation
It is essential that companies ground their review and oversight process for products with significant ethical stakes in shared organizational principles. Leveraging AI ethics assessments enables companies to fully harness the power of AI while cultivating and upholding trust among both employees and customers. Establishing guiding principles for ethical innovation provides a robust and unwavering foundation for boards as they navigate complex decisions and assess potential risks. When ethics are ingrained in design principles, responsible innovation becomes a natural part of a company’s DNA.
Our review process is firmly rooted in the principles of accountability, responsibility, and transparency, which we established in 2019. These principles are ingrained in every stage of product development, from design to deployment, including training and testing. To champion them, we rely on the diverse range of leaders and backgrounds within our AI review board, along with evangelists at every level and from every part of the business, to promote and uphold these values. We believe our principles are tactical, actionable, and broad enough to allow innovation to thrive. And even after a product launches, we continuously gather customer feedback to ensure our AI models deliver outputs aligned with our customers’ needs.
Embarking on the journey of responsible innovation is challenging, yet imperative. While even the most effective AI ethics review process cannot foresee and mitigate every risk, it plays a crucial role in building trust within your business and with your customers. If an unforeseen risk does emerge, having an AI review process and strong principles integrated throughout your organization assures stakeholders that you took every possible step to reduce harm.
Companies pursuing ethical approaches to AI do not need to aim to change the world. Instead, they should focus on making a tangible impact and embedding their principles into what is within immediate reach: their products, people, and purpose.