How to apply AI in complex domains

Skepticism toward artificial intelligence (AI) applications has grown sharply over the past year, especially in industries with complex regulatory frameworks and rigid operational boundaries. While AI holds enormous potential, its application in fields like healthcare, finance, and tax preparation comes with unique challenges. Compliance, accuracy, and risk management must remain top priorities, making AI adoption a careful balance between technological advancement and regulatory adherence.

In these high-stakes environments, two approaches to AI stand out as immediately impactful: automating repetitive tasks to increase operational efficiency and using generative AI as a collaborative tool—a copilot to domain experts. These strategies allow companies to tap into AI’s potential while maintaining the human oversight necessary to navigate complex regulatory requirements.

Automation for operational efficiency

In regulated industries, many daily tasks are repetitive and administratively intensive, making them ripe for automation. By taking on these routine functions, AI can increase operational efficiency, reduce human error, and lower costs. In healthcare alone, research from McKinsey and Harvard estimates that AI could save up to $360 billion annually if adopted more widely, with applications spanning everything from shortening drug development timelines to alleviating clinician burnout.

However, automation also introduces its own set of risks, particularly when AI is left to make complex decisions without human oversight. Take the example of UnitedHealthcare and Humana, which are currently embroiled in multiple class-action lawsuits for allegedly using AI to deny claims in their Medicare Advantage plans. These AI-driven systems automatically processed and rejected claims based on historical data patterns, resulting in erroneous rejections and accusations of care denial. This situation illustrates a critical flaw in AI-driven automation: When decisions are based solely on historical data without contextual human oversight, AI can replicate or amplify errors, particularly in high-stakes scenarios where individualized judgment is necessary.

The problem lies not only in the AI algorithms themselves but in the data used to train them. In regulated industries, historical patterns may not account for the nuanced, case-by-case decisions real-life situations demand. Without careful data selection, human oversight, and continuous refinement, automated AI systems run the risk of compounding past mistakes rather than solving them.

Collaborative AI: A copilot for domain experts

While automating routine tasks is essential for operational efficiency, AI should not operate in a vacuum—especially in fields where expertise and contextual judgment are critical. Generative AI offers a different model of value: As a copilot, it can support human experts in analyzing vast amounts of data and generating insights that enhance decision making, while still relying on human judgment for final assessments.

In the tax industry, for example, april uses AI to automate code generation from tax analysis documents through a system we call Tax-to-Code. Our proprietary backend technology leverages large language models (LLMs) trained to interpret complex tax regulations and translate them directly into code, eliminating the need for engineers to write that code manually. This dramatically accelerates our ability to incorporate changes in tax law and reduces the time required to update our products. However, the AI-generated code is reviewed by our team of tax engineers, who ensure it aligns with regulatory standards and meets our business requirements. This collaborative approach allows us to capture the speed of AI while preserving the accuracy and compliance that are essential in the highly regulated tax domain.
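The specifics of Tax-to-Code are proprietary, but the general pattern the paragraph describes, generate a draft with an LLM and gate deployment on human sign-off, can be sketched in a few lines. Everything below is an assumed illustration: the names (GeneratedRule, draft_rule, record_review) are hypothetical, and the model call is stubbed out as a plain callable so the sketch stays self-contained.

```python
# Hypothetical "generate, then gate on human review" workflow. This is not
# april's implementation; the LLM is passed in as a generic callable.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class GeneratedRule:
    regulation_id: str              # e.g., a form or code-section reference
    source_text: str                # the tax analysis the model worked from
    code: str                       # code proposed by the model
    approved: bool = False          # flipped only by a human tax engineer
    review_notes: List[str] = field(default_factory=list)


def draft_rule(regulation_id: str, source_text: str,
               llm: Callable[[str], str]) -> GeneratedRule:
    """Ask the model for a code draft; the result is not yet deployable."""
    prompt = ("Translate the following tax analysis into a pure function "
              "that computes the relevant amount:\n\n" + source_text)
    return GeneratedRule(regulation_id, source_text, code=llm(prompt))


def record_review(rule: GeneratedRule, approve: bool, notes: str) -> GeneratedRule:
    """A tax engineer signs off (or not) before anything ships."""
    rule.review_notes.append(notes)
    rule.approved = approve
    return rule


def deployable(rules: List[GeneratedRule]) -> List[GeneratedRule]:
    # Only human-approved rules ever reach production.
    return [r for r in rules if r.approved]
```

The design point is simply that approval is a separate, human-controlled step: nothing the model produces counts as deployable until a reviewer flips the flag.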

Data is the foundation for AI success

AI’s effectiveness in regulated domains depends not only on the capabilities of the models but also on the quality and reliability of the data it processes. In wealth management, for instance, tax returns—often an underutilized data source—provide over 300 data points that can yield valuable insights into income, investments, life events, and retirement contributions. AI-driven tools can analyze this information to offer clients more personalized financial guidance, including tax-efficient investment strategies and projections for upcoming tax liabilities.
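As a toy illustration of the kind of projection mentioned above, and not a description of any real product, a handful of prior-year return fields can already support a rough estimate of the coming year's liability. The growth rate and bracket schedule below are placeholder assumptions, and deductions, credits, and filing status are ignored for brevity.

```python
# Toy projection of next year's tax liability from a few prior-year fields.
# The bracket figures are illustrative placeholders, not authoritative rates.

def project_liability(prior_agi: float, withholding: float,
                      expected_income_growth: float = 0.03) -> dict:
    projected_agi = prior_agi * (1 + expected_income_growth)

    # Placeholder progressive schedule: (bracket floor, marginal rate).
    brackets = [(0, 0.10), (11_000, 0.12), (44_725, 0.22), (95_375, 0.24)]

    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else projected_agi
        if projected_agi > lower:
            tax += (min(projected_agi, upper) - lower) * rate

    return {
        "projected_agi": round(projected_agi, 2),
        "projected_tax": round(tax, 2),
        "estimated_balance_due": round(tax - withholding, 2),
    }


# Example: a client with $85,000 of prior-year AGI and $9,500 withheld.
print(project_liability(85_000, 9_500))
```

In practice, the value comes from combining many of the return's fields, such as withholding, capital gains, and retirement contributions, rather than from any single-variable estimate like this one.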

However, the successful use of such data hinges on AI’s careful integration with human expertise. Just as in healthcare, where flawed data can result in wrongful claim denials, wealth management AI must be equipped with high-quality data and refined by human insight to provide nuanced advice. Without these safeguards, AI risks producing fragmented or incomplete recommendations, highlighting the need for both data accuracy and informed oversight in high-stakes sectors.

Align tech strategy with risk management

To apply AI responsibly in regulated industries, companies must align their tech strategy with proactive risk management frameworks. Staying on top of regulatory changes and embedding ongoing monitoring into AI deployments is crucial. In the tax industry, for instance, regulations are continually evolving—now exceeding 75,000 pages of tax code—and AI systems must be updated accordingly. At april, our Tax-to-Code system leverages LLMs that continually process regulatory updates to generate code suggestions, with all outputs reviewed by tax professionals to ensure compliance and accuracy.
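Only april can speak to how its system processes regulatory updates in practice; as a minimal, assumed sketch, the monitoring half of the problem can be reduced to diffing successive snapshots of the relevant text and routing every dependent code module back to a human reviewer.

```python
# Assumed sketch: detect changed regulation sections and flag the generated
# code that depends on them for human review. Not a real pipeline.
import difflib


def changed_sections(old: dict, new: dict) -> set:
    """Compare two snapshots of regulation text keyed by section ID."""
    changed = set()
    for section_id, new_text in new.items():
        old_text = old.get(section_id, "")
        if difflib.SequenceMatcher(None, old_text, new_text).ratio() < 1.0:
            changed.add(section_id)
    return changed


def flag_for_review(dependencies: dict, changed: set) -> set:
    """dependencies maps a section ID to the code modules generated from it."""
    return {module for section in changed
            for module in dependencies.get(section, [])}
```

However the detection is done, the endpoint is the same as above: flagged code goes back through human review before it ships.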

For effective risk management, compliance teams should establish rigorous governance standards that work in harmony with tech development. This approach mitigates legal and reputational risks, embedding a level of accountability within AI applications to keep them both effective and compliant.

Maintain close alignment with industry stakeholders

As AI technology advances and regulatory frameworks grow more complex, it’s essential for businesses to maintain open channels with governing bodies and industry stakeholders. Companies in fields like finance, healthcare, and tax preparation must work closely with regulatory authorities to ensure their AI applications remain aligned with compliance standards. At april, we collaborate with tax agencies and are actively involved in the National Association of Computerized Tax Processors (NACTP), which enables us to stay connected with both state tax agencies and the IRS. This engagement allows us to align our AI-driven tax solutions with current regulations and maintain a clear dialogue with regulatory bodies.

The key to successfully leveraging AI in complex, regulated industries is a balanced approach that combines automation for efficiency with human oversight for accuracy and judgment. By integrating collaborative AI that empowers domain experts and ensuring the data foundation is robust, businesses can unlock AI’s potential while minimizing risks. Founders and executives in these fields must structure their AI strategies to align with regulatory protocols, ensuring that innovation serves as a sustainable, responsible differentiator.

Ben Borodach is the cofounder and CEO of april.
