In 2011, when Michigan was looking for ways to cut spending on its unemployment program after it had been drained by the Great Recession, the state turned to a new idea: building—and eventually deploying—an automated computer system to root out benefit fraud.
The automated fraud detection system generated nearly 63,000 cases between 2013 and 2015 in which Michigan residents were accused of fraud, about 70 percent of which would later be found to be false. Michigan residents accused of fraud were hit with quadruple penalties and subjected to aggressive collection efforts, such as seizing as much as a quarter of their wages. Some were arrested and many filed for bankruptcy. The experience took such a heavy toll on so many people that the University of Michigan added a suicide hotline number to the website for its unemployment insurance clinic; people accused of fraud openly talked about suicide in front of administrative judges. At least one person took her own life after being hit with $50,000 in fraud penalties.
By 2016 the state admitted the $47 million system wasn’t working and started having human employees review and issue all fraud determinations. In 2017, it announced it would refund nearly $21 million to residents falsely accused of fraud.
The episode is far from an isolated incident. Indeed, a recent report released by TechTonic Justice, a nonprofit focused on the use of artificial intelligence in systems that impact low-income people, found that most public benefit programs are riddled with AI.
According to the report, all state Medicaid systems use automation to determine eligibility, as do systems for the Supplemental Nutrition Assistance Program. State SNAP programs also use it to determine how much in benefits someone will receive as well as to detect fraud and overpayments. Some states use it to determine access to mental health services in Medicaid. It’s often used in privately managed Medicaid plans’ responses to prior authorization requests, determining whether or not someone’s treatment gets approved, as well as in Medicare Advantage plans. The Social Security Administration uses AI technologies to decide eligibility for disability benefits and enforce the program’s strict asset limits. Some of these systems are built by outside firms like Deloitte or Google, while others, like Michigan’s unemployment fraud detection program, are built by governments themselves.
“AI is in every aspect of public benefits administration,” said Kevin De Liban, founder of TechTonic Justice and author of the report. He’s seen it appear in nearly every part of the process: from determining someone’s eligibility, to deciding how much in benefits they’re entitled to, to processing their renewal paperwork, to accusing people of wrongly receiving benefits. And it almost always works to the detriment of poor people, not their benefit. It’s “never really expanding access to benefits, just restricting it and causing devastation on really massive scales,” he said. “Nowhere has this been implemented where it didn’t mean cuts, delays, loss of benefits for people who are eligible, unfounded fraud accusations.”
Despite Michigan’s experience, many state unemployment insurance systems are currently using AI-based and automated decision-making systems to determine eligibility, verify identities, and detect fraud. States, faced with rock-bottom funding for administrative tasks, are looking for quick fixes to staffing shortages. Nevada plans to launch an AI system created by Google to analyze the transcripts of appeals hearings and issue recommendations to judges about whether someone should receive benefits.
If judges are facing long backlogs and are getting pressure to churn quickly through cases, “it’s always a temptation” to do what AI says, said Michele Evermore, a senior fellow at The Century Foundation who worked in the Labor Department’s Office of Unemployment Insurance Modernization, “especially if you essentially have to prove the computer wrong and rework whatever the technology came up with.”
When AI makes mistakes, benefits are delayed. An unemployment case that’s flagged for a fraud check will take weeks for a human to investigate, “so people are getting slower benefits because of AI,” Evermore said. Then there’s the chance that AI outright denies people. “I’m concerned about the right decision getting made for claimants and about protecting the role of civil service,” Evermore said. “We are increasingly denigrating civil servants and not recognizing the value human beings bring to the table.” AI is one more way to push humans aside.
There is also a lack of transparency around how AI makes decisions. Government benefit recipients often don’t even know AI is involved in the process, and even if they find out, the algorithms used aren’t public. That leads to situations where “fundamental decisions about people’s health are made and they have no way of knowing why a decision was made or how to fight it,” De Liban said.
The other big problem with AI creeping into public benefit systems is that it can hurt so many people at once. An individual caseworker, even if hell-bent on denying people care, can only touch so many cases. But systems using AI “break down for everybody who’s subject to them,” De Liban said—thousands of people across entire states.
De Liban first experienced the harms of AI in public benefits in his prior job as a legal aid lawyer in Arkansas. In 2016, he started to get calls from “desperate” people who were suddenly receiving fewer hours of home and community-based services through Medicaid, in which a nurse or aide helps with basic life tasks such as bathing, toileting, and eating. Eventually he figured out that the state had changed the way it decided how many hours of care someone was entitled to receive. Originally, nurses would interview a recipient and go through a list of questions, using their professional judgment to determine a number. But while the nurses were still coming and asking questions, people were getting their hours cut by “drastic” numbers, he said. When asked, they were told the hours were cut because of “the computer.” The state had deployed a new algorithmic decision-making process that cut the hours of somewhere between 4,000 and 8,000 people with severe disabilities like quadriplegia and cerebral palsy by anywhere from 20 to 50 percent. They were left to lie in their own waste or develop bed sores from not being turned.
De Liban eventually sued the state and won, with a court ruling that the state had to stop using the algorithm. The legislature also forced the state to abandon the system. But the problem continues elsewhere: Missouri, for example, just implemented an eligibility algorithm in its home-based care program that could deprive nearly 8,000 people of services.
De Liban has since seen the same playbook rolled out in other places and other programs. It’s almost always about cost cutting: AI can be used as a way to winnow public benefit rolls and save money, even if the people cut off are still technically eligible. Other times, states talk about ensuring that only the “right” people get the right amount of services, another way of lowering costs. Some talk about AI being a more neutral way to make determinations than a human, but De Liban says that’s just “a cloak of unwarranted rationality.”
There are ways AI could be deployed to the benefit of recipients. In the unemployment insurance system, Evermore said, “There are definitely places in the process that can be automated.” That could include automating the process through which applications get assigned to people on staff, as well as things like scheduling and case management. “But it can go too far,” she cautioned.
De Liban sees the possibility of AI helping to more automatically enroll and renew people’s benefits by relying on income information the state government already has. But he argues such changes have to be rolled out slowly and in phases to make sure they don’t end up making the situation worse for recipients and applicants, and if things go wrong, they should be abandoned. They also require “a healthy ecosystem,” he said, with incentives to keep people on instead of kicking them off, and accountability when things break. “Theoretically” it could be deployed in a positive way, he said. But “all the evidence shows that it hasn’t been, so we have to start doubting the theoretical promise of it when we haven’t seen it play out in reality yet.”
“This is not the time to be experimenting and deploying some technology and then working out the kinks later,” De Liban said. “This is people’s lives, their health, their work, their housing, their kids. When the stakes are so high, the burden can’t be on them to challenge and fix systems that are deployed and break.”