Boston experimented with using generative AI for governing. It went surprisingly well

The recent Biden White House Executive Order on AI addresses important questions. If it’s not implemented in a dynamic and flexible way, however, it runs the risk of impeding the kinds of dramatic improvements in both government and community participation that generative AI stands to offer.

Current bureaucratic procedures, developed 150 years ago, need reform, and generative AI presents a unique opportunity to do just that. As two lifelong public servants, we believe that the risk of delaying reform is just as great as the risk of negative impacts.

Anxiety around generative AI, which has been spilling across sectors from screenwriting to university education, is understandable. Too often, though, the debate is framed only around how the tools will disrupt us, not how they might reform systems that have been calcified for too long in regressive and inefficient patterns.

OpenAI’s ChatGPT and its competitors are not yet part of the government reform movement, but they should be. Most recent attempts to reinvent government have centered around elevating good people within bad systems, with the hope that this will chip away at the fossilized bad practices.

The level of transformative change now will depend on visionary political leaders willing to work through the tangle of outdated procedures, inequitable services, hierarchical practices, and siloed agency verticals that hold back advances in responsive government.

New AI tools offer the most hope ever for creating a broadly reformed, citizen-oriented governance. The reforms we propose do not demand reorganization of municipal departments; rather, they require examining the fundamental government operating systems and using generative AI to empower employees to look across agencies for solutions, analyze problems, calculate risk, and respond in record time.

What makes generative AI’s potential so great is its ability to fundamentally change the operations of government.

Bureaucracies rely on paper and routines. The red tape of bureaucracy has been strangling employees and constituents alike. Employees, denied the ability to quickly examine underlying problems or risks, resort to slow-moving approval processes despite knowing, through frontline experience, how systems could be optimized. And the big machine of bureaucracy, unable or unwilling to identify the cause of a prospective problem, resorts to reaction rather than preemption.

Finding patterns of any sort, in everything from crime to waste, fraud to abuse, occurs infrequently and often involves legions of inspectors. Regulators take months to painstakingly look through compliance forms, unable to process a request on its own distinctive merits. Field workers equipped with AI could quickly access the information they need to diagnose the cause of a problem or offer a solution to residents seeking assistance. These new technologies allow workers to quickly review massive amounts of data already held by city government, finding patterns, making predictions, and identifying norms in response to well-framed inquiries.

Together, we have overseen technology innovation in five cities and worked with chief data officers from 20 other municipalities toward the same goals, and we see generative AI as the most promising advance yet. For example, Boston asked OpenAI to "suggest interesting analyses" after we uploaded 311 data. In response, it suggested two things: time series analysis by case time, and a comparative analysis by neighborhood. City officials spent less time navigating the mechanics of computing an analysis and had more time to dive into the patterns of discrepancy in service. The tools make graphs, maps, and other visualizations from a simple prompt. With lower barriers to analyzing data, our city officials can formulate more hypotheses and challenge assumptions, resulting in better decisions.
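To make the 311 example concrete, here is a minimal sketch of the two analyses the tool suggested: a time series of case volume and a per-neighborhood comparison. The dataset, column names, and neighborhoods below are invented for illustration; this is not Boston's actual data or workflow, just the kind of code a generative AI assistant might produce from a plain-language prompt.

```python
# Illustrative only: hypothetical 311 service-request records.
import pandas as pd

records = pd.DataFrame({
    "opened": pd.to_datetime([
        "2023-01-03", "2023-01-15", "2023-02-02",
        "2023-02-20", "2023-02-25", "2023-03-10",
    ]),
    "neighborhood": ["Dorchester", "Roxbury", "Dorchester",
                     "Allston", "Roxbury", "Dorchester"],
})

# Time series analysis: requests opened per month.
monthly = records.resample("MS", on="opened").size()

# Comparative analysis: requests per neighborhood.
by_neighborhood = records["neighborhood"].value_counts()

print(monthly)
print(by_neighborhood)
```

The point is less the code itself than who no longer has to write it: an official can ask for "requests per month" or "requests by neighborhood" in plain language and spend their time interpreting the result.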

Not all city officials have the engineering and web development experience needed to run these tests and code. But this experiment shows that other city employees, without any STEM background, could, with just a bit of training, utilize these generative AI tools to supplement their work.

To make this possible, more authority would need to be granted to frontline workers who too often have their hands tied with red tape. We therefore encourage government leaders to allow workers more discretion to solve problems, identify risks, and check data. This is not inconsistent with accountability; rather, supervisors can use these same generative AI tools to identify patterns or outliers—say, where race is inappropriately playing a part in decision-making, or where program effectiveness drops off (and why). These new tools will more quickly indicate which interventions are making a difference, or precisely where a historic barrier continues to harm an already marginalized community.

Civic groups will be able to hold government accountable in new ways, too. This is where the linguistic power of large language models really shines: Public employees and community leaders alike can request that tools create visual process maps, build checklists based on a description of a project, or monitor progress compliance. Imagine if people who have a deep understanding of a city—its operations, neighborhoods, history, and hopes for the future—can work toward shared goals, equipped with the most powerful tools of the digital age. Gatekeepers of formerly mysterious processes will lose their stranglehold, and expediters versed in state and local ordinances, codes, and standards, will no longer be necessary to maneuver around things like zoning or permitting processes.

Numerous challenges would remain. Public workforces would still need better data analysis skills in order to verify whether a tool is following the right steps and producing correct information. City and state officials would need technology partners in the private sector to develop and refine the necessary tools, and these relationships raise challenging questions about privacy, security, and algorithmic bias.

However, unlike previous government reforms that merely made a dent in the issue of sprawling, outdated government processes, the use of generative AI will, if broadly, correctly, and fairly incorporated, produce the comprehensive changes necessary to bring residents back to the center of local decision-making—and restore trust in official conduct.

Santiago “Santi” Garces is the chief information officer for the city of Boston, overseeing the Department of Innovation and Technology and a team of nearly 150 employees.

Stephen Goldsmith is a professor of the practice of urban policy at Harvard Kennedy School and faculty director of the Data Smart Cities Solutions program, located at the Bloomberg Center for Cities at Harvard University. He is also the former mayor of Indianapolis and deputy mayor of New York City.

https://www.fastcompany.com/90983427/chatgpt-generative-ai-government-reform-biden-garces-boston-goldsmith-harvard?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Created 1y | Nov 19, 2023, 11:20:07 AM

