Boston experimented with using generative AI for governing. It went surprisingly well

By Santiago Garces and Stephen Goldsmith

The recent Biden White House Executive Order on AI addresses important questions. If it’s not implemented in a dynamic and flexible way, however, it runs the risk of impeding the kinds of dramatic improvements in both government and community participation that generative AI stands to offer.

Current bureaucratic procedures, developed 150 years ago, need reform, and generative AI presents a unique opportunity to do just that. As two lifelong public servants, we believe that the risk of delaying reform is just as great as the risk of negative impacts.

Anxiety around generative AI, which has been spilling across sectors from screenwriting to university education, is understandable. Too often, though, the debate is framed only around how the tools will disrupt us, not how they might reform systems that have been calcified for too long in regressive and inefficient patterns.

OpenAI’s ChatGPT and its competitors are not yet part of the government reform movement, but they should be. Most recent attempts to reinvent government have centered around elevating good people within bad systems, with the hope that this will chip away at the fossilized bad practices.

The level of transformative change now will depend on visionary political leaders willing to work through the tangle of outdated procedures, inequitable services, hierarchical practices, and siloed agency verticals that hold back advances in responsive government.

New AI tools offer the most hope ever for creating a broadly reformed, citizen-oriented governance. The reforms we propose do not demand reorganization of municipal departments; rather, they require examining the fundamental government operating systems and using generative AI to empower employees to look across agencies for solutions, analyze problems, calculate risk, and respond in record time. 

What makes generative AI’s potential so great is its ability to fundamentally change the operations of government. 

Bureaucracies rely on paper and routines. The red tape of bureaucracy has been strangling employees and constituents alike. Employees, denied the ability to quickly examine underlying problems or risks, resort to slow-moving approval processes despite knowing, through frontline experience, how systems could be optimized. And the big machine of bureaucracy, unable or unwilling to identify the cause of a prospective problem, resorts to reaction rather than preemption. 

Finding patterns of any sort, in everything from crime to waste, fraud to abuse, occurs infrequently and often involves legions of inspectors. Regulators take months to painstakingly look through compliance forms, unable to process a request based on its own distinctive characteristics. Field workers equipped with AI could quickly access the information they need to make a judgment about the cause of a problem or offer a solution to help residents seeking assistance. These new technologies allow workers to quickly review massive amounts of data already held by city government and find patterns, make predictions, and identify norms in response to well-framed inquiries. 

Together, we have overseen technology innovation in five cities and worked with chief data officers from 20 other municipalities toward the same goals, and we see generative AI as the most promising of these advances. For example, after we uploaded 311 data, Boston asked OpenAI's tool to "suggest interesting analyses." In response, it suggested two things: a time series analysis by case time, and a comparative analysis by neighborhood. This meant that city officials spent less time navigating the mechanics of computing an analysis and more time diving into the patterns of discrepancy in service. The tools make graphs, maps, and other visualizations with a simple prompt. With lower barriers to analyzing data, our city officials can formulate more hypotheses and challenge assumptions, resulting in better decisions.
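As a rough illustration of the two suggested analyses, here is a minimal Python sketch run on synthetic 311-style records. The field layout, neighborhood names, and numbers are invented for illustration only; they are not Boston's actual 311 schema or data.

```python
# Sketch of the two analyses suggested for 311 data: a time series of
# case volume, and a comparative analysis by neighborhood.
# All records below are synthetic, invented for illustration.
from datetime import datetime
from collections import Counter, defaultdict
from statistics import median

# Synthetic 311-style records: (opened, closed, neighborhood)
cases = [
    ("2023-01-03T09:15", "2023-01-04T11:00", "Dorchester"),
    ("2023-01-10T14:30", "2023-01-10T16:45", "Back Bay"),
    ("2023-02-02T08:00", "2023-02-06T09:30", "Dorchester"),
    ("2023-02-20T12:00", "2023-02-21T10:00", "Roxbury"),
    ("2023-03-05T07:45", "2023-03-05T13:15", "Back Bay"),
]

def hours_between(opened, closed):
    """Resolution time in hours between two ISO-style timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)
    return delta.total_seconds() / 3600

# 1. Time series: case volume per month (key is the "YYYY-MM" prefix)
per_month = Counter(opened[:7] for opened, _, _ in cases)

# 2. Comparative analysis: median resolution time per neighborhood
by_hood = defaultdict(list)
for opened, closed, hood in cases:
    by_hood[hood].append(hours_between(opened, closed))
median_hours = {hood: round(median(ts), 1) for hood, ts in by_hood.items()}

print(dict(per_month))   # {'2023-01': 2, '2023-02': 2, '2023-03': 1}
print(median_hours)      # {'Dorchester': 61.6, 'Back Bay': 3.9, 'Roxbury': 22.0}
```

In practice, the point of the experiment was that an official did not have to write this code at all; the generative AI tool produced the analysis and the accompanying charts from a plain-language prompt.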

Not all city officials have the engineering and web development experience needed to run such analyses or write code. But this experiment shows that other city employees, without any STEM background, could, with just a bit of training, use these generative AI tools to supplement their work.

To make this possible, more authority would need to be granted to frontline workers who too often have their hands tied with red tape. Therefore, we encourage government leaders to allow workers more discretion to solve problems, identify risks, and check data. This is not inconsistent with accountability; rather, supervisors can use these same generative AI tools to identify patterns or outliers—say, where race is inappropriately playing a part in decision-making, or where program effectiveness drops off (and why). These new tools will more quickly indicate which interventions are making a difference, or precisely where a historic barrier is continuing to harm an already marginalized community.  

Civic groups will be able to hold government accountable in new ways, too. This is where the linguistic power of large language models really shines: Public employees and community leaders alike can ask the tools to create visual process maps, build checklists based on a description of a project, or monitor progress on compliance. Imagine if people who have a deep understanding of a city—its operations, neighborhoods, history, and hopes for the future—could work toward shared goals, equipped with the most powerful tools of the digital age. Gatekeepers of formerly mysterious processes will lose their stranglehold, and expediters versed in state and local ordinances, codes, and standards will no longer be necessary to maneuver around things like zoning or permitting processes. 

Numerous challenges would remain. Public workforces would still need better data analysis skills in order to verify whether a tool is following the right steps and producing correct information. City and state officials would need technology partners in the private sector to develop and refine the necessary tools, and these relationships raise challenging questions about privacy, security, and algorithmic bias. 

However, unlike previous government reforms that merely made a dent in the issue of sprawling, outdated government processes, the use of generative AI will, if broadly, correctly, and fairly incorporated, produce the comprehensive changes necessary to bring residents back to the center of local decision-making—and restore trust in official conduct.

Santiago “Santi” Garces is the chief information officer for the city of Boston, overseeing the Department of Innovation and Technology and a team of nearly 150 employees.

Stephen Goldsmith is a professor of the practice of urban policy at Harvard Kennedy School and faculty director of the Data Smart Cities Solutions program, located at the Bloomberg Center for Cities at Harvard University. He is also the former mayor of Indianapolis and deputy mayor of New York City.

Fast Company
