Dive Brief:
- The University of Pittsburgh Institute for Cyber Law, Policy, and Security has launched a task force that will examine potential bias in algorithms and automated systems used by governments.
- The Pittsburgh Task Force on Public Algorithms is chaired by former U.S. Attorney David Hickton, the founding director of Pitt Cyber. Members also include experts in law, computer science and government, and representatives from community and advocacy groups.
- The task force will produce best practice recommendations for algorithms and other artificial intelligence (AI), although its suggestions will not be binding on governments. The task force will have a government advisory panel including representatives from the Pittsburgh Bureau of Police, the Allegheny County Department of Human Services and the Allegheny County public defender’s office.
Dive Insight:
Governments have been adopting AI systems for various city functions, including everything from facial recognition tools in law enforcement to algorithms that can determine whether inmates should get parole. Allegheny County, which contains Pittsburgh, even has an algorithm in place to screen calls about child neglect, determining which calls need further response.
The problem is that those systems can be built on biased data that discriminates by race or other status, Hickton told Smart Cities Dive.
"We have increasingly relied almost entirely on digital platforms for all sorts of decision making, and we have done so without taking sufficient time to account for the overt and latent impacts," he said. "If we take data that has been gathered historically and may be informed with bias, we don’t want to extend and perpetuate that bias."
High-profile studies have identified bias in all sorts of systems. A National Institute of Standards and Technology study found that some facial recognition systems misidentified the faces of people of color up to 100 times more frequently than white faces, fitting in with other findings that have prompted some cities to call for the technology to be banned. A 2018 study from Dartmouth College researchers found that one algorithm designed to predict recidivism among incarcerated people was about as accurate as a random online poll.
The recommendations from Hickton’s group will help inform the local government as it adopts more AI tools, although it will not prescribe policy. "Those who have decision-making responsibility can make the decisions," Hickton said.
The group will also seek input from affected communities.
The task force fits with a growing trend of cities seeking ways to limit bias in AI. The Center for Government Excellence at Johns Hopkins University released a toolkit to help cities spot bias in algorithms, and New York City in November created a new policy officer position to oversee its use of algorithms, following a task force on the topic.