NEW YORK — The extent to which artificial intelligence and automated decision-making should play a role in policing is at the center of a debate over civil liberties in the nation’s largest city.
Technologies powered by artificial intelligence, such as facial recognition and predictive policing tools, have been shown to have racial and ethnic biases, raising questions about whether their use by police could exacerbate inequalities already baked into the criminal justice system.
The debate in New York City reflects similar conversations across the country as cities and other municipalities seek to make use of new technologies to safeguard residents without infringing on their rights or harming members of marginalized communities.
Advocates who favor greater oversight of the New York Police Department’s use of algorithms to aid in investigations are expressing concern after Mayor Bill de Blasio included an exemption for law enforcement in an executive order establishing a new position to monitor use of algorithms and artificial intelligence by city agencies.
The issuance of the executive order in late November coincided with the release of a report by the city’s Automated Decision Systems Task Force, which was established to look at machine-learning tools in city government. Some members of the task force criticized the report because they said it did not include dissenting recommendations. They also said the task force was not given sufficient access to information on how individual city agencies use automated decision-making systems.
Disapproval of the report and of the executive order’s law enforcement exemption indicates an ongoing rift between a metropolis that uses a range of automated decision-making tools, for things like assessing the risk of building fires and evaluating public school teachers, and activists and academics who argue that reliance on algorithms, especially in policing, can increase bias against minorities.
Jumaane Williams, the city’s public advocate, and Brad Lander, a city councilman from Brooklyn who sits on the council’s committees on technology and civil rights, praised de Blasio’s creation of a new Algorithms Management and Policy Officer. But they said police investigations should not be exempt from the officer’s reach, arguing that the exemption would “risk recreating ingrained patterns of discrimination and inequality.”
“Decades of racially discriminatory policing have left us with troves of biased data that continue to inform today’s predictive policing algorithms, creating feedback loops that recreate harmful overpolicing in communities of color,” Williams and Lander said in a joint statement.
The executive order exempts city agencies from providing the algorithm officer with information “that would interfere with a law enforcement investigation or other investigative activity by an agency or would compromise public safety.” That broad exemption could include the NYPD’s use of facial recognition technology or predictive policing tools, which inspired the algorithm task force’s creation last year.
Vincent Southerland, a member of the task force and executive director of New York University’s Center on Race, Inequality, and the Law, told CQ Roll Call that he hoped the task force would bring about “a shift in the power dynamic that’s out of balance between institutions and individuals” because of algorithms.
But Southerland said the task force struggled to gain information on how the NYPD and other city agencies use such tools.
“Law enforcement has not earned the right to be exempt from oversight,” said Southerland, in reference to de Blasio’s executive order.
Another task force member, Meredith Whittaker, said on Twitter that the report “reflects the views of the City, not a task force consensus.” Whittaker, who co-founded the AI Now Institute and rose to prominence after organizing walkouts at Google, her former employer, to protest the company’s sexual harassment policies, posted recommendations she submitted that were not included in the final report.
Among those recommendations, Whittaker said the report should make clear that although the task force included governmental and nongovernmental members, the final version “is a document drafted and produced primarily by city employees and should acknowledge the report’s bias in favor of the city’s use of automated decision-making systems,” or ADS.
She also recommended that the report include “documentation of the repeated requests from non-governmental task force members for information on the city’s currently applied ADS systems, and the justifications given by the city for not providing this information.”
Whittaker said that the task force was “eventually” briefed by agencies regarding their use of algorithms but that they covered a “very small sample” of the city’s systems and “were not accompanied by documentation that would allow task force members to interrogate a given system.”
“These briefings often sounded more like sales pitches than robust interviews,” Whittaker said.
Standard practice for algorithms
In general, the report recommends standardizing the city’s algorithm policies and practices, facilitating public education about the use of algorithms and giving individuals the right to challenge a decision made by a city agency that was informed by an algorithm. It also recommends creating an internal process for assessing “risk of disproportionate impact to any individual or group.”
In a statement to CQ Roll Call, a spokesperson for de Blasio acknowledged that “not everyone agreed on every issue” but said the report “reflected consensus on key, actionable recommendations for the city.”
The creation of the algorithms officer “responds to a key recommendation, to create a centralized resource on algorithms for agencies and the public alike, and continuing the important conversations that the task force initiated,” said the spokesperson, Laura Feyer.
Feyer also said “no agency is exempt” from de Blasio’s executive order, only “specific information that would harm public safety and security of New Yorkers.”
But activists who have worked to increase oversight of the NYPD’s use of algorithms and surveillance technology say the exemption sets a dangerous precedent that could result in false arrests or wrongful convictions.
“Law enforcement use of artificial intelligence is one of the most dangerous and problematic areas we see in all of city government,” said Albert Fox Cahn, who runs the Surveillance Technology Oversight Project at New York’s Urban Justice Center. “When we’re talking about tools that could potentially strip New Yorkers of their liberties, we need even more oversight and accountability, not less.”
More work ahead
Cahn wants the city council to pass legislation introduced in 2017 that would require the NYPD to disclose information about its use of surveillance tools and data management policies. But progress on the bill has stalled because the extent to which the council can govern NYPD’s activities is unclear.
The NYPD has defended its use of artificial intelligence and facial recognition. Former Commissioner James O’Neill, who stepped down from his post on Sunday, wrote in a New York Times op-ed in June that “it would be an injustice to the people we serve if we policed our 21st century city without using 21st century technology.”
As for the ADS task force, Southerland believes its initial report is a step in the right direction for the city and a prime example of the tensions likely to arise as advocates and government agencies seek to strike a balance between the productive use of new technologies and the protection of civil liberties.
But the report cannot serve as the city’s final word on regulating how agencies use automated decision-making, Southerland said.
“There’s a lot of fear that this will be a one-time effort,” he said. “We need to continually revisit this issue.”