The Next Target for a Facial Recognition Ban? New York

San Francisco, Oakland, and other cities have enacted moratoriums on government use of the tech. New York looks like a harder sell.

Civil rights activists have successfully pushed for bans on police use of facial recognition in cities like Oakland, San Francisco, and Somerville, Massachusetts. Now, a coalition led by Amnesty International is setting its sights on the nation’s biggest city—New York—as part of a drive for a global moratorium on government use of the technology.

Amnesty’s #BantheScan campaign is backed by Legal Aid, the New York Civil Liberties Union, and AI for the People, among other groups. After New York, the group plans to target New Delhi and Ulaanbaatar in Mongolia.

“New York is the biggest city in the country,” says Michael Kleinman, director of Amnesty International’s Silicon Valley Initiative. “If we can get New York to ban this technology, that shows that it’s possible to ban it almost anywhere.”

Activists have long sounded the alarm about the risks of police use of facial recognition. The technology is less accurate on dark-skinned people, contributing to the wrongful arrests of Black men in New Jersey and Michigan. Last year, BuzzFeed News reported the NYPD had run over 11,000 facial recognition searches using software purchased from Clearview AI.

Banning facial recognition in the city won’t be easy. Digital rights groups have long pushed the New York City Council to ban use of facial recognition by city agencies. Though the council has taken up bills regulating landlords’ or businesses’ use of the tech, it has not advanced a ban. So Amnesty has shifted some attention to Albany, pushing for the state to enact Senate Bill S79, introduced by state Senator Brad Hoylman, which would ban law enforcement use of biometric surveillance technology, including facial recognition. The bill would also create a task force to recommend regulations around its use.

“We could then evaluate whether law enforcement should be permitted to use this technology and, if so, create a regulatory framework to determine what’s prohibited, minimum accuracy standards, and protections for due process and privacy,” Hoylman says.

In a statement, NYPD spokesperson Detective Sophia Mason said, “The NYPD uses facial recognition as a limited investigative tool, comparing a still image from a surveillance video to a pool of lawfully possessed arrest photos. This technology helps bring justice to victims of crimes. Any facial recognition match is solely an investigative lead and not probable cause for arrest—no enforcement action is ever taken solely on the basis of a facial recognition match.”

Amnesty International’s Kleinman questions whether facial recognition is a vital investigative tool. “The police already have tremendous resources,” he says. “The idea that without facial recognition, they’re left powerless ignores the powers they already have.

“That kind of argument, ‘If you don’t allow us to do X then we can’t do our job,’ can be used to justify any level of surveillance,” Kleinman says. “There’s no stopping point to that argument.”

In December, the NYPD released a list of surveillance technologies the department uses in its investigations, including facial recognition, in response to a new city law. The disclosures covered dozens of technologies, including closed-circuit TV cameras, license plate reading cameras, drone surveillance, Wi-Fi geolocation services, and more.

Chief among Amnesty International’s concerns about facial recognition is the potential “chilling effect” on protests and activism. People may not want to assemble freely if they’re worried about being targeted by police cameras or identified using facial recognition.

The concern is top of mind for Dwreck Ingram, organizer and co-founder of the advocacy group Warriors In The Garden, another member of the #BantheScan campaign. Last summer, during the George Floyd protests, the NYPD used facial recognition to identify Ingram and confronted him outside his apartment. Ingram believes the comparison photos were sourced from his Instagram. Police claim Ingram assaulted an officer, a felony offense, by shouting into the officer’s ear with a megaphone. Eventually, Ingram turned himself in to police, and the Manhattan district attorney reduced the charge to a misdemeanor.

“I still feel like I’m being monitored,” he says.

The charges against Ingram are pending, but he says the encounter has made him critically aware of how surveillance is used against protesters, with lasting consequences. In its public disclosure on facial recognition, the NYPD says it “does not use facial recognition technology to monitor and identify people in crowds or political rallies.” But last August, the NYPD acknowledged to Gothamist that it had used facial recognition in Ingram’s case.

Ingram sees the #BantheScan campaign as part of the nationwide conversation on police reform and racial equity, explaining that the lack of proper regulation, or even understanding, of surveillance tools only compounds the injustices that are fueling protests across the country.

“We have a duty to be responsible stewards of the technology,” he says. “Before we implement things nationwide, before we utilize surveillance tools, we have to know the ramifications and the damages that they may cause. We have to look at how people’s information and images and faces can be manipulated and included in virtual lineups.”

As part of the campaign, Amnesty International wants to demonstrate to New Yorkers the level of surveillance they’re already under. Next month, #BantheScan plans to release a map of the locations of the city’s facial-recognition-enabled cameras.

“We want people to get a sense of just how pervasive this surveillance is,” Kleinman says. “One of the most powerful ways to do that is to show people that almost anywhere you are in New York, you risk being surveilled by the NYPD and having your image captured.”
