
Controversial facial-recognition app used by State Police, U.S. attorney

Facial recognition technology that pulls images from social media platforms is in use by New York law enforcement agencies

The New York State Police seal Leonard Zhukovsky/Shutterstock

One day after news broke that the entire client list of the embattled New York-based facial recognition company Clearview AI had been stolen, a new report revealed that the company’s controversial software has been used by law enforcement agencies including the New York State Police and the U.S. Attorney’s Office for the Southern District of New York.

BuzzFeed News reported Thursday that both the New York State Police and the U.S. Attorney’s Office – along with more than 2,200 other agencies, companies and individuals around the world – have used Clearview AI’s facial recognition technology, which scrapes billions of public images from platforms like Facebook and YouTube to build a database that police and others can search to match unidentified faces. Clearview has landed in hot water with those platforms, which say that such scraping violates their policies.

It is particularly notable that the State Police have paid for Clearview licenses, given how forcefully the New York Police Department denied any official partnership with Clearview AI. (It was later reported, however, that individual NYPD officers had used the Clearview AI app on their personal phones to help solve cases.) A spokesperson for the State Police told BuzzFeed that Clearview is one of many tools the agency uses, and that the software is helpful in finding potential leads.

State Sen. Brad Hoylman has introduced legislation that would ban law enforcement’s use of biometric surveillance technology – including facial recognition – and create a task force to study possible future uses of the technology and how to regulate it. Assemblywoman Deborah Glick recently signed on as a co-sponsor. Hoylman and others have raised concerns about the inaccuracy of and biases in facial recognition, fearing that flawed tools or skewed algorithms could lead law enforcement to treat the wrong people as suspects.

In response to BuzzFeed’s reporting, Hoylman said that even amid a busy session in Albany, the state needs to pass an immediate moratorium on the use of these kinds of technologies. “The NYPD and the State Police are doing what they’re allowed to do. Nobody’s breaking any laws, because there are none,” Hoylman told City & State on Thursday. “We have to act and make certain that this technology is regulated in a responsible manner.”

For the rest of today's tech news, head over to First Read Tech.