A Flawed Facial-Recognition System Sent This Man to Jail

Robert Williams may be the first person in the US arrested based on a bad match—exposing problems with the algorithms and the ways they are used.

In January, Detroit police arrested and charged 42-year-old Robert Williams with stealing $4,000 in watches from a retail store 15 months earlier. Taken away in handcuffs in front of his two children, Williams was sent to an interrogation room where police presented him with their evidence: Facial-recognition software matched his driver’s license photo with surveillance footage from the night of the crime.

Williams had an alibi, The New York Times reports, and immediately denied the charges. Police pointed to the image of the suspect from the night of the theft. It wasn’t him. “I just see a big black guy,” he told NPR.

Williams spent the next 30 hours in custody before he was released on bail. With seemingly no other evidence of Williams’ involvement, police eventually dropped the charges. On Wednesday, Williams joined with the ACLU of Michigan to file a complaint against the Detroit Police Department, demanding they stop using the software in investigations.

Williams' arrest may have been the first in the US to stem from a faulty facial-recognition match. But it wasn’t a simple case of mistaken identity. It was the latest link in a long chain of investigative failures that critics of law enforcement’s use of facial recognition have warned about for years.

Privacy scholars and civil liberties groups have criticized facial-recognition technology because, among other things, it is less accurate on people with darker skin. That’s led cities from San Francisco to Cambridge, Massachusetts, to ban or limit use of the tool; the Boston City Council voted to ban the technology on Wednesday.

It’s best not to think of facial recognition as a single tool but as a multistep process that relies on both human and algorithmic judgment. Critics have spotlighted privacy issues at each step; in Williams’ case, the lack of safeguards led to an avoidable arrest.

Michigan State Police used facial-recognition software to compare surveillance footage from the theft against a state database of 49 million images, including Williams’ driver’s license photo. People don’t knowingly opt in to having their images used this way, yet roughly half of all US adults appear in a facial-recognition database that police can search. Police around the US have also used social media photos, witness sketches, even 3D renderings to match against crime scene photos.
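
To make the mechanics concrete, here’s a minimal sketch, in Python with simulated data, of how such a one-to-many “probe photo” search works. The random vectors below are stand-ins for the embeddings a trained face-recognition model would produce; nothing here reflects the actual system Michigan uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# A face-recognition system reduces each photo to an embedding vector,
# so that similar faces land near each other. Here we simulate a gallery
# of enrolled photos with random unit vectors.
N, d = 100_000, 128  # a real gallery might hold tens of millions of photos
gallery = rng.normal(size=(N, d))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# The "probe": the embedding of a face cropped from surveillance footage.
probe = rng.normal(size=d)
probe /= np.linalg.norm(probe)

# One-to-many search: score the probe against every enrolled photo and
# keep the most similar candidates, ranked by cosine similarity.
scores = gallery @ probe
top_k = np.argsort(scores)[::-1][:10]
for idx in top_k:
    print(f"gallery photo {idx}: similarity {scores[idx]:.3f}")
```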

The practice is especially pernicious when the databases include photos of people who were arrested but never charged or convicted of a crime. In New York, for example, police have come under fire for using mugshots from stop-and-frisk arrests in “probe photo” searches, even after the city’s stop-and-frisk program was ruled unconstitutional.

Williams’ photo seemingly became the main lead in the case against him. The Michigan State Police report on the match says facial-recognition matches are “not probable cause” to arrest someone. The state police guidelines say facial recognition is not a “form of positive identification” and should be considered “an investigative lead only.”

After the “match,” investigators sought evidence to corroborate the case against Williams. The Times reports that police didn’t check Williams’ phone records or ask whether he had an alibi; instead, they asked an outside security consultant, a woman who was not in the store at the time of the theft, whether Williams was the man in the surveillance footage. Her answer was enough to prompt the arrest.

While federal research has found that facial recognition often performs less accurately on people with darker skin, critics also contest the very definition of a “match.”

The Times reports that when Williams’ photo was scanned, the software would’ve returned a list of potential matches alongside respective “confidence scores”: the algorithm’s estimate of the likelihood that the person in each photo was the suspect in the surveillance footage. These confidence scores are central to what counts as a facial-recognition match. When the ACLU reported that Amazon’s Rekognition had falsely matched members of Congress to a mugshot database, Amazon replied that the test used too low a threshold: Amazon said it considers 99 percent confidence a match, while the ACLU had set the threshold at 80 percent.
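
To see how much the threshold matters, here’s a small sketch in Python with invented numbers (not scores from any real system or from Williams’ case). The same ranked candidate list yields three “matches” at the ACLU’s 80 percent threshold and none at 99 percent.

```python
# Hypothetical candidate list returned by a one-to-many search:
# (person ID, confidence score). These values are invented for
# illustration; they are not from any real system or case.
candidates = [
    ("person_A", 0.97),
    ("person_B", 0.91),
    ("person_C", 0.84),
    ("person_D", 0.72),
]

def matches(candidates, threshold):
    """Keep only candidates at or above the confidence threshold."""
    return [(pid, score) for pid, score in candidates if score >= threshold]

# Same algorithm, same scores, very different answers depending on
# where the line is drawn.
print(matches(candidates, 0.80))  # three "matches"
print(matches(candidates, 0.99))  # none
```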

It’s not clear what confidence levels the Michigan State Police’s algorithm offered for the matches it returned.

This is why the conversation has turned from regulation to moratorium: even when rules exist, such as requirements that a match be corroborated before an arrest, police find ways around them. Critics fear that facial recognition will only automate and accelerate the worst abuses of the criminal justice system.

Consider Ferguson, Missouri. In 2015, the Department of Justice alleged that the city’s police force targeted black drivers for traffic tickets as part of a revenue scheme that saddled people with high fees, where a single missed payment could lead to an arrest warrant. In a city blanketed with facial recognition, those drivers could be identified and threatened with arrest any time they passed a surveillance camera or an officer wearing a body camera.

Williams’ case is the first known instance of mistaken charges filed because of facial recognition, but it’s possible there are others that never came to light. And it’s not clear how much improving a single investigative tool can accomplish when the system using it fails in so many other ways.

Tech companies that furnish the software, like IBM and Amazon, have voiced support for police reform, but they are taking a more moderate approach than the outright bans activists support: temporarily halting sales to police while lobbying for regulations that many experts consider ineffectual.

