Thursday, February 9, 2023

AI and The New Junk Forensic Science

As artificial intelligence burst onto the public radar with both written words and images, the initial reaction was largely curiosity and amusement. Some have tried to trick chatbots to see what they will get wrong. Others have created “art” that never existed, assuming you don’t consider DALL-E an artist. But what about law? Sure, there was the stunt by DoNotPay to offer $1,000,000 to let its AI bot argue a Supreme Court case, or for no money, a traffic court case, whichever came first. But that was just goofy and unserious.

At the same time, others are creating more serious applications for AI that raise new, yet old, concerns about the damage it can cause.

Two developers have used OpenAI’s DALL-E 2 image generation model to create a forensic sketch program that can create “hyper-realistic” police sketches of a suspect based on user inputs.

The program, called Forensic Sketch AI-rtist, was created by developers Artur Fortunato and Filipe Reynaud as part of a hackathon in December 2022. The developers wrote that the program’s purpose is to cut down the time it usually takes to draw a suspect of a crime, which is “around two to three hours,” according to a presentation uploaded to the internet.

In the past, the creation of sketches of suspects was both a time-consuming process and limited by the ability of the witness/victim to remember correctly and describe accurately what he saw so the artist could do the best sketch possible. Even then, the best a sketch artist could create was a sketch, an image that was pretty obviously not to be confused with a photograph of the actual person.

While the same input problems exist, AI can not only create the sketch in a fraction of the time, but can fill in the blanks. More importantly, its output is no longer a pen-and-ink drawing with perhaps some coloration, but what appears to be an actual photograph.

He certainly looks like a pretty real guy, and anybody seeing this image would be inclined to believe this is a picture of a pretty real guy. Except it’s not. Or perhaps it is, if AI used data from real people to flesh out the image of a killer such that the killer bore a striking resemblance to an actual person who had nothing to do with the crime.

The program asks users to provide information either through a template that asks for gender, skin color, eyebrows, nose, beard, age, hair, eyes, and jaw descriptions or through the open description feature, in which users can type any description they have of the suspect. Then, users can click “generate profile,” which sends the descriptions to DALL-E 2 and produces an AI-generated portrait.
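The developers’ actual code isn’t public, but the workflow described is simple enough to approximate. What follows is a minimal sketch in Python against OpenAI’s Images API of the era, showing how template fields might be flattened into a text prompt and sent to DALL-E 2. The field names, prompt wording, and helper functions are illustrative guesses, not the Forensic Sketch AI-rtist code itself.

```python
# Illustrative sketch only: template descriptors are joined into a prompt
# and handed to DALL-E 2 via OpenAI's Images endpoint. Not the developers'
# actual implementation.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def build_prompt(fields: dict) -> str:
    """Flatten template descriptors (gender, skin color, nose, jaw, etc.)
    into a single text prompt for the image model."""
    traits = ", ".join(f"{k}: {v}" for k, v in fields.items() if v)
    return f"Hyper-realistic police sketch portrait of a suspect. {traits}"

def generate_profile(fields: dict) -> str:
    """Send the description to DALL-E 2 and return a URL to the image."""
    response = openai.Image.create(
        prompt=build_prompt(fields),
        n=1,
        size="512x512",
    )
    return response["data"][0]["url"]

url = generate_profile({
    "gender": "male",
    "age": "about 40",
    "hair": "short gray hair",
    "jaw": "square jaw",
})
print(url)
```

Note what the sketch makes plain: the model has no notion of evidentiary fidelity. Whatever the witness’s description leaves out, DALL-E 2 fills in from its training data, and every invented detail arrives looking just as photographic as the remembered ones.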

Why should that be a problem? Because human minds work in weird ways, often substituting what’s seen or learned for what was vaguely recalled, such that a fuzzy image of what someone looked like comes into sharper focus once an image is seen. The only problem is that the witness remembers the image better than what he actually saw, such that the image becomes reality as it replaces memory.

And this can give rise to a multitude of problems, not the least of which is the same as when an asteroid strikes earth.

AI ethicists and researchers told Motherboard that the use of generative AI in police forensics is incredibly dangerous, with the potential to worsen existing racial and gender biases that appear in initial witness descriptions.

“The problem with traditional forensic sketches is not that they take time to produce (which seems to be the only problem that this AI forensic sketch program is trying to solve). The problem is that any forensic sketch is already subject to human biases and the frailty of human memory,” Jennifer Lynch, the Surveillance Litigation Director of the Electronic Frontier Foundation, told Motherboard. “AI can’t fix those human problems, and this particular program will likely make them worse through its very design.”

Why this very real problem is more problematic based on race or gender is unclear, beyond the obvious that in some retellings, every problem is worse based on race or gender.

Creating hyper-realistic suspect profiles resembling innocent people would be especially harmful to Black and Latino people, with Black people being five times more likely to be stopped by police without cause than white people. People of color are also more likely to be stopped, searched, and suspected of a crime, even when no crime has occurred.

Of course, if the image is an accurate representation of the accused perpetrator of a crime, and the point of the image is to enable police and the public to identify the perp, then the “hyper-realistic” image would be hyper-useful. If the alleged rapist is an elderly white man, then the “hyper-realistic” image would serve to prevent police from targeting teenage black men for the crime. It cuts both ways, and would benefit black men by taking them out of the mix of potential suspects.

The problem is that it gives a far greater impression of certainty than the image merits. See a pen-and-ink sketch and the mind realizes that it’s imprecise, an impression at best. See what appears to be a photograph of the guy and you know who did it. And if that pic happens to resemble an innocent man who has no idea that cops think he’s a dangerous armed killer, there’s a very strong possibility that something really bad will happen when the police approach him and he doesn’t understand why they’re pointing guns at his head.

Granted, in the scheme of AI, this is just one small step onto the slippery slope of things that can be touted as great tech innovations. Just wait until deep fake videos of crimes being perpetrated by the arrested defendant are offered into evidence before a jury. But AI isn’t just funny images or dumb questions and answers that play out on social media. In short order, expect to find an array of new tech products in courtrooms that could prove simple, effective and completely wrong.
