It was going to be our savior. It was going to remove the human factor, the bias whether explicit or implicit, from the mix. Artificial intelligence would rid us of the inherent racism of humans that we spent decades trying to shake but just never seemed to go away. After all, an algo can’t be racist. An algo has no feelings. An algo can’t love or hate. An algo is just an algo. Algorithms would save us.
Neither “woke” nor “social justice” was in vogue yet, so it would be unfair to characterize the algorithms’ proponents as such. They were against racism in the legal system, as were we all, but they weren’t “anti-racists” as that word is used today to characterize the new racism. That was back when eliminating racism was the goal rather than substituting new racism for old racism. And algos were the answer.
Then people began to realize that an algo was neither more nor less racist than what its developer coded into it. It used cold data, but the data came from humans. It applied the data without favor, but fear might be built into its source. It was a dilemma. Factors like jobs and family ties were strong predictors of whether defendants would return to court or commit new crimes, but jobs and stable families favored white people over black people. Algos worked, to the extent math works, but ended up producing the same disparate outcomes. Since the belief was that these outcomes were, per se, racist, the algos were racist.
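A minimal sketch of the problem, with every number and feature name invented for illustration rather than drawn from any real risk tool: a score that never looks at race can still flag groups at very different rates when the inputs it relies on, like employment and family ties, are unevenly distributed between them.

```python
# Hypothetical illustration: a "race-blind" risk score built on proxy features.
# The rates and cutoff below are made up for illustration, not real statistics.
import random

random.seed(0)

def share_flagged_high_risk(employment_rate, family_ties_rate, n=10_000):
    """Return the share of a group flagged 'high risk' by a score that
    rewards employment and family ties and never sees race at all."""
    flagged = 0
    for _ in range(n):
        employed = random.random() < employment_rate
        family_ties = random.random() < family_ties_rate
        score = 2 * employed + 1 * family_ties   # higher score = lower risk
        if score < 2:                            # arbitrary "high risk" cutoff
            flagged += 1
    return flagged / n

# Two groups with identical treatment by the formula but unequal access
# to the things the formula rewards.
print("Group A flagged high risk:", share_flagged_high_risk(0.70, 0.60))
print("Group B flagged high risk:", share_flagged_high_risk(0.50, 0.40))
```

Run it and the second group is flagged far more often, even though the arithmetic applied to each individual is identical.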
The failing was made abundantly clear in Cathy O’Neil’s “Weapons of Math Destruction,” which demonstrated that the assumptions that went into the numbers perpetuated error in AI but hid it behind a seemingly cold, neutral readout. Since then, AI has been flipped on its head, recast as another evil black box that enables racism to remain in place while masking it with “science.” Prawf Frank Pasquale argues that it’s not enough that some realize this; evil algos must be prohibited.
[T]here is a risk of discrimination or lack of fair process in sensitive areas of evaluation, including education, employment, social assistance and credit scoring. This is a risk to fundamental rights, amply demonstrated in the United States in works like Cathy O’Neil’s “Weapons of Math Destruction” and Ruha Benjamin’s “Race After Technology.” Here, the E.U. is insisting on formal documentation from companies to demonstrate fair and nondiscriminatory practices. National supervisory authorities in each member state can impose hefty fines if businesses fail to comply.
Frank has a point about not trusting AI to do our dirty work any more than Tesla crashes instill confidence that self-driving cars (hear much about them lately?) won’t crash into big rigs. But the E.U. regs he promotes define an evil AI by its outcome, “nondiscriminatory practices.”
If the AI is predicated, for example, on prior criminal history, and black people are going to have more significant priors because cops treat them like dirt and focus their attention on black people, a feedback loop follows: racist policing gives rise to black people with more priors, which gives rise to AI treating black people as more prone to criminality, which is used to justify greater deployment in black neighborhoods, such that more black people are arrested and have more priors. The math works, but it’s garbage in, garbage out.
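To make the loop concrete, here is a toy simulation in which every parameter is invented: arrests become priors, priors drive the “risk” that determines where patrols go, and patrols generate the next round of arrests. Both neighborhoods have the identical underlying crime rate; the only difference is how heavily each was policed to start, and that initial skew never washes out.

```python
# Toy feedback-loop sketch: priors drive patrol allocation, patrols drive
# arrests, arrests become new priors. All numbers are invented for illustration.

def simulate(years=10, total_patrols=100, crime_rate=0.5):
    """Two neighborhoods with the same underlying crime rate; patrols are
    reallocated each year in proportion to accumulated priors."""
    # Neighborhood A starts with more priors only because it was policed
    # more heavily in the past, not because its residents offend more.
    priors = {"A": 20, "B": 10}
    for year in range(1, years + 1):
        total_priors = priors["A"] + priors["B"]
        for hood in priors:
            patrols = total_patrols * priors[hood] / total_priors
            arrests = patrols * crime_rate       # same crime rate in both
            priors[hood] += arrests              # arrests become new priors
        print(f"year {year}: priors A={priors['A']:.0f}, B={priors['B']:.0f}")

simulate()
```

Year after year, neighborhood A draws twice the patrols and racks up twice the priors, and the algorithm’s output keeps “confirming” the disparity it inherited. Each step of the arithmetic is correct; the input is what’s poisoned.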
But that doesn’t mean all data that ends up with disparate outcomes is discriminatory, either. Poverty correlates with crime, and not just for the obvious reason that poor people commit crimes because they have nothing to eat. There are cultural factors that come into play: two-parent families, good role models, a strong appreciation of education, what are derisively called “bourgeois values.” The arguments about why don’t do much to change the short-term effects of black-on-black crime: violence, drugs, theft and people being physically harmed. When someone is about to shoot you, it’s not a good time to argue over social welfare programs or whether the SAT is racist.
A.I. developers should not simply “move fast and break things,” to quote an early Facebook motto. Real technological advance depends on respect for fundamental rights, ensuring safety and banning particularly treacherous uses of artificial intelligence. The E.U. is now laying the intellectual foundations for such protections, in a wide spectrum of areas where advanced computation is now (or will be) deployed to make life-or-death decisions about the allocation of public assistance services, the targets of policing and the cost of credit.
There is an ideological assumption that these three factors, “fundamental rights, ensuring safety and banning particularly treacherous uses of artificial intelligence,” aren’t in internal conflict. Frank’s point is that AI developers should be careful not to develop algos that perpetuate racist input and then wrap it in a pretty math bow. At the same time, what purpose does AI serve if it does its job well, is fundamentally sound, and is still condemned as one of what Frank calls “treacherous uses” because it results in disparate outcomes?
Just as it’s wrong to hide racist assumptions within the data used by algos, there’s no point to putting math to use when it’s only allowed to tell us 2+2=5 because that’s the outcome we want it to tell us. Don’t be falsely discriminatory, but also don’t be falsely non-discriminatory. But is there any support for accurate AI that doesn’t comport with our preconceived biases? Not if any algo that results in disparate outcomes is per se prohibited, no matter how factually accurate it may be.