Another dirty little secret about law is that much of what lawyers produce is repetitive crap. Back in the olden days, we went to the corner legal stationer and bought pads of fill-in-the-blank forms for things like contracts of sale, authorizations, and releases, with all the magic words already printed on the page. We would fill in the clients’ names and act as if it took a brilliant lawyer to make it happen. Truth is, we told the secretary to prep a release for Joe Smith and our work was done.
When computers came along, we input the same forms into the computer and did pretty much the same thing. You only had to type it in once, and it was there forever. Whenever someone came up with a new clause or some new language, we added it into the form and the form grew from one page to 25, which by definition meant we could charge 25 times as much because the client had no clue that it was invented just for him. And clients were thrilled, because they never read the forms, but merely felt the heft and were invariably impressed with our lawyer brilliance based on the weight of the paper. More paper, better lawyer.
So why not let AI do the trick for us now?
Generative AI is having a cultural and commercial moment, being touted as the future of search, sparking legal disputes over copyright, and causing panic in schools and universities.
The technology, which uses large datasets to learn to generate pictures or text that appear natural, could be a good fit for the legal industry, which relies heavily on standardized documents and precedents.
To the extent it’s simply the next generation of the forms we once bought from Blumberg, it makes complete sense. They were completely routine and required no bespoke lawyering beyond filling in the blanks. If that was all you were going to do anyway, why not let the computer handle it for you?
But the problems with current generations of generative AI have already started to show. Most significantly, their tendency to confidently make things up—or “hallucinate.” That is problematic enough in search, but in the law, the difference between success and failure can be serious, and costly.
To no one’s surprise, legal AI businesses are blooming, because why not suck the money out of the legal space as long as you can? But what about the hallucinations?
Over email, Gabriel Pereyra, Harvey’s founder and CEO, says that the AI has a number of systems in place to prevent and detect hallucinations. “Our systems are finetuned for legal use cases on massive legal datasets, which greatly reduces hallucinations compared to existing systems,” he says.
Let’s follow that logic: We build AI systems that produce crap, but we promise we’ve built other AI systems that detect and prevent the crap, so there’s nothing to worry about. Convincing?
Even so, Harvey has gotten things wrong, says David Wakeling, head of London-based law firm Allen & Overy’s markets innovation group, which is why the firm has a careful risk management program around the technology.
“We’ve got to provide the highest level of professional services,” Wakeling says. “We can’t have hallucinations contaminating legal advice.” Users who log in to Allen & Overy’s Harvey portal are confronted by a list of rules for using the tool. The most important, to Wakeling’s mind? “You must validate everything coming out of the system. You have to check everything.”
On the one hand, Wakeling says the right words, that competency matters. On the other hand, if he were really concerned about quality, why use a system known to fail when validating its output takes the same work as getting it right yourself in the first place? True, the lawyer who produces his own work product could be an incompetent buffoon whose papers suck, but then, if the lawyer is no good at lawyering, why would he be any better at validating untrustworthy AI lawyering?
AI is likely to remain used for entry-level work, says Daniel Sereduick, a data protection lawyer based in Paris, France. “Legal document drafting can be a very labor-intensive task that AI seems to be able to grasp quite well. Contracts, policies, and other legal documents tend to be normative, so AI’s capabilities in gathering and synthesizing information can do a lot of heavy lifting.”
Entry-level work is where baby lawyers learn their craft. They learn why words and clauses are needed in one document but not another, and why a misplaced comma can cost millions. If they don’t do this work, they don’t learn from their mistakes. And if their work is done by AI, they won’t be capable of recognizing mistakes because they’ve never done the work from scratch.
And then there’s the confidentiality problem that arises from feeding a client’s personal data into some other, non-lawyer, AI business, which then holds all of that data on its servers.
“Can you lawfully use a piece of software built on that foundation [of mass data scraping]? In my opinion, this is an open question,” says data protection expert Robert Bateman.
Law firms would likely need a firm legal basis under the GDPR to feed any personal data about clients they control into a generative AI tool like Harvey, and contracts in place covering the processing of that data by third parties operating the AI tools, Bateman says.
Europe, where this is already happening, has the General Data Protection Regulation to theoretically control the capture and spread of personal data. Here, it would be a matter of trust, not to mention the lawyer’s willingness to sacrifice client confidences for the sake of expediency, because what tech company would ever sell or use your personal data?
Lawyers have long been called tech-averse, slow to accept and adopt new technology. Are we?
It’s not that lawyers are anti-technology, it’s that they are anti-bullshit.
If you don’t see how anything can go wrong, then you’re hallucinating.