Emerging artificial intelligence (AI) technology is beginning to change the world of law practice.
It’s happening on many frontiers: web robots, or bots, aid in document search, sometimes analyzing millions of pages for information relevant to a proceeding. Many legal research platforms now include some form of AI technology and data analytics. Judicial officials have begun to rely on artificial intelligence to support decision-making in setting bail, parole and even sentencing. Twenty-three states use some kind of computerized risk assessment tool in criminal proceedings, and in Colorado, the law requires it. The applications are almost endless.
As with other high-tech disruptions, the use of AI in legal practice raises questions.
Advocates of AI in data analytics point to its remarkable speed and efficiency compared to that of mere mortals. Proponents of AI in courtrooms believe technology can help reduce human bias by analyzing cases impartially and without emotion.
But other voices raise a note of caution.
They point to human-made problems with the algorithms that drive AI. After all, humans — fallible humans — create bots and algorithms. What if these people, either in error or on purpose, build in bias or mistakes? What authority controls whether AI represents superior intelligence … or whether “garbage in, garbage out” drives decisions that affect human lives?
Elon Musk, co-founder of Tesla Inc. and founder of SpaceX, warns that humankind is “summoning the demon” with AI.
Kris Niedringhaus, associate dean for library and information services, thinks Musk has a point.
“I may not have stated it quite as strongly as he did,” Niedringhaus said, “but I think it’s a valid call for us to pay attention to what we’re doing. As with all advancements, AI can be used for good but could also have unintended consequences.
“We simply need to be very conscious about what we’re doing.”
Let the lawyer beware
Niedringhaus, who is vice president of the board of directors of the Center for Computer-Assisted Legal Instruction (CALI), believes technology can be oversold.
“People think with AI you simply enter words into a magic box, and it always gives you the right answer,” she said. “We need to be careful not to embrace AI without understanding what its problems are.”
She points out that artificial intelligence isn’t autonomous intelligence.
“Attorneys who use AI must understand that they remain ethically responsible for their work,” Niedringhaus emphasized. “If your legal work rests on one of these AI technologies, and you don’t understand how it works, how do you ensure that what it has produced is correct and complete?”
Niedringhaus continued, “Lawyers are responsible for understanding all that needs to be done to properly use and maintain technology, including artificial intelligence, and they also need to be aware that algorithms inherently take on the biases of the coders. With algorithms and predictive coding or legal research, you have to make sure competent people are training the systems, and you have to be continually monitoring and maintaining them to ensure the machines are doing what they are supposed to be doing.”
In 2012, the American Bar Association added a comment to its competency rule on ethics that counsels lawyers to keep up-to-date on relevant technology, its uses in law practice and its rules of confidentiality.
“There needs to be more legal education on this,” Niedringhaus said.
Data analytics and quality control
Rose Jones (J.D. ’02) is the director of e-Discovery Project Management and Client Services for Atlanta-based King & Spalding. Her practice focuses on e-discovery management, including the development of protocols for the discovery process. Jones represents clients “with millions of files and pages who must produce key documents as part of discovery.”
She compares AI technology used for data searches to tech used by Amazon and Netflix to make recommendations to a consumer based on previous habits. The system learns as it is trained, and its trainers ultimately determine its value.
“Quality assurance is the key to data analytics,” Jones said. The best way to make AI effective in e-discovery is by “collaboration among legal teams, in-house counsel, regulatory counsel and tech counsel, with input from subject-matter experts or employees,” she said.
The more knowledgeable the people who train the AI system, she says, the more valuable the discovery.
Jones also notes AI’s challenges.
“Most lawyers and judges don’t understand the math that drives algorithms,” she said. “They can’t explain it if challenged. Others are concerned about the level of disclosure and may worry that they’ll be asked to hand over data to show how coding was created. There’s also a misconception that privileged documents or trade secrets might be handed over without the opportunity to review them before they’re disclosed.
“All these issues,” she said, “can be addressed with proper advice from an experienced attorney who practices e-discovery.”
New frontiers for law
There’s nothing artificial about the scope of change that law practices will eventually see with AI. It’s happening already, especially in the provision of legal services to people previously unable to afford them.
In the United Kingdom, a designer has created a bot for the smartphone. If a driver gets a parking ticket, the bot helps the driver appeal it and file the appeal documents.
Applications of AI are also appearing in immigration and sexual assault cases. Surprising? Not really, says Niedringhaus.
“Consider that immigrants and assault victims may want to explore their legal options without talking to another human being. Having a bot that can interact with authorities offers a new kind of legal access.”
So with AI on the horizon, what should students be considering?
“Many law students get their first employment opportunity doing document review,” said Niedringhaus. “If that goes away, what kind of jobs do students need to prepare for? That’s an important conversation to be having.”
Even if entry-level jobs grow scarcer, young lawyers will likely embrace AI, says third-year student Yasmin Assar (J.D. ’18).
“I think that with any sort of new tech, the younger generation will promulgate it,” she said. “I think that younger people are more accepting of tech and more willing to utilize it.”
Niedringhaus agrees that AI will open new opportunities.
“A handful of firms have set up their own law technology or AI incubators,” she said. “Some are investing in research to develop things with AI. A lot of cutting-edge stuff will come from that.”
The human touch
In all, it’s an intelligent assumption that AI will forever change the practice of law.
However, Niedringhaus says, “There are things that machines still can’t do.
“It’s hard to imagine artificial intelligence being able to read a jury without human assistance,” she said. “AI may be able to scan responses to voir dire or analyze facial expressions, but there is another, less tangible aspect that is hard to define. At this point, I don’t know of any technology that replaces that. There are still parts of the human thought process that will be extremely difficult to replicate with AI.
“I think there will forever remain a place for the human element in the practice of law.”