Law and analytics students huddle together in small groups, animatedly discussing summary judgments, algorithms, decision trees and predictive results while poring over charts and graphs in the Legal Analytics Lab.
Using text mining, they examined more than 200,000 docket sheet entries from 5,111 employment-related cases filed in the U.S. District Court for the Northern District of Georgia between 2010 and 2017 to quantify the types and frequency of lawsuit outcomes, to determine the doctrines judges relied on, and to identify whether there are lawyer and judge “playbooks.” Essentially, the goal was to build a model that would predict the outcomes of these employment-related cases.
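Neither the firm's data nor the lab's code is public, so as an illustration only, the first step of such a pipeline might look like the sketch below: tag each docket entry with an outcome category by keyword matching, then tally outcome frequencies. The entry strings, category names and keywords are invented, not drawn from the study.

```python
# Illustrative sketch only -- the lab's actual pipeline is not public.
# Tags docket-entry text with a case-outcome category by keyword matching,
# then tallies outcome frequencies: a toy stand-in for mining 200,000+ entries.
from collections import Counter

# Hypothetical outcome categories and trigger phrases (invented).
OUTCOME_PATTERNS = {
    "settlement": ("stipulation of dismissal", "settled"),
    "summary_judgment": ("granting summary judgment",),
    "trial_verdict": ("jury verdict", "judgment after trial"),
}

def tag_outcome(entry: str) -> str:
    """Return the first outcome category whose keyword appears in the entry."""
    text = entry.lower()
    for outcome, keywords in OUTCOME_PATTERNS.items():
        if any(k in text for k in keywords):
            return outcome
    return "other"

def outcome_frequencies(entries):
    """Count how often each outcome category appears across docket entries."""
    return Counter(tag_outcome(e) for e in entries)

entries = [
    "ORDER granting summary judgment for defendant",
    "Stipulation of Dismissal filed; case settled",
    "Minute entry: jury verdict for plaintiff",
    "NOTICE of appearance by counsel",
]
print(outcome_frequencies(entries))
```

A real system would replace the keyword table with a trained classifier, but the shape of the task, turning free text into countable categories, is the same.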
Pearson Cunningham (J.D. ’18) was on the team that analyzed summary judgment orders to determine if macro patterns emerged that may explain the variables driving outcomes.
“How do you take the words and phrases that compose legal reasoning and turn it into data? It was something we grappled with,” he said. “It was a nice change of pace to interact with students who approach a problem from a fundamentally different perspective than I do as a J.D. student.”
The project is the lab’s first sprint with Atlanta employment law firm Barrett & Farahany. The 20 students worked under the direction of Charlotte Alexander, lab director and associate professor of legal studies; Anne Tucker, associate professor of law; Javad Feizollahi, assistant professor of business analytics in the Institute for Insight; and Khalifeh al Jadda, research scientist.
Barrett & Farahany had started its own in-house data analysis initiative years ago to see how the courts handled summary judgment (the study was published in 2013). “Our goal was to choose and develop cases to ensure our clients survived summary judgment, which the data helped us with—although it was gathered at a high cost,” said Amanda Farahany, managing partner.
The initial study was labor intensive, and as the attorneys were seeking ways to automate it they learned about the Legal Analytics Lab, Farahany said. “It was the right combination of intellect and ideas at the right time. Our goal is to automate the information we gathered before, and then glean additional insights. We want to be able to automatically download and categorize how the courts—throughout the country—handle employment discrimination cases to show the disparity in treatment of these cases at summary judgment. The Legal Analytics Lab is getting us closer to that goal,” she said.
Some results were not surprising, said Alexander, who holds a joint appointment with the Robinson College of Business and the College of Law. For example, they found the total number of defense attorneys was almost double the number of plaintiffs’ attorneys in the data set; race discrimination and overtime claims were the most common types of employment law claims.
Delving into the text of judges’ summary judgment opinions uncovered more interesting results.
“We were able to build profiles of judges based on the legal doctrines they mentioned and the cases they cited most frequently. We got a sense of how often district court judges adopted magistrates’ reports and recommendations wholesale, suggesting something about where and how summary judgment decision-making was actually happening during our study period,” Alexander said.
The preliminary predictive model they created from the findings can forecast litigation outcomes with relatively high accuracy, given inputs that capture plaintiff, defendant, lawyer, judge and claim characteristics. They will continue to refine it to improve accuracy, testing various techniques to determine which are most effective in extracting meaning from legal text.
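The lab's actual model, features and data are not public. As a rough sketch of how a model over plaintiff, lawyer, judge and claim characteristics could work in principle, the toy example below fits a hand-rolled logistic regression by gradient descent; the feature encoding and data are invented for illustration.

```python
# Toy predictive-model sketch -- not the lab's model; features and data invented.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights with plain stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Predicted probability of the positive outcome (here: claim survives)."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Hypothetical features per case: [judge's historical grant rate,
# plaintiff represented by counsel (1/0), claim is an overtime claim (1/0)].
X = [[0.9, 0, 0], [0.8, 0, 1], [0.2, 1, 0], [0.3, 1, 1]]
y = [0, 0, 1, 1]  # 1 = claim survived summary judgment (toy labels)

w, b = train_logistic(X, y)
```

In practice the lab would use far richer features extracted from the case text and a more robust estimation procedure, but the core idea, mapping case characteristics to an outcome probability, is the same.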
“In this sense, the Legal Analytics Lab really is a site for experimentation,” Alexander said. “We try and sometimes fail, and we iterate until we find the best solution or set of solutions.”
One challenge is writing code that accounts for the many variations in lawyers’ and judges’ writing styles, document structure and citation formats. The lab team is working to develop tools that rely less on rules specified by researchers and encoded as computer commands, and more on ‘unsupervised’ machine learning techniques, in which the code itself identifies patterns and commonalities across documents; researchers then check its output and “train” the algorithm to achieve greater accuracy.
“These techniques hold promise especially when working with unstructured and sometimes idiosyncratic legal writing,” Alexander said.
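To make the idea of letting code find patterns across documents concrete, here is a deliberately simple sketch: group documents whose word overlap (Jaccard similarity) exceeds a threshold, with no hand-written rules about document types. The documents, threshold and greedy grouping strategy are invented; the lab's actual techniques are not described in detail.

```python
# Toy unsupervised pattern-finding sketch -- documents and threshold invented.
def tokens(doc: str) -> set:
    return set(doc.lower().split())

def jaccard(a: set, b: set) -> float:
    """Similarity = shared words / total distinct words."""
    return len(a & b) / len(a | b)

def group_similar(docs, threshold=0.3):
    """Greedily assign each document to the first group it resembles."""
    groups = []  # each group is a list of document indices
    for i, doc in enumerate(docs):
        toks = tokens(doc)
        for group in groups:
            if jaccard(toks, tokens(docs[group[0]])) >= threshold:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

docs = [
    "order granting motion for summary judgment",
    "order denying motion for summary judgment",
    "report and recommendation of magistrate judge",
    "order adopting report and recommendation of magistrate judge",
]
print(group_similar(docs))  # -> [[0, 1], [2, 3]]
```

Even this crude measure separates summary judgment orders from magistrate reports without being told those categories exist; researchers would then inspect the groups and correct the algorithm, as the article describes.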
Documents that are not machine-readable, whether because of poor scan quality or handwriting, also present a challenge, and it’s important for researchers to identify those potential issues, Alexander said.
“It creates gaps in the data—and those gaps are not randomly distributed but represent a particular subset of plaintiffs or document types,” she said.
For example, hand-written complaints are often filed by pro se plaintiffs.
“Given this, it would be misleading to use text analytics to draw conclusions about all complaints filed in employment law cases, if the analytics techniques simply don’t work on certain subsets of complaints.
“Exercises like the sprint are invaluable in exposing students to real-world data questions such as these and making them smart consumers of analytics-driven conclusions,” she said.
Working on the sprint made Cunningham more comfortable using and understanding data analytics.
“I’m more fluent in the concepts now and eager to investigate the many ways big data can provide valuable insight to improve my own practice,” he said. “It’s relieving to know I got the chance to dip my toe in the water before graduating.”
The project required the law and analytics students to quickly learn a tremendous amount about unfamiliar subjects.
“Law students had to learn to think of text as data, rather than argument; analytics students got a crash course in employment law, civil procedure and Bluebooking,” Alexander said. “The students were curious, motivated, dedicated and smart in figuring out ways to combine their separate skills and knowledge to produce results that were very much more than the sum of their parts.”
The Legal Analytics Lab is planning more sprints with another law firm and with a local corporation’s legal department, in addition to continuing faculty research projects that involve teams of law and analytics graduate students. The first Legal Analytics course will begin in the fall.