Exploring the future of legal tech: An interview with Mauricio Figueroa

Mauricio Figueroa is a legal researcher who has published on the regulation of digital technologies, taught university-level law courses, and delivered guest lectures worldwide. He studied law at UNAM in Mexico and Tel Aviv University, conducted PhD-level research at Newcastle University Law School, and has experience as a public officer in digital policy and procurement in Mexico City and with Mexico’s Ministry of Science and Technology.
Today, 79% of US law firms use AI, yet understanding how to use the technology responsibly while keeping pace with its rapid evolution remains a challenge.
With experience in academia, public policy, and digital regulation, Mauricio Figueroa offers a unique perspective on the future of legal tech. I recently sat down with Mauricio to discuss the current state of legal technology, evolving policies both in the US and abroad, and how lawyers can use AI tools responsibly while balancing innovation and ethics.
Big picture: The future of AI in law
The legal tech industry is currently valued at $29.81 billion and is expected to hit $32.54 billion by 2026, a clear indication that AI in law shows no signs of slowing down.
Figueroa observes this shift firsthand while teaching law students, who use AI tools regularly and will likely carry that habit into their professional lives. However, he draws a distinction between automation and substitution, arguing that AI is not poised to replace attorneys. “[These tools] are more about streamlining processes than replacing professionals and making concrete decisions,” he explains.
In a recent journal paper, Figueroa explored the potential for certain conversational agents to take on more authoritative roles in law, such as assisting with dispute resolution. However, he considers this scenario unlikely within the next five years. “I expect some pretty intense debates about the ethical and practical implications of that idea.”
As AI becomes more embedded in legal practice, the skill set expected of law graduates is evolving, with proficiency in AI tools and technologies becoming increasingly essential. It’s possible this shift will prompt law schools to integrate AI training into their curricula or introduce dedicated courses on AI and legal tech.
AI tools have already changed how attorneys practice, reshaping how they uncover legal violations, conduct research, draft complaints, and track legal analytics, though every one of these tasks still requires human oversight.
Challenges of AI in the legal industry

Integrating AI into the legal field raises both ethical and legal concerns, and with no clear consensus yet on how to regulate its use, much of that responsibility falls to the private sector. Legal tech companies need to prioritize ethical use, data security, and compliance to maintain trust and accountability among both attorneys and clients.
For example, Darrow’s company values include a People First approach, which applies both to our internal culture and to how we build technology: we put humans at the center of everything we do. Our AI doesn’t make decisions for us; it helps our team of Legal Intelligence Analysts detect legal violations. We prioritize transparency, human oversight, and respect for the legal process, ensuring that AI remains a tool for justice, not a risk to it.
The data we crawl and normalize comes exclusively from publicly available sources, such as consumer complaints, product reviews, and court dockets, so no confidential client data is ever at risk of a breach. We never feed sensitive information into our anomaly detection algorithms, and we enforce rigorous validation processes for all AI-generated outputs.
While companies like Darrow work to ensure AI is used ethically, Figueroa outlines some of the main risks generative AI poses when it isn’t monitored responsibly.
“You’ve got the more immediate concerns, like hallucinations, where AI systems generate false or misleading information, and then you’ve got the bigger-picture issues, like the environmental costs of running these models.”
However, Figueroa points out that two of the most pressing legal challenges associated with AI are privacy and intellectual property (IP) rights, both in terms of the datasets used for training these models and the outputs they generate. “There are a lot of open questions about how personal data is being collected, processed, and potentially exposed by these systems.”
Government regulation vs. private sector innovation
On the IP front, disputes over AI-generated content are already making their way through the courts, suggesting that clearer regulations may emerge sooner than privacy reforms. “It feels like the legal profession is paying closer attention here, and I wouldn’t be surprised if we start to see more defined rules in IP law before privacy laws catch up,” Figueroa says.
He notes a growing divide between regulatory and deregulatory approaches, with Europe generally adopting stricter AI laws than the US. The GDPR and the Artificial Intelligence Act (AIA) are two examples, though Figueroa highlights gaps in the AIA: “A lot of AI systems might slip through the cracks because the Act relies heavily on standard-setting bodies and self-labeling by companies. In that sense, it’s more of a co-regulatory framework than a strict top-down model.”
Meanwhile, with the US revoking Biden-era AI policies and reducing regulatory barriers to encourage innovation, Europe appears to be shifting its approach as well. In February 2025, the EU withdrew its proposed AI Liability Directive and is expected to do the same with the ePrivacy Regulation, Figueroa explains.
“The lesson here isn’t necessarily about copying one model or the other, but rather understanding the different regulatory philosophies and the political economy of AI. The real divide isn’t just about how to regulate—it’s about whether to regulate at all.”
Figueroa also challenges the common belief that regulation stifles innovation, pointing to the pharmaceutical industry as an example of a field where strict oversight has not hindered progress.
As regulations take shape, AI companies must prioritize transparency, accountability, and fairness to build public trust while driving innovation within ethical boundaries.
Should international law regulate AI?

In a recent article published in Computers and Law Magazine, Figueroa explored the challenges of enforcing AI regulations on a global scale: “International law is built on categories and frameworks that just don’t align with how the AI landscape operates,” he explains. “It doesn’t account for the elasticity of Big Tech, the fluidity of digital markets, or the stark disparities between the Global North and the Global South.”
Big Tech, largely based in the Global North, benefits from flexible regulations and strong infrastructure, while labor-intensive AI tasks, like data annotation, are outsourced to the Global South, where labor protections may be weaker. This creates an imbalance: AI’s benefits are concentrated in wealthier nations, while labor and environmental costs fall elsewhere.
Despite these challenges, Figueroa sees a role for international law, not through sweeping treaties but through targeted, context-specific frameworks that address AI’s distinct challenges, such as international labor law and international environmental law. “I wouldn’t call myself an absolute pessimist,” he notes, suggesting that a fragmented legal approach may prove more effective than broad global policies.
Balancing AI and the human element in law
While some worry that AI will replace lawyers, Figueroa sees this as unlikely. The core of practicing law lies in nuanced judgment, ethical considerations, and the ability to build trust with clients, all inherently human skills that no machine can replicate.
“One of the core issues here is that justice isn’t a purely mathematical exercise. It’s performative, relational, and deeply rooted in human context. A judge’s discretion, for instance, cannot be reduced to a mathematical calculation,” Figueroa explains, echoing a conversation he had with Cari Hyde-Vaamonde, Anca Radu, and Tomás Mcinerney for the SCL podcast.
In his view, AI can assist with routine legal tasks, but “when it comes to decisions that affect people’s rights, livelihoods, or freedoms, that human sense of professional responsibility and ethical judgment needs to stay front and center.”
Applying AI in legal practice

Figueroa says that litigators who want to integrate AI into their practices should focus on tools that deliver measurable results beyond just the hype of generative models. His advice for attorneys? “Don’t get stuck in the ‘GPT-ization’ of legal tech,” he cautions, pointing to other valuable tools such as AI-driven case detection and predictive legal analytics.
“You at Darrow know this better than I do. Your company has pioneered the use of machine learning to detect legal violations across massive datasets and align those findings with the mechanics of class actions. You’ve successfully estimated a cumulative legal risk valuation in the billions of dollars.
“This just goes to show that there are so many ways to incorporate AI into legal work and deliver tangible, high-impact results without relying on the typical chatbot or document generator that everyone’s fixated on right now,” he explains, referring to the more than $5 billion in legal risk Darrow estimated in 2024.
He returns to the importance of human oversight, both in law firms and in the legal tech companies that build AI tools: each should maintain a dedicated team that tests products for technical performance and legal accuracy.
Figueroa acknowledges that it may seem redundant to build AI tools aimed at efficiency only to hire lawyers to run quality checks, but this, he argues, is how reliable legal AI systems are built. “Building trust requires transparency and accountability at every level,” he concludes.