“The practice of law, like most professions, is being substantially affected — you might even say disrupted — by AI. But law is unique in that how it resolves AI issues will affect every other industry and use of AI, because it will set the ground rules by which AI must operate,” says Gary Marchant, Regents and Foundation Professor of Law and Faculty Director at Arizona State University.
AI is quickly transforming the legal field and changing the way attorneys operate. As this technology is increasingly integrated into practice, lawyers face the challenge of navigating a complex web of emerging tools and evolving regulations.
Ensuring that AI is used ethically and responsibly is vital, not only for keeping attorneys compliant with their professional standards, but also for setting broader societal standards and shaping how this powerful technology is used across industries.
State and federal regulations on the use of AI
Despite congressional awareness of the need to regulate AI, the federal government has yet to enact comprehensive legislation governing its use. The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order 14110) remains the most comprehensive commitment from the federal government to date.
Marchant says:
“There is much uncertainty about the applicable legal requirements for AI, which creates challenges for lawyers to advise clients on relevant requirements. Congress is institutionally incapable of providing an effective legal framework, nor do we want them to try, because any statutes would be obsolete by the time the ink dries. It will therefore be the courts that decide legal issues for AI, but this process will take several years to resolve, so we must all live with uncertainty in the meantime.”
Marchant aptly captures the ongoing and inevitable challenge of legislation keeping pace with rapid innovation and AI development. Even so, it is apparent that legislatures at the federal and state level recognize the urgency of regulating AI, which has materialized in numerous regulations and in congressional discourse, the latter of which could lead to stricter policy governing AI use.
A few recent examples of AI-related federal regulations include:
- 2022 Blueprint for an AI Bill of Rights: The Blueprint offers a framework for ethical AI development, emphasizing transparency, accountability and fairness. Although not legally binding nor enforceable, it serves as a set of principles aimed at protecting individuals from potential harms posed by AI systems.
- AI Insight Forums: Introduced by Senate Majority Leader Chuck Schumer in 2023, this was a series of closed-door meetings to guide Congress in crafting effective AI legislation.
- 2024 AI Risk Management Framework: Developed by the National Institute of Standards and Technology (NIST), this framework helps organizations better manage the risks associated with AI use.
- 2024 CREATE AI Act: This bipartisan bill proposes establishing a national resource for AI research to bolster research and development in the US.
- 2024 National Security Memorandum (NSM) on AI: This memo establishes guidelines and strategies for the development, use, and regulation of AI in ways that enhance national security, protect critical infrastructure, and promote US leadership in AI innovation while addressing potential risks.
As with privacy law, other jurisdictions are leading the way, both in the timing and the strength of their legislation. The EU AI Act, passed in 2024, is arguably the first substantive law regulating the use of AI. The regulation establishes risk-based classifications for AI systems and requires transparency in applications like facial recognition and medical devices.
As with the GDPR, American companies and attorneys conducting business overseas must comply with the EU AI Act. For firms handling cross-border cases, the EU's influence reinforces the need to keep pace with global developments to mitigate liability.
Until the federal government catches up, states are filling the legislative void. Multiple states, including California, Utah, Tennessee and New York, have passed AI legislation, although each of these laws focuses on a specific use of AI, such as deepfakes in Tennessee.
California, home to Silicon Valley and a hub for the Big Five tech companies, will implement its first AI-specific law on January 1, 2026. The Generative Artificial Intelligence Accountability Act focuses on protecting consumers from biased or harmful AI applications, echoing broader concerns about algorithmic fairness. Meanwhile, other states like Colorado, Connecticut, and Illinois are pushing forward with their own regulations, targeting areas such as automated hiring and consumer privacy.
Attorneys working in or across these states will need to navigate an increasingly fragmented legal landscape, adapting strategies to align with these more stringent standards.
Legal and ethical risks of using AI in law—and how to mitigate them
While the American Bar Association has yet to issue explicit guidelines on the use of AI in legal practice, attorneys need to exercise even more diligence in adhering to their ethical obligations relating to competence, confidentiality, and transparency. AI introduces unique challenges in these areas.
Here are some key legal risks attorneys face when incorporating AI into their practice, along with strategies on how to mitigate them:
1. Data privacy and confidentiality
AI tools may require inputting personally identifiable information (PII), which creates risks of exposing client information. If client data is processed through external AI systems, there is a potential for breaches or misuse. Attorneys using AI must ensure compliance with data protection laws and implement safeguards to protect sensitive information.
Mitigation strategies:
- Redact PII before inputting data into AI systems (a minimal redaction sketch follows this list).
- Maintain data security through encryption and regular audits of your firm’s data protection policies to ensure compliance.
- Anonymize data and content whenever possible to depersonalize sensitive data before analysis.
- Establish internal protocols to guide AI usage, ensuring alignment with confidentiality requirements.
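As a concrete illustration of the redaction and anonymization points above, here is a minimal Python sketch of pre-processing text before it is sent to any external AI tool. The patterns and the `redact_pii` helper are hypothetical and deliberately simplistic; real redaction workflows rely on vetted tooling and human review, and pattern matching alone will not catch names or other contextual identifiers.

```python
import re

# Illustrative patterns only; regex-based redaction will not catch names or
# other contextual identifiers, which still require review by a person.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognizable PII with labeled placeholders before AI analysis."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    note = "Client reached us at jane.doe@example.com or 555-867-5309 about the claim."
    print(redact_pii(note))
    # Client reached us at [REDACTED EMAIL] or [REDACTED PHONE] about the claim.
```

Even with filters like these in place, a person should confirm that nothing sensitive remains before the text leaves the firm's environment.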
2. Inaccurate outputs and hallucinations
AI tools, including general-purpose tools like ChatGPT, are prone to hallucinations: the generation of outputs that appear credible but are factually incorrect. The same is true of AI features built into legal databases. These tools are built by developers and trained on imperfect data, and they can deliver false outputs, underscoring AI’s often-forgotten fallibility.
Mitigation strategies:
- Always validate AI-generated research by cross-referencing it with authoritative legal databases (a simple citation-checklist sketch follows this list).
- Treat AI outputs as a starting point, not a final answer, and conduct independent due diligence.
- Train legal teams to critically evaluate AI-generated insights and to validate them through additional research before integrating them into casework.
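To make the validation step concrete, the hypothetical Python sketch below pulls anything that looks like a case citation out of an AI-generated draft and turns it into a checklist for a human reviewer to confirm against an authoritative legal database. The citation pattern is heavily simplified for illustration and is not a substitute for real citation-checking tools.

```python
import re

# Simplified "volume reporter page" pattern, e.g. "410 U.S. 113" or "123 F.3d 456".
# Real citation formats are far more varied than this illustration covers.
CITATION = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. ?Ct\.|F\.\d?d|F\. Supp\.(?: \d?d)?)\s+\d{1,4}\b"
)

def citations_to_verify(ai_draft: str) -> list[str]:
    """Return a de-duplicated checklist of citations that a human must verify
    against an authoritative database before the draft is relied upon."""
    return sorted(set(CITATION.findall(ai_draft)))

if __name__ == "__main__":
    draft = "The court relied on 410 U.S. 113 and 123 F.3d 456 in reaching its holding."
    for cite in citations_to_verify(draft):
        print(f"VERIFY: {cite}")
```

Flagging citations this way does not verify them; it simply makes it harder for a hallucinated authority to slip through unreviewed.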
3. Bias
AI systems are only as unbiased as the data they are trained on. When historical biases are embedded in training data, they can distort the outputs, potentially compromising the accuracy and fairness of the AI’s results.
Further, AI tools often tend to agree with the user’s framing of a query, skewing the quality of their results and sometimes going so far as to mischaracterize the law.
Mitigation Strategies:
- Choose reputable AI tools.
- Continue to validate (or disprove) the answers and data provided by AI.
- Be cognizant of AI’s bias and challenge its answers when needed.
- Regularly audit AI outputs for signs of bias or skewed reasoning.
- Train attorneys in bias detection and correction techniques to ensure accurate outcomes.
4. Accountability in AI-generated decisions
Although it may seem obvious, attorneys cannot abdicate responsibility for decisions or advice derived from AI. Attorneys remain liable for their advice and work product, even when an error originates in an incorrect result or recommendation from AI.
Mitigation strategies:
- Be transparent with clients about the use of AI in your work, and do not evade responsibility or liability for errors made by AI.
- Once again, validate any advice, results, or work product that AI generates for you. Even if you use AI for something as simple as generating a template complaint, it is your name on the pleadings.
- Maintain clear documentation of how AI tools were used in decision-making processes (a minimal usage-log sketch follows this list).
- Implement policies to ensure that human oversight is part of every critical decision.
- Remember: AI is a tool to facilitate or supplement legal work, not to replace it. Use it accordingly.
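One lightweight way to implement the documentation and oversight points above is a simple usage log. The Python sketch below is purely illustrative (the file name, fields, and tool name are invented); it appends one JSON record per AI-assisted task so the firm can later reconstruct who used which tool, for what purpose, and which attorney reviewed the output.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log location; a real firm would keep this in a secured system.
LOG_FILE = Path("ai_usage_log.jsonl")

def log_ai_usage(matter_id: str, tool: str, purpose: str, reviewed_by: str) -> None:
    """Append a single audit record describing one AI-assisted task."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "matter_id": matter_id,      # internal matter reference, not client PII
        "tool": tool,
        "purpose": purpose,
        "reviewed_by": reviewed_by,  # the attorney responsible for validation
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    # Example entry for a template complaint drafted with AI assistance.
    log_ai_usage("matter-0117", "drafting-assistant", "first draft of template complaint", "A. Attorney")
```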
Best practices for safe and ethical use of AI in law
1. Create an AI policy for your firm
Creating an AI policy for your firm or legal department is a constructive way to ensure AI is integrated into your practice responsibly and in compliance with professional standards. A good policy sets clear guidelines for using AI tools, helping to safeguard client confidentiality, confirm the veracity of AI-generated outputs and address potential biases.
Implementing a policy reduces legal risks, builds client trust and demonstrates your firm’s commitment to responsible, ethical legal practices.
Include the following in your policy:
- Approved AI tools and their use cases.
- Guidelines and trainings for safeguarding client confidentiality and complying with data protection and privacy laws, both domestic and international.
- Procedures for validating AI-generated research and outputs.
- Accountability frameworks that ensure attorneys maintain oversight and responsibility for AI-driven decisions.
- Training materials or programs to educate staff on ethical AI use and potential risks.
Remember that AI is constantly evolving, so your AI policy should be periodically audited and revised as the technology advances.
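Some firms go a step further and encode parts of the policy in machine-readable form so internal tooling can help enforce it. The Python sketch below is a hypothetical illustration of that idea; the tool names, fields, and rules are invented for the example, not drawn from any particular firm's policy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedTool:
    allowed_uses: tuple[str, ...]     # approved use cases for this tool
    may_receive_client_data: bool     # whether client data may ever be entered
    requires_output_validation: bool  # whether outputs must be human-verified

# Invented entries; a real policy would list the firm's vetted tools.
APPROVED_TOOLS = {
    "research-assistant": ApprovedTool(
        allowed_uses=("case law research", "summarization"),
        may_receive_client_data=False,
        requires_output_validation=True,
    ),
}

def is_use_permitted(tool: str, use: str, involves_client_data: bool) -> bool:
    """Check a proposed use of an AI tool against the firm's written policy."""
    policy = APPROVED_TOOLS.get(tool)
    if policy is None:
        return False  # unapproved tools are blocked by default
    if involves_client_data and not policy.may_receive_client_data:
        return False
    return use in policy.allowed_uses

if __name__ == "__main__":
    # Summarization is approved, but not when client data is involved.
    print(is_use_permitted("research-assistant", "summarization", involves_client_data=True))  # False
```

However the policy is expressed, it only works if it is kept current, which is why the periodic audits mentioned above matter.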
2. Appoint a data protection officer (DPO)
Law firms incorporating AI into their workflow should consider appointing a Data Protection Officer (DPO) to manage data privacy and compliance. A DPO's role includes ensuring that client information, regardless of its sensitivity, is handled in accordance with applicable laws and professional standards, and that the firm complies with the requisite privacy and data protection regulations. This is even more crucial when using AI tools that process large volumes of data, some of which may involve confidential or personal information.
A DPO helps establish clear data protection policies, oversees and advises on compliance with legal and regulatory requirements, educates employees on their respective roles in protecting personal information, and conducts regular audits to maintain standards. A DPO may also act as a liaison between their firm or organization and the data subjects and regulatory authorities, where relevant.
Keep in mind that a DPO must operate independently and report directly to the highest level of management, which ensures impartiality and independence and underscores the commitment to privacy compliance.
Having a DPO on board reduces the risk of breaches, protects the firm from legal exposure, and reinforces client trust. It signals a commitment to confidentiality and professional ethics while allowing the firm to use AI responsibly and effectively.
3. Train staff on AI tools and risks
According to Forbes, new research from the National Cybersecurity Alliance finds that 55% of employees using AI at work have had no training on its risks, and 65% of those same employees expressed concern about AI-related cybercrime.
This lack of training poses significant challenges, especially for legal professionals, where improper use of AI can compromise client confidentiality, lead to inaccurate outputs, or even result in legal or ethical violations.
Creating an AI training program in your firm is one of the most effective ways to avoid these risks. Such a program not only educates staff on the capabilities and limitations of AI but also equips them to proactively identify and address potential risks, in addition to ensuring proper use of AI in practice.
Here are some tips for creating an effective AI training program:
- Focus on professional responsibility: Incorporate guidance on the ethical use of AI in legal practice, emphasizing attorneys’ ongoing duty to maintain competence, confidentiality, and transparency.
- Discourage overreliance on AI: AI is a tool to facilitate and assist employees, not to replace their own work. It is vital to remind staff of AI's fallibility and to reinforce that lawyers remain liable for their own work, regardless of whether they used AI.
- Highlight bias mitigation: Train staff to recognize and mitigate biases in AI outputs, particularly when working on sensitive cases like discrimination or class actions.
- Review state-specific requirements: Include training on compliance with state-specific AI regulations and privacy laws, if applicable to your firm. Where relevant, this training should include the GDPR and the EU AI Act.
- Encourage continuing legal education: In addition to ensuring compliance with legislation already in place, lawyers should be attuned to upcoming regulatory changes, both domestically and internationally, to ensure future compliance and become better versed in AI’s treatment in the law.
- Use real-world scenarios: Simulate AI usage in various legal contexts. For example, present a scenario where AI provides inaccurate case law, teaching participants how to identify and address errors.
- Integrate AI risk management practices: Include lessons on using anonymization and encryption when processing client data through AI systems, and make these practices mandatory firm-wide. You should also teach staff to evaluate AI tools for compliance with firm policies and security protocols.
- Measure effectiveness and adapt: Use assessments, feedback surveys, and performance tracking to evaluate the success of your training program. Adapt modules to reflect advancements in AI technology or changes in regulations.
Take ethical and legal risks seriously
For firms to embrace AI in a responsible way, they need systems and protocols that prioritize ethical use, data security and compliance. Innovative solutions, like those developed at Darrow, show how AI can support legal professionals while maintaining the highest ethical standards and protecting sensitive information.
We have built a data intelligence platform, consisting of large language models and machine learning algorithms, with strict safeguards in place. The platform analyzes and clusters publicly available data, allowing our Legal Intelligence team to connect the dots between common anomalies and identify potential legal violations.
Darrow’s protocol emphasizes data security, transparency, and compliance with ethical and regulatory standards for AI use. Sensitive client information is never entered into our systems, and we enforce rigorous validation processes for AI-generated outputs. Data is also anonymized and/or desensitized wherever relevant. We maintain human oversight at every stage of the violation detection process to ensure our technology supports legal professionals without compromising ethical or legal standards.
This commitment enables us to use AI responsibly while helping our partners build stronger, evidence-backed cases.