“The practice of law, like most professions, is being substantially affected — you might even say disrupted — by AI. But law is unique in that how it resolves AI issues will affect every other industry and use of AI, because it will set the ground rules by which AI must operate,” says Gary Merchant, Regents and Foundation Professor of Law and Faculty Director at Arizona State University.
AI is quickly transforming the legal field and changing the way attorneys operate. As this technology is increasingly integrated into practice, lawyers face the challenge of navigating a complex web of emerging tools and evolving regulations.
Ensuring that AI is used ethically and responsibly is vital, not only to ensure that attorneys are compliant with their requisite professional standards, but for setting broader societal standards and shaping how this powerful technology is utilized across industries.
State and federal regulations on the use of AI - 2025 updates

Despite congressional awareness of the need to regulate AI, the United States has yet to enact comprehensive federal legislation governing its use. In October 2023, President Biden signed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order 14110), the most comprehensive commitment from the federal government to date.
Biden's order required developers of AI systems that pose risks to US national security, the economy, public health, or safety to share the results of safety tests with the federal government before releasing new systems to the public.
However, as of January 20, 2025, President Trump has revoked this order.
Merchant says:
“There is much uncertainty about the applicable legal requirements for AI, which creates challenges for lawyers to advise clients on relevant requirements. Congress is institutionally incapable of providing an effective legal framework, nor do we want them to try, because any statutes would be obsolete by the time the ink dries. It will therefore be the courts that decide legal issues for AI, but this process will take several years to resolve, so we must all live with uncertainty in the meantime.”
Merchant aptly captures the ongoing and inevitable challenge of legislation keeping pace with rapid innovation and AI development. Even so, legislatures at the federal and state levels have recognized the urgency of regulating AI, producing numerous regulations and ongoing congressional discourse. However, with the Trump administration now in power and a Republican-controlled Senate, the future of federally mandated AI oversight remains to be seen.
The following are a few recent examples of AI-related federal regulations and directives. However, as of January 20, 2025, two have been revoked by the Trump administration:
- (Now revoked) 2022 Blueprint for an AI Bill of Rights: Issued by the Biden administration, this document offered a framework for ethical AI development. Although neither legally binding nor enforceable, it served as a set of principles aimed at protecting individuals from potential harms posed by AI systems.
- AI Insight Forums: Introduced by Senate Majority Leader Chuck Schumer in 2023, this was a series of closed-door meetings to guide Congress in crafting effective AI legislation.
- 2024 AI Risk Management Framework: Developed by the National Institute of Standards and Technology (NIST), this framework helps organizations better manage the risks associated with AI use.
- 2024 CREATE Act: This bipartisan effort proposes the establishment of a national resource for AI research to bolster research and development in the US.
- (Now revoked) 2024 National Security Memorandum (NSM) on AI: This memo established guidelines and strategies for the development, use, and regulation of AI to enhance national security, protect critical infrastructure, and promote US leadership in AI innovation.
This might interest you: AI Policy 2025: The Diverging Visions of Biden & Trump
As with privacy law, other jurisdictions are leading the way, both in the timing of ratification and in the strength of their legislation. The EU AI Act, passed in 2024, is widely regarded as the first substantive law regulating the use of AI. The regulation defines rules for risk-based classifications and requires transparency in applications like facial recognition and medical devices.
Much as with the GDPR, American companies and attorneys conducting business overseas must comply with the EU AI Act. For firms handling cross-border cases, the EU's influence reinforces the need to keep pace with global developments to mitigate liability.
Until the federal government catches up, states are filling the legislative void. Multiple states, including California, Utah, Tennessee, and New York, have passed AI legislation, although each of these laws focuses on a specific use of AI, such as deepfakes in Tennessee.
California, home to Silicon Valley and a hub for the Big Five tech companies, will implement its first AI-specific law on January 1, 2026. The Generative Artificial Intelligence Accountability Act focuses on protecting consumers from biased or harmful AI applications, echoing broader concerns about algorithmic fairness. Meanwhile, other states like Colorado, Connecticut, and Illinois are pushing forward with their own regulations, targeting areas such as automated hiring and consumer privacy.
Regarding the specific use of AI in law, the Illinois Supreme Court, for example, implemented a policy effective January 1, 2025, permitting judges and lawyers to utilize AI tools, provided they do not compromise due process, equal protection, or access to justice. This policy emphasizes that legal professionals must review all AI-generated content and safeguard sensitive information.
With no concrete oversight of the use of AI in law, it’s up to attorneys to navigate how to ethically use AI tools within their practices.
Legal and ethical risks of using AI in law—and how to mitigate them

In July 2024, The American Bar Association's Standing Committee on Ethics and Professional Responsibility published a formal opinion on Generative AI tools in law. The opinion emphasizes the importance of supervision, confidentiality, and client consent, particularly when using AI-based legal tools. Lawyers must exercise due diligence when selecting vendors and ensure that their work meets the same ethical and professional standards as if it were performed without AI.
While the ABA’s opinion provides ethical guidance for lawyers integrating AI into their practice, concerns over AI’s broader societal impact have led to increasing global regulatory efforts. Mauricio Figueroa, a legal researcher focusing on law and digital technologies, points out in his article, The Drawbacks of International Law in Governing Artificial Intelligence, that the discourse around AI governance has begun to shift as awareness of its potential risks and impacts grows. This has prompted international organizations like UNESCO and the UN to take steps to address governance gaps at a global level.
In November 2021, UNESCO released the Recommendation on the Ethics of Artificial Intelligence, endorsed by all 193 Member States and designed for universal application. Subsequently, the United Nations Secretary General's High-Level Advisory Body on Artificial Intelligence published its final report, Governing AI for Humanity, in September 2024. Although non-binding, the report highlights the necessity for global AI governance and encourages member states to participate in collaborative efforts to tackle this multifaceted challenge.
While international bodies focus on shaping high-level AI policies and regulations, attorneys must grapple with the immediate legal and ethical challenges of integrating AI into their practice. The following are some of the key legal and ethical risks attorneys may face when incorporating AI into their practice, along with strategies on how to mitigate them:
1. Data privacy and confidentiality
AI tools may require inputting personally identifiable information (PII), creating risks of exposing client information. If client data is processed through external AI systems, there is a potential for breaches or misuse. Attorneys using AI must ensure compliance with data protection laws and implement safeguards to protect sensitive information.
Mitigation strategies:
- Redact PII before inputting data into AI systems (a minimal sketch follows this list).
- Maintain data security through encryption and regular audits of your firm’s data protection policies to ensure compliance.
- Anonymize data and content whenever possible to depersonalize sensitive data before analysis.
- Establish internal protocols to guide AI usage, ensuring alignment with confidentiality requirements.
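To make the redaction and anonymization steps above concrete, here is a minimal, illustrative Python sketch of scrubbing common PII patterns from text before it is passed to any external AI service. The `redact_pii` helper and its regular expressions are hypothetical and intentionally simplistic; they will not catch names or every identifier, so dedicated redaction tooling and human review are still required.

```python
import re

# Illustrative patterns only; real redaction needs broader coverage and human review.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace common PII patterns with labeled placeholders before AI processing."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

# Hypothetical example: note that the client's name is NOT caught by simple
# patterns, which is why pattern-based redaction alone is never sufficient.
note = "Client Jane Doe (jane.doe@example.com, 555-867-5309, SSN 123-45-6789) reports..."
print(redact_pii(note))
# -> Client Jane Doe ([REDACTED EMAIL], [REDACTED PHONE], SSN [REDACTED SSN]) reports...
```

The same idea extends to document pipelines: run redaction, and full anonymization where possible, as a required step before any text leaves the firm’s environment.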
2. Inaccurate outputs and hallucinations
AI tools, including general-purpose tools like ChatGPT, are prone to hallucinations: the generation of outputs that appear credible but are factually incorrect. This is also true of the AI features offered on legal databases. Underscoring AI’s often-forgotten fallibility, these systems are built by human developers and can accordingly deliver false outputs.
In addition, courts are starting to crack down on those who present briefs that contain hallucinations. For example, in Mojtabavi v. Blinken (C.D. Cal. Dec. 12, 2024), a federal court sanctioned a pro se plaintiff for “continuing to provide falsified or inaccurate case citations in support of his arguments.” This followed a previous warning from the court that using “a text-generative artificial intelligence tool (e.g., ChatGPT) that has generated fake case citations” was “‘unacceptable.’” US District Court Judge Percy Anderson of the Central District of California found that the plaintiff’s briefs were “filled with inaccurate and falsified case citations,” which were recognized as generative AI hallucinations. While the court considered the underlying arguments on their merits, it ultimately dismissed the case with prejudice as a sanction for the plaintiff’s misconduct.
Mitigation strategies:
- Always validate AI-generated research by cross-referencing it with authoritative legal databases.
- Treat AI outputs as a starting point, not a final answer, and conduct independent due diligence.
- Train legal teams to critically evaluate AI-generated insights before integrating them into casework, validating those insights through additional research.
3. Bias
AI systems are only as unbiased as the data they are trained on. When historical biases are embedded in training data, they can distort the outputs, potentially compromising the accuracy and fairness of the AI’s results. Further, AI tools sometimes default to giving an affirmative answer to a user’s query, skewing the quality of their results and even going so far as to mischaracterize the law.
Mitigation strategies:
- Continue to validate - or disprove - answers and data provided by AI.
- Be cognizant of AI’s bias and challenge its answers when needed.
- Regularly audit AI outputs for signs of bias or skewed reasoning.
- Ensure attorneys are cognizant of this bias, and train them on how to detect and mitigate potential bias.
4. Accountability in AI-generated decisions
Although it may seem evident, attorneys cannot abdicate responsibility for decisions or advice derived from AI. Attorneys remain liable for their advice and work, even when an error stems from an incorrect result or recommendation from AI.
Mitigation strategies:
- Be transparent with clients about the use of AI in your work, and do not evade responsibility or liability for errors made by AI.
- Once again, validate any advice, results, or product that AI generates for you. Even if using AI for something as simple as generating a template Complaint, it is your name on the pleadings.
- Maintain clear documentation of how AI tools were used in decision-making processes.
- Implement policies to ensure that human oversight is part of every critical decision.
- Remember: AI is a tool to facilitate or supplement legal work, not to replace it. Use it accordingly.
Best practices for safe and ethical use of AI in law

1. Create an AI policy for your firm
According to Summize's Legal Disruptors 2025 Report, 89% of in-house legal teams use AI tools, but 53% have no formal AI policy in place.
Creating an AI policy for your firm or legal department is a constructive way to ensure you integrate AI into your practice responsibly and in compliance with professional standards. A good policy sets clear guidelines for using AI tools to safeguard client confidentiality, confirm the veracity of AI-generated outputs, and address potential biases. Doing so also reduces legal risks, builds client trust, and demonstrates your firm’s commitment to responsible, ethical legal practices.
Tip: NIST's AI Risk Management Framework: Generative Artificial Intelligence Profile provides guidance on identifying risks associated with generative AI and recommends strategies for effective risk management. This publication offers strategic insights for developing an AI policy for your firm.
Include the following in your policy:
- Approved AI tools and their use cases.
- Guidelines and trainings for safeguarding client confidentiality and ensuring compliance with data protection and privacy laws, both domestic and international.
- Procedures for validating AI-generated research and outputs.
- Accountability frameworks ensuring that attorneys maintain oversight and responsibility for AI-driven decisions.
- Training materials or programs to educate staff on ethical AI use and potential risks.
Remember that AI is constantly evolving, so your AI policy should be periodically audited and revised as the technology advances.
Download Darrow's AI policy template and customize it for your firm.
2. Appoint a data protection officer (DPO)
Law firms incorporating AI into their workflow should consider appointing a Data Protection Officer (DPO) to manage data privacy and compliance. A DPO's role includes ensuring that client information, regardless of its sensitivity, is handled in accordance with applicable laws and professional standards, and that the firm complies with the requisite privacy and data protection regulations. This is even more crucial when using AI tools that process large volumes of data, some of which may involve confidential or personal information.
A DPO helps establish clear data protection policies, oversees and advises on compliance with legal and regulatory requirements, educates employees on their respective roles in protecting personal information, and conducts regular audits to maintain standards. A DPO may also act as a liaison between their firm or organization and the data subjects and regulatory authorities, where relevant.
Keep in mind that a DPO must operate independently within the organization and report directly to the highest management level, which ensures impartiality and independence while underscoring the firm’s commitment to privacy compliance.
Having a DPO on board reduces the risk of breaches, may protect the firm from legal exposure, and reinforces client trust. It signals a commitment to confidentiality and professional ethics while allowing the firm to use AI responsibly and effectively.
3. Train staff on AI tools and risks
According to Forbes, new research from the National Cybersecurity Alliance finds that 55% of employees using AI at work have had no training on its risks, and 65% of those same employees expressed concern about AI-related cybercrime.
This lack of training poses significant challenges, especially for legal professionals, where improper use of AI can compromise client confidentiality, lead to inaccurate outputs, or even result in legal or ethical violations.
Creating an AI training program for your firm is one of the most effective ways to avoid these risks. Such a program not only educates staff on the capabilities and limitations of AI but also equips them to proactively identify and address potential risks, in addition to ensuring proper use of AI in practice.
Here are some tips for creating an effective AI training program:
- Focus on professional responsibility: Incorporate guidance on the ethical use of AI in legal practice, emphasizing attorneys’ ongoing duty to maintain competence, confidentiality, and transparency.
- Discourage overreliance on AI: AI is a tool to facilitate and assist employees, not to replace their own output. Reminding staff of AI’s fallibility, and reinforcing lawyers’ liability for their own work - regardless of their use of AI - is vital.
- Highlight bias mitigation: Train staff to recognize and mitigate biases in AI outputs, particularly when working on sensitive cases like discrimination or class actions.
- Review state-specific requirements: Include training on compliance with state-specific AI regulations and privacy laws, if applicable to your firm. Where relevant, this training should include the GDPR and the EU AI Act.
- Encourage continuing legal education: In addition to ensuring compliance with legislation already in place, lawyers should be attuned to upcoming regulatory changes, both domestically and internationally, to ensure future compliance and become better versed in AI’s treatment in the law.
- Use real-world scenarios: Simulate AI usage in various legal contexts. For example, present lawyers with an instance of AI providing a legally inaccurate output that may not be immediately obvious to a user. Such exercises teach participants to exercise diligence, validate the AI’s answers, and identify and address errors in future use.
- Integrate AI risk management practices: Include lessons on using anonymization and encryption when processing client data through AI systems, and make these practices mandatory firm-wide (a brief encryption sketch follows this list). You should also teach staff to evaluate AI tools for compliance with firm policies and security protocols.
- Measure effectiveness and adapt: Use assessments, feedback surveys, and performance tracking to evaluate the success of your training program. Adapt modules to reflect advancements in AI technology or changes in regulations.
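Relatedly, the encryption practice mentioned in the risk management tip above can be illustrated with a minimal Python sketch using the `cryptography` package's Fernet symmetric encryption. The memo content and the inline key handling are simplified assumptions for illustration only; in practice, keys belong in a managed key vault and handling should follow your firm's security protocols.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key once and keep it in a secure key store,
# never alongside the encrypted data. Shown inline purely for illustration.
key = Fernet.generate_key()
fernet = Fernet(key)

# Hypothetical client memo; in practice this would be read from a document store.
memo = b"Privileged and confidential: draft settlement terms for Matter 2024-17."

# Encrypt before storing, transferring, or routing data through any external system.
token = fernet.encrypt(memo)

# Decrypt only within an authorized, audited workflow.
assert fernet.decrypt(token) == memo
```

A sketch like this can anchor a hands-on training exercise, with staff walking through where keys live, who may decrypt, and which workflows require encrypted data.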
The following image is an excerpt from Gartner’s 2024 report, Generative AI Solutions in the Legal Marketplace, which outlines key questions to ask generative AI vendors to confirm their products address risks, ensure compliance, and promote the responsible use of AI technologies.

Understanding ethical and legal risks
For firms to embrace AI in a responsible way, they need systems and protocols that prioritize ethical use, data security and compliance. Innovative solutions, like those developed at Darrow, show how AI can support legal professionals while maintaining the highest ethical standards and protecting sensitive information.
We have built a data intelligence platform, consisting of large language models and machine learning algorithms, with strict safeguards in place. The platform analyzes and clusters publicly available data, allowing our Legal Intelligence team to connect the dots between common anomalies and identify potential legal violations.
Darrow’s protocol emphasizes data security, transparency, and compliance with ethical and regulatory standards for AI use. Sensitive client information is never inputted into our systems, and we enforce rigorous validation processes for AI-generated outputs. Data is also always anonymized and/or desensitized, where relevant. We maintain human oversight at every stage of the violation detection process to ensure our technology supports legal professionals without compromising ethical or legal standards.
This commitment enables us to use AI responsibly while helping our partners build stronger, evidence-backed cases.