According to Summize's Legal Disruptors 2025 Report, 89% of in-house legal teams now use AI tools, yet 53% lack a formal AI policy.
As artificial intelligence (AI) becomes more prevalent in the legal field, many attorneys are still working out how to use it responsibly. And while competent use of AI does not require attorneys to become technology experts, firms are responsible for understanding its capabilities, risks, and limitations in order to meet their ethical and legal obligations.
Risks of AI in law: why your firm should have an AI policy

The adoption rate of legal AI tools jumped from 19% in 2023 to 79% in 2024. There’s no doubt this technology is evolving fast, but attorneys must be aware of the associated risks. Implementing an AI policy in your law firm is a strategic way to help mitigate the following:
- Hallucinations: Hallucinations occur when AI generates information that appears accurate but is actually false. This information might look convincing but can include incorrect data, like non-existent case law or misinterpreted statutes.
- Bias: This refers to systematic errors in AI systems that result in unfair or inaccurate outcomes, often due to flawed data or algorithm design. In the legal field, AI bias can lead to skewed case assessments, misidentification of relevant precedents, or unequal treatment of clients.
- Lack of transparency: It can be difficult to fully understand how AI systems generate their outputs. In the legal field, this makes it harder to assess the reliability and fairness of AI-generated output, which is why it’s crucial that attorneys double-check all AI outputs.
- Data privacy concerns: AI systems require access to large datasets, which may include sensitive client information. In the legal field, improper handling of this data can compromise client confidentiality and violate privacy laws and regulations.
Did You Know?
In June 2023, a federal judge fined two lawyers $5,000 after they submitted a legal brief drafted with ChatGPT that cited several nonexistent cases.
AI policies and regulations

There are currently no comprehensive legal policies or regulations pertaining to the use of AI in law.
The American Bar Association (ABA) did, however, publish its first formal ethics opinion on generative AI in July 2024. While the opinion doesn’t provide explicit rules, it emphasizes that lawyers must fully consider their ethical obligations when using AI, including maintaining competence, safeguarding client confidentiality, ensuring effective communication, and adhering to fair and reasonable fees.
At the state level, regulations are developing. For example, Illinois implemented the Illinois Supreme Court Policy on Artificial Intelligence on January 1, 2025. This policy allows judges and lawyers to use AI tools as long as they uphold due process, equal protection, and access to justice. It stresses the importance of reviewing all AI-generated content and protecting sensitive information.
Some states now require attorneys to disclose their use of AI in legal proceedings, and courts have sanctioned lawyers for submitting AI-generated filings that contained errors or fabricated citations.
However, federal regulation of AI remains fluid. During his first week in office, President Trump revoked two key policies implemented during President Biden’s administration: the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and the Blueprint for an AI Bill of Rights. Trump’s revocations reflect differing views on government oversight, with an emphasis on allowing the private sector to innovate with fewer restrictions.
With new legal AI tools in constant development and a lack of clear governmental oversight, law firms should establish their own AI policies to ensure ethical use, maintain compliance with evolving regulations, and protect client interests.
Law firm AI policy template for plaintiffs’ attorneys
This customizable template was developed with the ABA’s formal ethics opinion in mind. It’s designed to help plaintiffs’ attorneys integrate AI into their practices while maintaining compliance with ethical obligations and delivering exceptional service to their clients. Feel free to adapt it as you see fit.
The full template is provided below, with each section offering guidance on what to include, along with sample text for reference.
Download Darrow’s AI policy template.
[Law Firm Name]’s Policy on AI Use
1. Purpose:
This section should define the overall goal of the policy. Emphasize the firm’s commitment to using AI responsibly and its mission of advocating for plaintiffs' rights while complying with ethical standards and legal regulations. Include a statement on how AI will support, not replace, legal professionals.
“This policy outlines the ethical and responsible use of artificial intelligence (AI) within the firm, ensuring adherence to the American Bar Association (ABA) Model Rules of Professional Conduct and applicable state regulations. It aims to ensure AI is used to enhance legal services while maintaining professional integrity and protecting client interests.”
2. Competence and professional responsibility:
Describe the firm’s expectations for attorneys’ knowledge and understanding of AI tools, emphasizing the need for ongoing education and professional development. Provide examples of AI-related skills attorneys should develop and how they can do so.
“Attorneys must maintain competence in their legal practice, including understanding the capabilities, limitations, and risks of AI tools used in their work. Continuous education on AI advancements, emerging tools, and ethical considerations is required to ensure attorneys provide effective and informed representation. Attorneys are encouraged to participate in AI-focused legal training, workshops, and professional development programs.”
3. Use of AI in legal practice:
Explain the specific legal tasks where AI is permitted and the importance of maintaining human oversight. Include examples of how AI can support case analysis, research, and drafting.
“Attorneys may use AI tools for legal research, document drafting, case analysis, discovery, and to analyze data for patterns that support claims of negligence or misconduct, provided they verify the accuracy and reliability of AI-generated outputs. AI should serve as a tool to support, not replace, human judgment. Attorneys must ensure that AI-generated content aligns with legal standards and ethical guidelines. Disclosure of AI use to clients is not required unless specifically requested. However, attorneys should be transparent about AI usage when relevant to the client’s case.”
4. Confidentiality and data security:
Outline measures to protect client confidentiality when using AI tools, including guidelines for handling sensitive information and using secure platforms.
“To protect client confidentiality, attorneys must not input private or confidential client information, including client intake forms and medical records, into AI systems unless the platform is approved for handling protected health information or personally identifiable information. AI platforms must meet industry standards for data privacy and security, with encryption and access controls in place. The firm will regularly review and update its data security protocols to mitigate risks associated with AI usage.”
5. Verification of AI outputs:
Explain the importance of verifying AI-generated content and outline the process attorneys should follow to ensure accuracy and compliance.
“Attorneys are responsible for independently verifying AI-generated information, including damage calculations, case valuations, and plaintiff eligibility criteria, to ensure that claims are accurately assessed and ethically pursued. Any legal arguments, citations, or references produced by AI must be thoroughly reviewed to ensure accuracy and relevance. AI should be used as a tool to enhance efficiency and accuracy, but attorneys retain full accountability for the final work product.”
6. Compliance with state regulations:
If your state has state-specific regulations pertaining to the use of AI in legal practice, mention the importance of adhering to them. Detail the need for compliance with both ABA guidelines and state-specific regulations, including how the firm will monitor changes in laws and ethical rules.
“Attorneys must stay up to date with state regulations regarding AI use in the legal field, including rules from state bar associations and courts. Any new regulations or amendments related to AI use must be incorporated into the firm’s policy, ensuring ongoing compliance.”
7. Monitoring and continuous improvement:
Define how the firm will monitor the effectiveness of AI tools and ensure ongoing compliance with ethical standards. Include processes for gathering feedback and making improvements.
“The firm will review its AI tools [monthly/quarterly/annually] to ensure they meet evolving legal standards and best practices. Attorneys and staff are encouraged to provide feedback on the use of AI tools, including how AI impacts client outcomes and case success rates. This feedback will be used to refine and improve the firm’s AI practices. Any issues or concerns related to AI use must be reported to the designated compliance officer or committee.”
8. Acknowledgment and signing:
All attorneys and staff members must review this AI policy and acknowledge their understanding and agreement to comply with its guidelines. By signing this document, employees confirm that they will use AI tools responsibly, verify AI-generated outputs, maintain client confidentiality, and adhere to all applicable ethical and legal regulations. A signed copy of this acknowledgment will be kept on file with the firm’s compliance officer.
Acknowledgment Signature: _______________________________
Printed Name: _______________________________
Position: _______________________________
Date: _______________________________
4 AI policy examples
For more guidance when drafting your firm’s AI policy, check out the following four AI policy examples from other organizations:
- Fisher Phillips LLP’s Acceptable Use of Generative AI Tools [Sample Policy]
- American Inns of Court’s policy template regarding generative AI use
- Trust Community’s Artificial Intelligence Usage Policy Template
- Workable’s AI Tool Usage Policy
Ensuring ethical use of AI at Darrow
This focus on responsible AI use isn’t limited to law firms; it’s just as important for the technology providers supporting them.
We take this seriously at Darrow.
Our Legal Intelligence platform is made up of large language models and machine learning algorithms with strict safeguards in place. The platform analyzes and clusters publicly available data, allowing our Legal Intelligence team to connect the dots between common anomalies and identify potential legal violations.
Darrow’s protocol emphasizes data security, transparency, and compliance with ethical and regulatory standards for AI use. Sensitive client information is never entered into our systems, and we enforce rigorous validation processes for AI-generated outputs. Data is also anonymized and/or desensitized where relevant. We maintain human oversight at every stage of the violation detection process to ensure our technology supports legal professionals without compromising ethical or legal standards.
To learn more about the ethical and legal issues of using AI in law, check out our blog post: Exploring the Legal & Ethical Issues of AI in Law.