The 2024 presidential election marked a pivotal moment for US policy on artificial intelligence (AI). President Trump assumed office in January 2025 and quickly repealed Biden's Executive Order 14110, The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which was signed in late 2023.
Order 14110 emphasized safety, ethics, and accountability in the use of AI. Trump’s decision to revoke it signals a stark difference in how the two presidents view the government’s role in regulating AI, a difference likely to have a significant impact on the legal issues surrounding AI for years to come.
Biden’s vision: more government oversight
In October 2022, the White House introduced the Blueprint for an AI Bill of Rights, outlining five guiding principles to ensure the ethical development and deployment of AI. The principles emphasized fairness, privacy, and transparency, with the aim of protecting civil rights and individual freedoms in the face of rapidly advancing AI technologies.
A year later, President Biden built upon this foundation and signed Executive Order 14110 on October 30, 2023, titled The Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This order translated the Blueprint's principles into actionable federal policies, reflecting the administration’s commitment to balancing technological innovation with public accountability.
It mandated that AI developers, particularly those dealing with national security, public health, and the economy, share safety testing results with the federal government to mitigate risks prior to deployment. The order also established ethical standards to ensure that AI systems operate without bias or discrimination. Another key point was the importance of collaboration among federal agencies to study AI's impact on cybersecurity, education, and the workforce to establish a coordinated approach to managing AI integration into daily life.
Trump’s shift: deregulation and market-driven AI innovation
Trump revoked Biden’s AI executive order shortly after taking office and, on January 23, 2025, signed a new executive order on AI, Removing Barriers to American Leadership in Artificial Intelligence, in its place. The Trump administration claimed Biden's order "hampered the private sector’s ability to innovate in AI by imposing government control over AI development and deployment."
The new order states its purpose as the following:
"The United States has long been at the forefront of artificial intelligence (AI) innovation, driven by the strength of our free markets, world-class research institutions, and entrepreneurial spirit. To maintain this leadership, we must develop AI systems that are free from ideological bias or engineered social agendas. With the right Government policies, we can solidify our position as the global leader in AI and secure a brighter future for all Americans. This order revokes certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence."
While the order is light on specifics, it establishes a 180-day deadline for an Artificial Intelligence Action Plan to "sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security."
So, we can expect to hear more from the Trump administration on its stance on AI in the next few months.
One key figure to watch in this unfolding policy shift is Elon Musk. Although Musk has not yet had a formal role in shaping Trump’s AI agenda, his appointment to the Department of Government Efficiency (DOGE) and his past advocacy for responsible AI suggest that he could play a role in future discussions. Musk has long warned about the dangers of AI but has also been critical of what he considers excessive regulation and bias in AI models. His own AI company, xAI, recently launched Grok, a chatbot designed to counteract what he calls “woke” AI. How Musk’s views, both his concerns about AI risks and his push for fewer guardrails, will influence Trump’s policy remains to be seen.
The decision to revoke Biden’s order reflects Trump’s emphasis on market-driven solutions to address the risks of AI without extensive federal oversight. His repeal aligns with the belief that reducing government restrictions will allow the private sector to freely innovate and enable businesses to scale at a faster rate.
Shared goals amid contrasting strategies
Despite their differences, Biden and Trump do share some common ground.
Both recognize AI’s critical role in maintaining US leadership on the global stage and its importance in driving economic and technological advancement. They also both believe in AI’s potential to strengthen national security, particularly by modernizing defense capabilities, and agree on the need to allocate more federal land for AI infrastructure and data centers.
On January 16, just four days before leaving office, Biden signed a 40-page executive order aimed at bolstering government oversight of AI and addressing certain cybersecurity challenges. One notable provision called for identifying and designating federal land to house AI infrastructure and data centers, a measure designed to lower costs and accelerate the deployment of these facilities.
While Trump repealed much of this last-minute order, he did retain and expand this particular provision. Trump’s new policy not only preserves the focus on federal land for data centers but also introduces expedited approvals for companies seeking to develop such facilities.
Given the immense computational power required for AI research and deployment, access to strategically located data centers has become a bipartisan national priority.
Stargate: A bold step toward AI leadership
The Stargate initiative is a cornerstone of the Trump administration’s AI policy. While the project is a private venture led by global tech companies OpenAI, SoftBank, Oracle, and MGX, the President formally announced it as a flagship initiative on January 21, 2025.
“This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world,” Trump’s statement read.
When asked about Stargate, Trump responded: "This monumental undertaking is a resounding declaration of confidence in America's potential under a new president...we want to keep it in this country. China is a competitor and others are competitors…we have to get this stuff built."
Stargate involves a private $500 billion investment towards building state-of-the-art AI infrastructure, including advanced data centers designed to enhance computational power and support large-scale AI research. The program has ambitious goals, including the creation of over 100,000 jobs within the next four years, targeting roles in technology, engineering, and operations.
While Stargate represents a significant opportunity for innovation and job creation, it also raises questions about how such large-scale initiatives will balance economic objectives with ethical and societal considerations. As the US moves forward with projects like Stargate, these dynamics will likely shape the broader conversation about AI governance and its role in the country’s future.
Effects on the legal tech industry
The repeal of Biden’s order raises questions about how the US should approach AI’s growing challenges. Supporters of Trump’s stance argue that deregulation gives businesses the freedom to innovate more rapidly, driving progress that benefits society. Critics, however, warn that without proper safeguards, the risks of AI could outweigh its advantages.
Like other industries, the legal tech sector will have to live with some uncertainty around federal AI policy for now. While Biden’s order did lead to federal initiatives promoting AI safety and accountability, the broader development of AI remained market-driven, with innovation, risk management, and ethical oversight left to private companies. Trump's repeal of Biden’s order reinforces this, removing federal oversight mechanisms and allowing AI developers to continue without government-imposed safety reporting requirements.
Despite these shifts, legal tech is growing at a remarkable pace. The market is projected to reach $32.54 billion by 2026, and AI adoption in law firms has surged from 19% in 2023 to 79% in 2024. Law firms are increasingly using AI for contract analysis, legal research, document automation, and predictive analytics, among other areas, reshaping how attorneys work.
With no immediate federal oversight on the horizon, courts will play a central role in determining how AI can and should be used in legal settings. This means that case law, rather than legislation, will likely define the future of AI in the legal industry. Until then, legal tech companies and law firms will need to remain adaptable, finding ways to integrate AI while addressing ethical and legal considerations.