xAI has asked a federal court to stop Colorado from putting into effect a newly enacted law that targets certain artificial intelligence systems, escalating a legal contest over whether oversight of advanced AI should be controlled by state legislatures or centralized in Washington.
The lawsuit, filed in U.S. District Court in Colorado, challenges Senate Bill 24-205, which is set to take effect on June 30. Under the statute, developers of so-called "high-risk" AI systems would face disclosure obligations and mandated risk-mitigation processes when their systems are used to make decisions in employment, housing, education, health care and financial services.
In its court filing, xAI, the artificial intelligence company founded by Elon Musk that recently completed a merger with SpaceX, argues the Colorado law infringes its First Amendment rights. The company says the statute unlawfully restricts how developers may design AI systems and compels speech on divisive public topics. xAI further asserts the law would force alterations to its flagship model, Grok, to mirror the state's views on issues such as diversity and discrimination rather than allowing the system to remain objective.
"Government regulation that is applied at the state level in a patchwork across the country can have the effect to hamper innovation and deter competition in an open market," xAI said.
As relief, xAI is asking the court to declare the Colorado statute unconstitutional and to issue an injunction barring its enforcement. The lawsuit also references White House executive orders that criticize a state-by-state approach to AI oversight, and cites federal warnings that a patchwork of differing state laws could undercut U.S. leadership in artificial intelligence and pose risks to national security.
The Colorado Attorney General's Office declined to comment on the litigation.
The filing comes amid competing views among policymakers about where responsibility for AI regulation should sit. Some technology firms and Republican lawmakers favor leaving regulatory responsibility to the federal government. California's attorney general, however, has cautioned against relying exclusively on Congress to act, pointing to its prolonged delays in passing data privacy and technology legislation in the past.
Notably, advisers to President Donald Trump have expressed a preference for federal oversight that would establish a streamlined national framework for AI regulation rather than permitting a patchwork of state-level rules.
Implications and context
The litigation highlights an accelerating tension between state-level regulatory initiatives and calls for a uniform federal approach. The outcome could affect AI developers that deploy systems in sectors where automated decision-making can materially affect individuals, including employment, housing, education, health care and financial services, and could shape how companies design, document and mitigate risks in their models going forward.