US Drafts New AI Rules for Federal Contracts, Mandates Government Access

US Government Prepares Sweeping New Rules for AI Companies in Federal Contracts

The United States government is reportedly drafting comprehensive new regulations for artificial intelligence (AI) companies seeking federal contracts. The proposed guidelines emerge against the backdrop of a significant public dispute between the Pentagon and Anthropic, the creator of the Claude AI model, over the potential military applications of its technology.

Broad Government Access Mandated for AI Models

Under the forthcoming rules, AI firms will be required to grant the US government an irrevocable and broad license to utilize their AI systems for "any lawful" purpose as a prerequisite for securing federal contracts. According to a draft of the guidance obtained by the Financial Times, the US General Services Administration (GSA) intends to impose this requirement on AI companies collaborating with civilian agencies. This initiative is a key component of a larger governmental effort to tighten and standardize procurement standards for AI services across federal departments.

The report further indicates, citing an anonymous source familiar with the internal discussions, that the Pentagon is considering adopting similar principles for its own military procurement contracts. The policy gained public attention after the Department of Defense cancelled a $200 million contract with Anthropic. The cancellation came after the company refused to provide unrestricted access to its AI technology, citing ethical concerns about potential misuse for domestic surveillance and the development of lethal autonomous weapons systems.

Anthropic Designated a National Security Risk

The Pentagon subsequently designated Anthropic a supply-chain risk, a classification typically reserved for companies with ties to nations such as China or Russia. The move set a historic precedent: Anthropic became the first American company to receive an official 'national security risk' label from the US government. Anthropic, a prominent AI startup valued at an estimated $380 billion, had argued that handing over its technology for "all lawful use" without stringent safeguards could enable domestic surveillance overreach. Defense Secretary Pete Hegseth countered that the company's "true objective" was to "seize veto power over the operational decisions of the United States military."

Additional Mandates in the Draft Guidance

The proposed GSA guidelines include several other mandates for AI companies seeking to become US government contractors:

  • Neutral and Non-Partisan AI Tools: Contractors must provide AI systems that are "a neutral, non-partisan tool that does not manipulate responses in favour of ideological dogmas such as diversity, equity and inclusion." This provision aligns with an executive order from former US President Donald Trump targeting what he described as "woke" AI models. The draft explicitly states, "The contractor must not intentionally encode partisan or ideological judgments into the AI systems' data outputs."
  • Disclosure of Foreign Compliance: Another clause requires AI companies to disclose whether their models have been "modified or configured to comply with any non-US federal government or commercial compliance or regulatory framework," such as the European Union's Digital Services Act. This is intended to provide transparency regarding potential foreign influence on AI systems used by the US government.

GSA's Role and Industry Impact

The GSA, led by Ed Forst, is the primary agency responsible for software and technology procurement across the US federal government. Its subsidiary, the Federal Acquisition Service, led by former KKR director Josh Gruenbaum, has recently secured agreements with several major AI firms, including OpenAI, Meta, xAI, and Google, to supply their AI models to US agencies at reduced cost.

Following the high-profile dispute with the Pentagon, the GSA terminated its existing agreement with Anthropic. The agency has announced plans to "solicit further comments" from industry stakeholders before finalizing the new guidelines, allowing for review and potential revision based on feedback from the AI sector.