US Appeals Court Denies Anthropic's Bid to Halt Pentagon Blacklisting as Supply Chain Risk

In a significant legal setback for the artificial intelligence sector, Anthropic has lost a court bid to temporarily prevent the Pentagon from blacklisting the company and labeling it a "supply chain risk." A federal appeals court in Washington, DC, has denied Anthropic's request for a stay, allowing the Department of Defense's designation to proceed while litigation continues.

Court Ruling and Rationale

The ruling comes after a judge in a San Francisco federal court, in a separate but related case, granted Anthropic a preliminary injunction last month barring the Trump administration from enforcing a ban on the use of its Claude AI model. The appeals court in Washington, DC, however, took a different stance, finding that the balance of equities favored the government.

In its decision, the court stated, "In our view, the equitable balance here cuts in favor of the government. On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict. For that reason, we deny Anthropic's motion for a stay pending review on the merits."


The court acknowledged that Anthropic "will likely suffer some degree of irreparable harm absent a stay," but noted that the company's interests "seem primarily financial in nature." While Anthropic argued that the DOD's actions violated its right to free speech, the court found that the company had not demonstrated its speech was chilled during the litigation. Given the potential harm, however, the court ordered "substantial expedition" of the review process.

Background and Implications

The Department of Defense declared Anthropic a supply chain risk in early March, asserting that use of the company's technology poses a threat to U.S. national security. This designation requires defense contractors to certify that they do not use Anthropic's Claude AI models in their work with the military.

In its appeal, Anthropic contended that the blacklisting is unconstitutional, arbitrary, capricious, and not in accordance with legal procedures, framing it as a form of retaliation. With the two courts' split decisions, Anthropic is currently excluded from DOD contracts but can continue working with other government agencies as the litigation unfolds. Defense contractors are likewise prohibited from using Claude in military-related projects but may employ it for other purposes.

Anthropic's Response

Following the ruling, an Anthropic spokesperson expressed gratitude that the court recognized the need for a swift resolution, stating, "We are confident the courts will ultimately agree that these supply chain designations were unlawful." The company added, "While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI."

This case highlights the growing tensions between AI innovation and national security concerns, with potential ramifications for the tech industry and defense procurement processes.
