Former Trump AI Advisor Slams White House Move as "Attempted Corporate Murder" Against Tech Giants
Dean Ball, a former artificial intelligence advisor to President Donald Trump, has issued a stark warning that the Trump administration's recent decision to designate AI startup Anthropic as a "supply chain risk" could have devastating consequences for some of America's largest technology corporations. Ball specifically named Google, Amazon, and Microsoft as companies whose billions of dollars in Anthropic investments and partnerships now face unprecedented jeopardy.
Defense Order Forces Tech Titans to Potentially Divest Billions
The controversy stems from an order by Defense Secretary Pete Hegseth prohibiting any military contractor or supplier from doing business with Anthropic. Ball denounced the move as nothing short of "attempted corporate murder," arguing that if Hegseth's directive is fully enforced, it would effectively compel Google, Amazon, and even chipmaker Nvidia to divest their substantial holdings in the AI firm.
Amazon has committed a staggering $8 billion to Anthropic, while Google has invested approximately $2 billion. Although Microsoft is not a direct investor, the company relies heavily on Anthropic's advanced AI models through its Azure cloud platform. A forced divestiture or sudden business cutoff from any of these technology behemoths would send powerful shockwaves throughout the entire artificial intelligence investment landscape. This comes at a critical moment when hundreds of billions of dollars are flowing into the AI sector globally, making the potential disruption even more significant.
"Supply Chain Risk" Designation Historically Reserved for Foreign Adversaries
The "supply chain risk" designation under 10 U.S.C. § 3252 has traditionally been applied to foreign companies deemed genuine national security threats, such as China's telecommunications giant Huawei. Anthropic presents a stark contrast: it was the first frontier AI company to deploy its models on classified government networks, back in June 2024, and its artificial intelligence systems are actively used by key agencies including the Central Intelligence Agency (CIA) and the National Security Agency (NSA), and across the Department of Defense, for vital intelligence analysis and operational planning.
Alan Rozenshtein, a law professor at the University of Minnesota, emphasized that this legal label "clearly was not designed for an American company that has a contract dispute with the government." In response to the designation, Anthropic has announced its intention to challenge the decision in federal court, setting the stage for a major legal battle over the interpretation and application of national security statutes.
Chilling Effect on US AI Investment and Innovation
Ball extended his critique beyond the immediate corporate fallout, arguing that this precedent makes it nearly impossible to recommend starting or investing in an American artificial intelligence company. If the federal government can arbitrarily apply a national security label to a domestic firm over a contractual disagreement, the entire risk calculus changes dramatically for every venture capitalist, private equity firm, and sovereign wealth fund currently eyeing the lucrative United States AI market.
The timing of this development exacerbates concerns within the investment community. On the very same day, rival AI firm OpenAI announced a monumental $110 billion funding round, with Amazon contributing an enormous $50 billion investment. This creates a peculiar and contradictory dynamic where Amazon is simultaneously backing OpenAI while potentially being forced to sever its multibillion-dollar ties with Anthropic due to government intervention.
Broader Implications for National Security and Corporate Governance
For the moment, Anthropic maintains that its commercial customers and API users remain unaffected by the Defense Department's order. The more profound question, however, is whether Washington policymakers can weaponize national security designations against American companies that push back against government demands or engage in contractual disputes.
This fundamental issue regarding the balance between national security imperatives and corporate autonomy will likely require resolution by the federal judiciary. The outcome could establish a landmark precedent affecting not only the artificial intelligence industry but potentially any technology sector deemed critical to United States interests. As billions in investments hang in the balance, the technology and financial communities are watching closely to see whether this represents an isolated incident or a new regulatory approach with far-reaching consequences for innovation and economic competitiveness.
