The Journey of AI Model Claude into the U.S. Military

This article explores how the AI model Claude became integral to the U.S. military, highlighting its applications and ethical controversies.

Introduction

On February 27, 2026, U.S. President Trump signed an executive order requiring all federal agencies to immediately stop using Claude, the AI model developed by Anthropic. The Department of Defense labeled Anthropic a “security threat” and a “supply chain risk,” designations previously reserved for foreign adversaries.

The Initial Ban and Military Action

Just hours after the ban was issued, U.S. Central Command, in coordination with Israel, launched a major airstrike against Iran that targeted and killed Iranian Supreme Leader Khamenei. According to the Wall Street Journal (WSJ), Central Command relied on the Claude model throughout the operation for intelligence assessment, target identification, and battle-scenario simulation, all executed within a secure network.

This irony transformed Claude from a commercial AI model into a symbol of the militarization of artificial intelligence, reshaping the temporal and spatial dimensions of modern warfare. Once embedded in the kill chain, it became a digital nervous system linking aircraft, ships, and satellites, compressing intelligence analysis from days to seconds and shifting decision-making from human judgment to human-machine interaction.

Claude’s Integration into Military Operations

Claude’s integration into the Pentagon’s core operational chain was not a sudden development. Anthropic’s collaboration with the U.S. national security apparatus began well before the formal contract was signed. In 2024, the company had already made significant strides in defense AI, becoming the first to deploy advanced AI models in U.S. government secure networks and to customize AI models for national security clients.

By June 2024, Claude was providing ongoing services to U.S. military personnel and was being rapidly deployed across the Department of Defense and other national security agencies. In November of the same year, Anthropic announced a partnership with Palantir to integrate the Claude model into Palantir’s AIP platform, running on AWS GovCloud infrastructure to support complex data processing and analysis for intelligence and defense operations.

The integration allowed Claude to process vast amounts of unstructured battlefield data in real time. Operating within a physically isolated secure network, it received reconstructed intelligence streams from Palantir and output structured target-identification results, scenario-simulation predictions drawing on a 200K+ token context window, and multi-step operational-planning suggestions.

Contractual Developments

A milestone was reached in July 2025, when the Department of Defense signed a two-year Prototype Other Transaction Agreement (POTA) with Anthropic, with a $200 million ceiling. The agreement aimed to advance cutting-edge AI technology in defense applications, with Anthropic working directly with the Chief Digital and Artificial Intelligence Office (CDAO) and various commands to identify high-impact AI scenarios and develop fine-tuned prototype models based on proprietary Department of Defense data.

Simultaneously, the Department of Defense signed similar contracts with OpenAI, Google, and xAI, marking the formal entry of large language models into the U.S. military system.

Claude’s Unique Capabilities

Claude runs on Amazon Web Services’ Bedrock platform, authorized at FedRAMP High and for DoD Impact Level 4/5 (IL4/IL5) environments, and is capable of processing classified information up to the “secret” level. It is the only commercial large language model that both meets “frontier model” technical standards and has been deployed in the Department of Defense’s secure networks.

Claude’s applications span critical military tasks such as intelligence analysis, modeling and simulation, operational planning, and cyber operations. Unlike other models, it is deeply integrated into the Pentagon’s core data analysis systems through its partnership with Palantir, becoming a vital component of the military’s data processing and decision support.

Ethical Concerns and Tensions

As Claude’s integration deepened, tensions arose between Anthropic and the U.S. government. During contract negotiations, Anthropic rejected the Pentagon’s demand for unrestricted use of its technology in all legal scenarios, citing ethical concerns over AI applications, particularly autonomous weapons and mass surveillance. The company insisted on two red lines: Claude could not be used for mass surveillance on U.S. soil, nor for the development of fully autonomous weapon systems.

This dispute escalated into a high-level conflict involving national security and supply chain safety, culminating in a series of articles from major media outlets analyzing the standoff over military AI ethics.

The Executive Order and Continued Use

On February 27, 2026, Trump issued an executive order halting the use of Anthropic’s AI tools across federal agencies and designating the company a security threat. Yet just hours after the order, Central Command was still using Claude during a major airstrike against Iran, underscoring how deeply the model was embedded in military operations.

Anthropic quickly released a statement confirming Claude’s extensive deployment within the Department of Defense and other national security agencies, describing its roles in intelligence analysis, modeling and simulation, operational planning, and cyber operations as crucial to the military’s current operational framework.

Conclusion

The relationship between Anthropic and the Department of Defense has become one of the most significant cases in the history of military AI policy. It reveals the structural contradictions between commercial frontier AI development and the practical needs of U.S. national security, and it shows that Claude has evolved from an experimental tool into core infrastructure for the military’s intelligence and operational systems. Even in the face of a high-level executive order, its irreplaceability in real military operations was starkly demonstrated.
