
The real AI war Palantir is helping the U.S. military fight: Claude is just the underlying layer

The Maven system built by Palantir covers the entire chain from intelligence analysis and target identification to strike plan generation, with Claude acting as its "language engine": military operators can work through decisions, from drone reconnaissance to tank strike routes, simply by conversing with a chatbot. This deep experiment in the militarization of AI is being forced into the open by the legal conflict between Anthropic and the Pentagon.
The ongoing dispute between the Pentagon and Anthropic has drawn public attention to a previously rarely scrutinized area: how AI technology actually operates in U.S. military actions.
According to an article in Wired last week, Palantir is embedding AI chatbots into the core layers of the U.S. military's operational systems, with Anthropic's Claude being just one of the interchangeable underlying models.
Based on Palantir software demonstrations, public documents, and Pentagon records reviewed by Wired, Palantir has built an AI-assisted system covering intelligence analysis, target identification, strike plan generation, and operational route planning, and has deployed it across multiple levels of U.S. military command. Claude's role is as the "language engine" for the chatbot, rather than the system itself.
The exposure of this architecture coincides with the escalation of legal conflicts between Anthropic and the Trump administration. Anthropic filed two lawsuits this week, accusing the Pentagon of illegal retaliation by classifying its products as "supply chain risks." Meanwhile, Claude has been continuously used in several U.S. military overseas operations, including the conflict in Iran, and is reportedly playing a key role in the military operation that led to the arrest of Venezuelan President Nicolás Maduro.
Palantir's Military System Landscape
Palantir's partnership with the Pentagon dates back to 2017. Since that year, Palantir has been the main contractor for "Project Maven," which is the Department of Defense's core project specifically for deploying AI in warfare environments.
The core product developed by Palantir for this project is called the Maven Smart System (abbreviated as Maven), managed by the National Geospatial-Intelligence Agency. The Army, Air Force, Space Force, Navy, Marine Corps, and the U.S. Central Command responsible for military operations in Iran all have access to this system. Cameron Stanley, the Pentagon's Chief Digital and AI Officer, stated at a recent Palantir meeting that Maven is being deployed "across the entire department."
According to publicly available military assessment documents, Maven can apply "computer vision algorithms" to images captured by "space-based assets" such as satellites and automatically identify targets that may belong to "enemy systems." The system's built-in visualization tools can label potential strike targets and "nominate" them for ground or air bombing. A feature called "AI Asset Task Recommender" can suggest which bombers and munitions to allocate to which targets. Maven also facilitates the communication of "target intelligence data and enemy reports" among military officials.
Recent reports from The New York Times and The Washington Post indicate that Maven relies on Anthropic's AI technology. Since 2022, Palantir has also sold the U.S. Army another intelligence platform, the Army Intelligence Data Platform (AIDP). The platform integrates data from Maven and at least four other government systems, can prepare intelligence ahead of military operations and graphically present troop and weapon locations, and includes a tool called "Dossier" that generates continuously updated battlefield intelligence estimates. It is currently unclear whether Claude has been integrated into AIDP.
AIP: AI Interface Layer Embedded in Combat Systems
Palantir integrates Claude into military systems through its Artificial Intelligence Platform (AIP). AIP is not an independent platform but an application layer that runs on top of Palantir's existing commercial products (such as Foundry or Gotham), providing users with a chatbot interface for executing queries and tasks, which Palantir refers to as the "AIP Assistant" or "AIP Agent."
The AIP Assistant is powered by third-party large language models from companies like Anthropic, Google, and Meta, allowing customers to choose which model to use and which data sources the model draws on to generate responses. This design has special significance in intelligence and national security scenarios: classified intelligence data can be restricted as the model's exclusive data source.
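The pattern the article describes, a chat layer where the backing model is interchangeable and answers may only draw on an approved corpus, can be illustrated with a minimal sketch. Every class, function, and string below is hypothetical and for illustration only; none of it reflects Palantir's actual APIs or internals.

```python
# Hypothetical sketch of an "assistant" layer with a swappable LLM backend
# whose responses may only draw on an approved, access-controlled corpus.
# All names here are illustrative, not real Palantir or Anthropic APIs.

from dataclasses import dataclass
from typing import Callable, Dict, List

# A "model" is just a function from (prompt, retrieved context) to text.
ModelFn = Callable[[str, List[str]], str]

@dataclass
class Assistant:
    models: Dict[str, ModelFn]   # e.g. "claude", "gpt", "llama" backends
    corpus: Dict[str, str]       # doc_id -> text; the ONLY permitted data source
    active_model: str = "claude"

    def retrieve(self, query: str) -> List[str]:
        # Naive keyword retrieval confined to the approved corpus;
        # the model never sees data outside this dictionary.
        return [text for text in self.corpus.values()
                if any(word in text.lower() for word in query.lower().split())]

    def ask(self, prompt: str) -> str:
        context = self.retrieve(prompt)
        return self.models[self.active_model](prompt, context)

# Stub functions standing in for third-party LLM backends.
def claude_stub(prompt: str, ctx: List[str]) -> str:
    return f"[claude] {len(ctx)} source(s): {prompt}"

def llama_stub(prompt: str, ctx: List[str]) -> str:
    return f"[llama] {len(ctx)} source(s): {prompt}"

assistant = Assistant(
    models={"claude": claude_stub, "llama": llama_stub},
    corpus={"rpt1": "Satellite imagery report, eastern sector."},
)
print(assistant.ask("summarize satellite imagery"))  # answered by the claude stub
assistant.active_model = "llama"                     # swap the model; corpus unchanged
print(assistant.ask("summarize satellite imagery"))
```

The point of the sketch is the separation of concerns: swapping `active_model` changes only the language layer, while the retrieval boundary, the approved corpus, stays fixed regardless of which vendor's model answers.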
A demonstration video released by Palantir in 2023 showcased how the AIP Assistant assists a "military operator responsible for monitoring activities in Eastern Europe" in planning and issuing ground attack commands against several tanks. The entire process is completed through dialogue with the chatbot: the system first issues an automatic alert about "potential abnormal enemy activity," then the analyst requests reconnaissance from an MQ-9 Reaper drone through conversation, followed by asking the AIP Assistant to "generate three action plans to strike the enemy equipment." The Assistant provides three options within seconds: air assets, long-range artillery, or tactical teams.
Subsequently, the analyst requests the Assistant to "analyze the battlefield," "generate routes" for troops to reach enemy positions, and "allocate jammers" to disrupt enemy communication devices. After final review, the analyst orders the troops to mobilize. In this scenario, Claude acts as the "language layer" of the AIP Assistant, responsible for understanding commands and generating responses.
Another demonstration involving NATO showcased similar logic: the analyst views troop and weapon positions on a digital map, and after clicking a button, a tool powered by GPT-4.1 generates five possible military strategies, one of which is named "Support Fire - Then Penetrate - Shock and Destroy." The demonstration also showed that the analyst could select different AI models in the interface, with Claude listed alongside ChatGPT and Meta's Llama as options.
Claude's Role: Intelligence Generation and Analysis
In addition to real-time combat assistance, Claude is also used to generate intelligence assessment reports. Reportedly, in June 2025, Kunaal Sharma, Anthropic's head of public sector, gave a demonstration showing how the enterprise version of Claude generates a "high-level" analysis report on Ukraine's drone strike campaign, "Operation Spider Web."
In this demonstration, Sharma asked Claude to create an "interactive dashboard" containing operational information and convert it into "object types" that can be analyzed on Palantir's Foundry platform, while also drafting a detailed analysis of recent developments in Russian border provinces and a 200-word summary of the operation's "military and political impact." Sharma stated in the presentation that such reports typically take hours to complete manually, while Claude can generate them in a very short time. He added that through collaboration with Palantir, the federal government can also access internal datasets beyond public information.
When Palantir announced its partnership with Anthropic in the military and intelligence sectors in November 2024, it stated that the integration of Claude helps analysts discover "data-driven insights," identify patterns, and support decision-making in "time-sensitive situations."
The exposure of the aforementioned system occurred against the backdrop of a sharp deterioration in relations between Anthropic and the Pentagon. In late February of this year, Anthropic refused to provide the government with unconditional access to the Claude model, insisting that its system should not be used for mass surveillance of American citizens or fully autonomous weapons. The Pentagon subsequently classified Anthropic's products as "supply chain risks," and Anthropic filed two lawsuits this week, alleging that the actions of the Trump administration constituted illegal retaliation and seeking to overturn that designation.
This dispute draws attention to a core issue: how AI technology that is deeply embedded in combat systems will be constrained when AI model developers and the military disagree over usage boundaries. Currently, reports indicate that Claude continues to be used in some U.S. defense operations, including those related to the Iran conflict.
