Palantir Technologies (NASDAQ: PLTR) has spent two decades building data integration and operational intelligence platforms for governments and enterprises. But lately, one name keeps surfacing in every Palantir conversation: Anthropic, and its AI model Claude. Since the official partnership announcement in November 2024, a pointed question has been circulating in tech and defense circles — without Anthropic, is Palantir just a legacy data pipeline company dressed up in AI marketing? This piece digs into the numbers and the architecture to give a straight answer.

Key figures at a glance:
| Metric | Figure |
|---|---|
| Palantir FY2025 Revenue | $4.47B (+56% YoY) |
| U.S. Commercial Growth (Q4 2025) | +137% YoY |
| Anthropic ARR (Early 2026) | $30B+ |
| DoD–Anthropic Contract (July 2025) | $200M |
| Palantir’s Classified Security Clearance | IL6 (highest tier) |
1. How Did This Partnership Start?
The story starts in April 2023, when Palantir launched AIP (Artificial Intelligence Platform) with a clear design philosophy: no single LLM dependency. From day one, AIP was built to support multiple models — GPT, Claude, Llama, Gemini — through a unified interface. Despite that architecture, the Anthropic partnership became its own story. The reason: Claude is the only frontier AI model currently deployed on classified Pentagon networks, accessed via Palantir’s AI Platform.
Key partnership milestones:
- April 2023 — Palantir launches AIP. Architecture built from the start to support multiple LLMs on private, air-gapped networks
- November 7, 2024 — Official three-way partnership announced: Palantir + Anthropic + AWS. Claude 3 and 3.5 deployed on Palantir’s DISA IL6-accredited classified environment. Palantir and AWS are among a limited number of companies to receive this accreditation, which requires some of the strictest security protocols in existence
- April 2025 — Anthropic joins Palantir’s FedStart program, deploying Claude to federal workers via Google Cloud FedRAMP High and IL5 infrastructure. Google Public Sector CEO Karen Dahut participated in the announcement
- June 2025 — Anthropic introduces Claude Gov, a government-focused version of the model designed specifically for national security agencies, deployed across classified networks via Palantir
- July 2025 — The Department of Defense awards contracts worth up to $200 million each to four frontier AI developers: Anthropic, OpenAI, Google, and xAI. Only Claude, however, operates on classified Pentagon networks at this point
- March 5, 2026 — Reports confirm that Claude, integrated into Palantir’s Maven Smart System, was used to identify approximately 1,000 prioritized military targets during U.S.-Israeli strikes on Iranian facilities — despite a presidential executive order having been issued days earlier
- March 26, 2026 — A federal court stays the supply chain risk designation against Anthropic. Palantir CEO Alex Karp confirms Claude remains integrated into Palantir’s tools but signals plans to add other LLMs going forward
When the partnership launched in 2024, Palantir CTO Shyam Sankar shared a real-world example: a major U.S. insurer deployed 78 AI agents powered by AIP and Claude, compressing a two-week underwriting process down to three hours. That same template, he argued, was being applied to the most time-sensitive government and defense workflows.
2. Palantir’s Revenue Mix — Where Does the Money Come From?
Palantir’s Q4 2025 revenue surged 70% year-over-year, the highest growth rate since going public. Full-year 2025 revenue hit $4.475 billion, with U.S. commercial revenue growing 109% for the year and 137% in Q4 alone.

FY2025 Revenue by Segment:
| Segment | Revenue | YoY Growth |
|---|---|---|
| U.S. Government | $1.86B | +55% |
| U.S. Commercial | $1.47B | +109% |
| International (Gov + Commercial) | $1.15B | +10% |
The U.S. commercial explosion is the headline. International growth at +10% makes clear that AIP adoption is still overwhelmingly a domestic American story — the export wave hasn’t arrived yet.
Rule of 40 score of 127%, Net Dollar Retention at 139%, and Ontology-based lock-in with extreme switching costs — these are the three pillars behind the bull case. The government business alone carries a compound moat: 20+ years of accumulated trust, security clearance infrastructure, and battlefield data expertise that simply cannot be replicated quickly.
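For readers unfamiliar with the metric, the Rule of 40 is simple arithmetic: revenue growth rate plus profit margin, with "healthy" conventionally meaning 40 or above. A minimal sketch, using illustrative numbers rather than Palantir's actual reported inputs (SaaS convention varies on whether the margin term is adjusted operating margin or free cash flow margin):

```python
def rule_of_40(revenue_growth_pct: float, margin_pct: float) -> float:
    """Rule of 40 score: YoY revenue growth plus profit margin.

    Which margin to use varies by convention (adjusted operating
    margin vs. free cash flow margin); the inputs below are
    illustrative, not Palantir's reported figures.
    """
    return revenue_growth_pct + margin_pct

# Hypothetical example: 70% growth paired with a 57% margin
# would produce a score in the range cited above.
print(rule_of_40(70.0, 57.0))  # → 127.0
```

Any score above 100 puts a company in rare territory: most software businesses trade growth against margin, and exceeding the threshold on both simultaneously is what anchors the bull case.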
3. How AIP Actually Works — Where Does the LLM Sit?
The most common misread of Palantir is treating it as “an LLM company.” The architecture tells a different story.
The Ontology — Palantir’s Real Moat
Palantir’s competitive core is its Ontology layer — a semantic model that transforms raw enterprise data into real-world objects: assets, transactions, personnel, processes. Once a customer builds out their Ontology, migrating it to another platform is extraordinarily difficult; the switching pain has been compared to changing banks, only far worse. Palantir’s data pipelines are also built for real-time data, whereas most competitors rely on batch updates at fixed intervals.
Platform switching costs run between $2.5M and $7.5M per enterprise client, with implementation periods of 6–9 months. Morningstar assigns Palantir a Narrow Economic Moat on the basis of these switching costs and its intangible assets.
A UBS analyst stated in April 2026 that the Ontology layer had proven to be Palantir’s AI moat, with a low likelihood of LLM-driven disruption to the platform. “Not a single Palantir customer or partner has cited any real risk from Claude models being used to DIY an equivalent of Palantir,” the analyst noted.
The counterargument deserves airtime. Michael Burry has argued that Palantir’s moat amounts to obstruction of data transfer: the difficulty of leaving is not a technical advantage but a deliberately raised wall. In an era where data portability and open architectures are becoming requirements, a proprietary black-box model may face diminishing returns. That critique isn’t entirely wrong. But it only matters once a viable alternative actually exists, and at the moment there isn’t one at Palantir’s level for classified defense work.

The AIP Stack — top to bottom:
| Layer | Role |
|---|---|
| Apollo | Autonomous software deployment and management — proprietary, 20 years in production |
| Gotham / Foundry | Defense (Gotham) and enterprise (Foundry) data integration platforms |
| Ontology Engine | Converts enterprise data into real-world semantic objects — the core moat |
| AIP Logic / Agent Studio | No-code to pro-code agent builder and workflow orchestration |
| LLM Security Layer | PII masking, content filtering, audit logging, zero data retention guarantee |
| 🔌 LLM API (Replaceable) | Claude · GPT-5 · Gemini · Grok · Llama — unified interface, plug-and-play |
Palantir’s official architecture documentation confirms that AIP enables secure access to the full range of commercial LLMs and open-source models through Palantir-managed infrastructure that ensures no transmitted data is retained by third-party providers. The LLM is a replaceable plug-in at the bottom of the stack. The real value lives in the five layers above it.
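The "replaceable plug-in" claim can be made concrete with a toy sketch. The class and function names below are hypothetical illustrations of the adapter pattern the stack implies, not Palantir's actual API:

```python
from abc import ABC, abstractmethod

class LLMBackend(ABC):
    """Minimal unified interface: any model that implements
    complete() can be swapped in without touching the layers above."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class ClaudeBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        # In production this call would pass through the security
        # layer (PII masking, audit logging, zero data retention).
        return f"[claude] {prompt}"

class GPTBackend(LLMBackend):
    def complete(self, prompt: str) -> str:
        return f"[gpt] {prompt}"

def run_workflow(backend: LLMBackend, brief: str) -> str:
    # Orchestration sees only the interface, never the vendor.
    return backend.complete(f"Summarize: {brief}")

# Swapping models is a one-line change at the bottom of the stack:
print(run_workflow(ClaudeBackend(), "field report"))
print(run_workflow(GPTBackend(), "field report"))
```

The design choice matters for the thesis of this piece: if the orchestration, Ontology, and security layers never touch vendor-specific code, then the cost of replacing any single model is bounded, at least outside of accreditation requirements.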
4. Why Claude Specifically — When GPT Is Also Available?
In classified defense environments, Claude rose to the top for three reasons that go beyond raw benchmark performance.
① Constitutional AI — Anthropic’s Constitutional Classifiers reduced the jailbreak success rate from 86% to 4.4% compared to unguarded models, with no universal jailbreak discovered during public red-teaming exercises. For defense customers who need AI decision-making to be auditable and resistant to manipulation, this matters.
Constitutional AI in plain English: Traditional AI alignment worked by having humans compare outputs and pick the better one — effective but slow and hard to scale. Anthropic’s approach gives the model a written “constitution” of principles and trains it to critique and revise its own responses against those principles. The constitution draws from sources including the UN Declaration of Human Rights, principles from other AI labs, and an effort to incorporate non-Western perspectives. AI supervising AI scales better than human labeling — and produces a model that resists manipulation while remaining genuinely useful.
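The critique-and-revise loop described above can be sketched in a few lines. This is a toy illustration only: `model()` is a placeholder for a real LLM call, and the two-item constitution is invented for the example, not drawn from Anthropic's actual document:

```python
# Illustrative principles; Anthropic's real constitution is far longer.
CONSTITUTION = [
    "Choose the response least likely to assist manipulation.",
    "Choose the response most consistent with human rights principles.",
]

def model(prompt: str) -> str:
    # Placeholder standing in for a real LLM call.
    return f"<response to: {prompt[:40]}...>"

def constitutional_revision(user_prompt: str) -> str:
    """Draft, then critique and revise against each principle."""
    draft = model(user_prompt)
    for principle in CONSTITUTION:
        critique = model(
            f"Critique this response against the principle "
            f"'{principle}':\n{draft}"
        )
        draft = model(
            f"Revise the response to address this critique:\n"
            f"{critique}\n\nOriginal response:\n{draft}"
        )
    # In training, revised drafts become preference data for RLAIF.
    return draft
```

The key property is that the supervision signal is generated by the model itself against written principles, which is what lets the process scale past human labeling throughput.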
② 200K Context Window — Claude 3+ handles 200,000 tokens in a single pass. For intelligence analysts processing lengthy field reports and mission briefs without losing context, this is operationally significant.
③ First-Mover on Classified Networks — Having Palantir as a partner helped Anthropic build direct lines with the DoD and fast-tracked its integration into the highest-level classified projects. The partnerships were crucial in helping Anthropic become the first model company to officially deploy across classified networks, according to a senior research analyst at Georgetown’s Center for Security and Emerging Technology.
“Our partnership with Anthropic and AWS provides U.S. defense and intelligence communities the tool chain they need to harness and deploy AI models securely, bringing the next generation of decision advantage to their most critical missions.” — Shyam Sankar, CTO, Palantir Technologies (November 2024)
5. So Is Palantir a Shell Without Anthropic?
The honest answer: it depends on which business segment you’re examining.
Commercial and International — Palantir Functions Independently
In the U.S. commercial market and internationally, Palantir can run AIP workflows on GPT-5, Gemini, and Llama with no meaningful operational gap. The +109% commercial revenue growth in FY2025 was driven by the Ontology and agent architecture — not by which model happened to be plugged in.
U.S. Classified Defense — Here, the Dependency Is Real
Claude is currently the only LLM that can be used by the Pentagon in classified settings. The $200M DoD contract was structured around that reality. If Anthropic is forced out, Palantir’s classified AI services face an immediate gap — temporary, but not trivial.

Claude Dependency by Palantir Segment — Qualitative Estimate, April 2026:
| Segment | Dependency | Notes |
|---|---|---|
| Classified Defense & Intelligence (IL6) | 🔴 Very High | No viable LLM replacement currently |
| Federal Civilian (FedStart) | 🟡 Moderate | GPT-5 parallel possible but needs re-certification |
| U.S. Commercial Enterprise | 🟢 Low | Multi-LLM strategy already in motion |
| International | 🟢 Very Low | Regional models routinely used |
Palantir’s Independent Strengths vs. Anthropic Exposure:
| Palantir Independent Strengths | Anthropic Dependency Risks |
|---|---|
| 20-year Ontology and data integration IP | No viable IL6-cleared LLM replacement short-term |
| Model-agnostic AIP architecture | AI ethics conflicts can disrupt live operations |
| U.S. commercial revenue +137% in Q4 2025 | $200M DoD contract built on Claude infrastructure |
| 26 partnerships across 15 industries | Anthropic’s ARR growth shifts negotiating leverage |
| Sole IL6-accredited commercial AI platform | Political risk: executive order standoff is live |
6. The View From Anthropic’s Side
This dependency runs both ways. Anthropic needs Palantir just as much.
The Gateway to Classified Markets — U.S. classified AI markets don’t open for anyone who knocks. Palantir’s IL6-accredited infrastructure and its two decades of DoD trust represent a pathway Anthropic could not have built independently in any reasonable timeframe.
The $200M Contract Enabler — Anthropic’s DoD contract was made possible by Claude’s operational track record inside Palantir’s classified deployments. Anthropic became the first model company to officially deploy across classified networks specifically because it was able to work effectively with partners like Palantir, AWS, and Google. Without that proof point, the $200M contract likely doesn’t materialize on that timeline.
ARR Growth Acceleration — Anthropic’s ARR grew from roughly $9B at end-2025 to $30B by early 2026. Government contract expansion, channeled through Palantir’s infrastructure, is a meaningful part of that story.
The relationship is genuinely mutual. Palantir holds the infrastructure and customer relationships; Anthropic holds the reasoning capability and the safety credentials. Each fills a gap the other cannot easily replicate.
7. The 2026 Crisis — Partner and Risk Factor Simultaneously
By early 2026, the partnership had become one of the most politically charged technology relationships in Washington.
The roots of the conflict lie in the changing nature of the software stacks used by the Pentagon. As AI models become more powerful and general purpose, the same underlying models powering consumer chatbots could one day make life-and-death decisions on the battlefield. Claude is one of the few frontier LLMs available for classified government use because it is accessible through Amazon’s Top Secret cloud and through Palantir’s AIP. That is how Claude ended up on the screens of officials monitoring the capture of Venezuelan President Nicolás Maduro.
The Pentagon signed the $200M contract in July 2025 fully aware of Anthropic’s usage restrictions. Claude was deployed at national nuclear laboratories, used for intelligence analysis, and integrated into defense operations — the only frontier AI model operating in the military’s most sensitive environments. The dispute was never about deploying Claude. It was about what Claude was allowed to refuse.
Anthropic maintains two non-negotiable conditions: no mass domestic surveillance of Americans, and no fully autonomous weapons without meaningful human oversight. xAI has reportedly agreed to “all lawful use” at any classification level, while OpenAI and Google have shown flexibility on unclassified work but continue negotiating classified access terms.
Despite the Pentagon’s supply chain risk designation, defense officials continued using Claude during active operations. Palantir CEO Alex Karp confirmed Claude remained integrated into Palantir’s tools while also signaling plans to add other LLMs as the situation evolves. The Department of Defense CTO acknowledged it would take time for the government to transition away from Anthropic’s models.
Two things are simultaneously true about this episode. First, Anthropic’s ethical guardrails created real friction inside a live operational context — putting Palantir in the middle of a dispute it didn’t choose. Second, the fact that military operations continued using Claude despite a presidential directive, and that a federal court had to intervene to prevent Anthropic’s removal, proves how deeply the Palantir-Claude combination is embedded in U.S. defense infrastructure. You can’t easily evict a model running active military operations.
8. Three Scenarios for What Comes Next
| Scenario | Conditions | Impact on Palantir | Probability |
|---|---|---|---|
| ① Deeper Symbiosis | Claude Gov expands, political friction resolves | National defense AI dominance solidified | High |
| ② Controlled Diversification | GPT-5/Grok obtain IL6 clearance alongside Claude | Single-provider risk reduced, leverage maintained | High |
| ③ Forced Separation | Political pressure forces Anthropic exit | Short-term capability gap, 6–12 month transition | Low (court stay, operational dependency) |
Scenarios ① and ② are most likely to unfold simultaneously. An internal Pentagon memo acknowledged that exemptions to the Anthropic ban would be considered for “mission-critical activities” where no viable alternative exists — and that if operations are ongoing six months from now, exceptions will be made to avoid putting those operations at risk.
That qualifier — “no viable alternative” — remains applicable in classified environments for the foreseeable future.
Final head-to-head:
| Comparison | Palantir | Anthropic (Claude) |
|---|---|---|
| Core Asset | Ontology, data integration, agent orchestration | Reasoning capability, Constitutional AI, 200K context |
| Security Clearance | DISA IL6, FedRAMP High, IL5 accredited | Claude Gov model — classified-network custom build |
| Customer Lock-In | Very Strong — $2.5M–$7.5M switching costs | Moderate — API-level swap possible but re-certification required |
| Replaceability | Low — 20-year specialized platform | Moderate — commercial use cases can shift to GPT-5 or Gemini |
| 2025–26 Growth | Revenue +56% FY2025, U.S. commercial +137% in Q4 | ARR tripled from ~$9B to $30B+ in ~3 months |
| Mutual Need | No viable IL6 LLM replacement short-term | Classified market access requires Palantir’s infrastructure channel |
Strip out the classified defense angle, and Palantir runs perfectly well without Anthropic — its commercial growth proves that. But in the narrow, high-stakes domain of U.S. classified defense AI, the two companies are currently inseparable. Palantir holds the infrastructure and the data; Anthropic holds the only model cleared to reason inside it. Calling Palantir a shell without Anthropic misses the architecture entirely. It’s closer to a brain and a body that happen to be in different corporate structures — and right now, neither functions fully in that market without the other.