Top AI Firms Gain Access to Classified Secrets


The federal government is inviting eight of America’s most powerful AI firms into its highest-classified networks—an upgrade meant to outpace China and Russia, but one that also raises hard questions about oversight, security, and who really controls the tools of war.

Quick Take

  • The War Department signed agreements with eight frontier AI companies to deploy capabilities on Impact Level 6 and Impact Level 7 classified networks.
  • The move signals a shift from experimental AI projects to AI as core defense infrastructure across warfighting, intelligence, and enterprise operations.
  • GenAI.mil, launched in December 2025 on an unclassified environment, reportedly reached 1.3 million users in five months—helping justify expansion into classified systems.
  • Officials say vendor diversity and interoperability are priorities, aiming to avoid long-term “vendor lock-in” while still scaling fast.

Classified AI Moves From Pilot Programs to Core Defense Infrastructure

The War Department’s May 4 announcement formalized agreements with SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, Amazon Web Services, and Oracle to deploy advanced AI on classified networks. The stated target environments, IL6 and IL7, sit at the top of the federal cloud security classification ladder, where Secret-level and more sensitive data are handled. Officials framed the expansion as central to building an “AI-first fighting force” capable of faster, better decisions.

The pace matters because the Department is treating AI less like a lab experiment and more like standard equipment. War Department leadership has connected the agreements to three mission areas: warfighting, intelligence, and enterprise operations. In practice, that can mean tools that summarize reporting faster, fuse sensor data into clearer situational pictures, and reduce planning timelines. The Department has not publicly provided a detailed rollout schedule, leaving timelines and milestones partly opaque.

GenAI.mil Adoption Helps Explain the Rush Into IL6 and IL7

The clearest public metric behind the push is GenAI.mil, a secure but unclassified platform launched in December 2025. The War Department has said the system quickly scaled to 1.3 million personnel, generating tens of millions of prompts and hundreds of thousands of agents in roughly five months. That kind of adoption can be read as both a success story and a warning: once a tool becomes routine, pressure builds to bring it into more sensitive environments.

From a conservative, limited-government perspective, rapid adoption inside the federal workforce cuts two ways. On one hand, it can improve readiness and reduce bureaucracy by giving warfighters and analysts better tools. On the other, speed can outstrip governance, especially when new systems are embedded before Congress and the public can evaluate costs, performance, and failure modes. The War Department has emphasized maintaining “control over data,” but details on auditing and accountability remain limited in public releases.

Anthropic’s Reported Exclusion Highlights the Ethical and Policy Fault Lines

One of the most politically charged details in public reporting is the exclusion of Anthropic, attributed to the company’s refusal to accept Pentagon demands related to surveillance and lethal autonomous systems. If accurate, the episode underscores that companies may be asked to accept military operational parameters as the price of admission. For Americans already skeptical that “elites” steer policy behind closed doors, it reinforces a familiar concern: major national-security decisions can be shaped by a small circle of contractors and officials.

At the same time, the available public information is incomplete. The War Department has not published a full, detailed explanation of why any firm was excluded, nor a comprehensive policy framework describing what is permitted or prohibited for AI use in lethal operations across these agreements. That limitation matters because the public debate often collapses into slogans—“AI will save lives” versus “AI will automate killing”—when the real question is governance: human control, audit logs, rules of engagement, and consequences for mistakes.

What to Watch: Oversight, Spending, and Human Accountability

Republican control of Congress in 2026 gives the governing party the ability—and responsibility—to demand measurable standards: secure deployment checklists, red-team testing, procurement transparency, and reporting that distinguishes hype from capability. The War Department says the goal is decision superiority, but taxpayers should still ask basic questions. What tasks will AI be authorized to perform on IL7? How will model updates be vetted? Who is accountable if a system’s recommendation contributes to an irreversible mistake?

For voters across the spectrum who believe government is failing the people, this story lands in a familiar place: Washington is moving fast, spending big, and partnering with powerful private actors, while the public gets only partial visibility. The agreements could strengthen national defense in a dangerous world. They could also deepen distrust if policymakers cannot clearly explain guardrails, prove security, and show that human judgment—not automated outputs—remains ultimately responsible for decisions made in America’s name.

Sources:

War Department signs agreements to deploy AI on classified networks

Classified Networks AI Agreements

U.S. War Department Announces AI Agreements with Eight Leading Technology Companies for Classified Network Deployment

War Department AI Deals: Oracle, OpenAI, Google, SpaceX

Pentagon clears 7 tech firms to deploy their AI on its classified networks