Early last week, the Pentagon hailed Anthropic's Claude as best-in-class for military intelligence and granted it access to classified networks. But on Friday, President Trump ordered all federal agencies to sever ties with Anthropic after it refused to lift safeguards against mass domestic surveillance and autonomous weapons, labeling the company a "supply chain risk" akin to Huawei.
Hours later, OpenAI inked a rushed deal to place its models in classified systems, claiming the same red lines, backed by law and technical safeguards. The dispute escalated when reports emerged that the US military had deployed Claude in weekend US-Israel strikes on Iran, hours after the ban, exposing how deeply AI is already embedded in military operations. Consumer backlash surged: Claude topped Apple's App Store while a "Cancel ChatGPT" campaign exploded on X and Reddit, and Sam Altman conceded the optics were poor but defended the deal as consistent with OpenAI's safety commitments.
Anthropic plans to fight the blacklist in court. The dispute shows governments grappling with AI's rapid evolution: the Pentagon's January 2026 AI Strategy pushes rapid deployment across all classification levels via projects like GenAI.mil, but it lacks granular vendor policies, forcing ad hoc decisions that trade safeguards against flexibility. Trump's move prioritizes unrestricted "lawful" military use over Anthropic's ethical stance, and critics call it reactive favoritism toward compliant firms like OpenAI.
While the US boasts high-level directives like Executive Order 14179 on AI dominance, no clear-cut policy mandates consistent ethical guardrails or vendor vetting for government AI use, leaving room for flashpoints like this one. The Iran strikes underscore the enforcement gap: in wartime, principle gives way to pragmatism.
AI IN WARFARE: ETHICS BOW TO EMERGENCY, BUT AT WHAT COST?
