031926 Tell Congress: Investigate Use Of Military AI
Did artificial intelligence errors contribute to the killing of 175 people in an Iranian school? We deserve to know, and to make sure it never happens again.
The Department of Defense is spending tens of billions of dollars to deploy artificial intelligence in combat with minimal transparency, weakened safeguards, and almost no independent oversight.
This unchecked military AI use is putting soldiers, civilians, and civil liberties at grave risk. Congress must step in now.
Here's what's at stake and why it demands your voice:
AI errors in warfare aren't glitches — they can cascade into system failures that misidentify civilians as targets while missing genuine threats. These failures can occur even when humans are nominally "in the loop," because commanders may become conditioned to defer to algorithmic recommendations without independent verification.
The Pentagon has sharply curtailed its own efforts to test and evaluate major weapons systems and assess the risks of civilian harm — making it harder to ensure that AI-augmented systems will work as promised or avoid causing excessive collateral damage.
A small handful of tech companies — most notably Palantir Technologies and Anduril Industries — have grown their share of defense contracts at extraordinary rates, and their executives are now shaping the very acquisition and oversight policies that govern the technology they profit from.
The proprietary nature of these systems raises urgent questions about whether the Pentagon itself has access to the data needed to conduct meaningful due diligence — or whether Congress and the public have any real window into how these systems are performing.
Real-world deployments have already gone wrong: AI-assisted targeting in conflict zones has been linked to civilian deaths, and such systems are widely suspected to have been a factor in a missile strike on a school in Minab, Iran that killed 175 people. Autonomous drones sent to Ukraine proved error-prone and easily defeated by basic jamming. The U.S. military is nevertheless moving toward embedding similar tools into its core combat functions.
The Brennan Center warns that "the accelerating use of AI in warfighting has not been met with commensurate urgency to reckon with its dangers." Rules introduced under the Biden administration to manage AI risk — already inadequate — may be further weakened under the current administration, just as defense spending surges past $1 trillion annually.
Speed and innovation are not incompatible with accountability. But right now, the people building these weapons are writing the rules for them — and the public is being kept in the dark.
In solidarity,
Action Collective