As the U.S.-Israeli war on Iran continues, we look at how the Pentagon is using artificial intelligence in its operations. The system, known as Project Maven, relies on technology by Palantir and also incorporates the AI model Claude built by Anthropic. Israel has used similar AI targeting programs in Iran, as well as in Gaza and Lebanon.
Craig Jones, an expert on modern warfare, says AI technology is helping militaries speed up the “kill chain,” the process of identifying, approving and striking targets. “You’re reducing a massive human workload of tens of thousands of hours into seconds and minutes. You’re reducing workflows, and you’re automating human-made targeting decisions in ways which open up all kinds of problematic legal, ethical and political questions,” says Jones.
For anyone confused about this: the goal here is not to have a working system but to have an excuse.
Before this, if a bomb was dropped on an embassy or a school, there had to be an internal investigation and someone had to take the blame. Someone was responsible. Not that anyone was actually convicted or anything, but someone’s career might stumble; they might miss a promotion or something.
With this system, you simply say “the AI failed.” You can drop bombs left and right and no one is ever accountable. You can’t punish AI.
It’s the same as when they classified every fighting-age male as a militant. Suddenly there were a lot fewer civilian deaths, because no one was counted as a civilian. Now, instead of looking for “fighting-age males,” they can drop a bomb absolutely anywhere and say “the AI marked it as a valid target.”
Honestly, it feels like the real reason AI has been pushed so hard across the board is accountability shifting. From healthcare denials to literal war crimes, it doesn’t matter: an “AI” “did it,” so suddenly no one is to blame.
Of course, in a functioning society we would just say that the people who made the decision to blindly trust the black box are the perpetrators, but we don’t, so…