AI in war: Triumph of technology or failed Blitzkrieg?

The United States will gain the most data from its AI experiment in the war with Iran, and also the main negative outcomes. The first result of these warfare tactics that we can observe is a sharp increase in civilian casualties

The commander of US Central Command (CENTCOM), Brad Cooper, stated on March 11 that the US Armed Forces are using so-called artificial intelligence (AI) in the military operation against Iran. According to him, decisions that once took days are now made in seconds. This war can thus be considered the first large-scale conflict in which AI has become a key element of combat planning and targeting, which makes its intermediate results worth examining.

Computer programs have long surpassed humans at chess, and military theorists use the game as a model for understanding the principles of combat. Let us try to determine whether we are entering an era when neural networks will understand how to wage war better than humans.

This article has to rely on open data, since most information on this issue is classified. But even with these limitations, there are already interesting facts worth examining.

Euphoria of Success

According to the British newspaper Financial Times, during the first four days of the military operation called Epic Fury, the United States carried out 2,000 strikes on targets in Iran. For comparison, the US military carried out roughly the same number of strikes during the first six months of combat operations in Iraq and Syria. On March 13, US President Donald Trump stated that 90% of Iran’s missiles had been destroyed.

According to a number of experts, the military mainly uses digital tools in intelligence. With their help, they analyze satellite images, intercepted communications, and data from surveillance cameras to identify targets at a speed inaccessible to humans.

The newspaper The Washington Post writes that the neural network Claude, created by the company Anthropic, plays a key role in planning strikes on Iran. The model was integrated into the Maven Smart System platform, developed specifically for the needs of the military by the company Palantir. This was stated in an article published on the website of the Middle East desk of the newspaper Stars and Stripes, the official publication of the US Department of Defense. Since this newspaper is intended for US servicemen serving abroad, let us examine this article in more detail.

It states that Maven proposed hundreds of targets, providing their exact coordinates and prioritizing them depending on their importance. The use of this tool is aimed at accelerating the pace of the military campaign and reducing Iran’s potential for retaliation. The neural network also evaluates the results of strikes.

It should be noted that the US Armed Forces use Claude despite a conflict between the Pentagon and Anthropic CEO Dario Amodei, who refused to remove all restrictions on the use of the company’s AI tools.

“Military commanders have become so dependent on the AI system that if Amodei directed the military to cease, the Trump administration would use government powers to retain the technology until it can be replaced,” Stars and Stripes wrote citing a source.

According to Stars and Stripes, as of May 2025, over 20,000 US military personnel were using Maven. An earlier version of the system was used during the withdrawal of US troops from Afghanistan in 2021, and later in supporting Israel after Hamas’s attack on October 7, 2023. The tool supposedly allowed one unit to do the work of 2,000 employees with a team of 20 people. Paul Scharre, executive vice president of the Center for a New American Security, said he was impressed by how rapidly combat operations in Iran unfolded.

“The key paradigm shift is that AI enables the U.S. military to develop targeting packages at machine speed rather than human speed,” he said.

One must give the USA credit: they know how to present themselves and their products attractively. Such reports about the successful use of AI breed euphoria. The results “on the ground,” however, are sobering. As the Russian proverb says, “Everything was smooth on paper, but they forgot about the pitfalls.”

Digital “pitfalls”

Anyone who has used neural networks even a little knows that they make mistakes, sometimes quite serious ones. There is also the phenomenon of AI “hallucinations,” when bots invent information and confidently present it as fact. Why would they stop making mistakes or hallucinating when used in combat?

The same Washington Post, whose competence on the use of neural networks by the US Armed Forces has been acknowledged by the US Department of Defense, published an article titled “Iranian school was on U.S. target list, may have been mistaken as military site.” It describes a missile strike on a girls’ school that killed 175 people, most of them children aged 7 to 12.

The strike occurred in the first hours of the military operation. A preliminary investigation showed that it was carried out by the US military. A source told the newspaper that the building had been on the target list as a weapons depot. It had once stood on the territory of a naval base of the Islamic Revolutionary Guard Corps, but in 2015 it was separated from the base by a fence. Even open Google Earth data from 2017 showed that a playground had appeared nearby. So why did the much-praised ability of AI to analyze huge volumes of data in near real time fail here?

Commenting on reports that a Tomahawk cruise missile struck the school, Trump said Iran could have done it itself. According to the president, the weapon is widespread, since the United States sells it to different countries, so the Iranians could have obtained several missiles. According to official data, however, the Tomahawk has so far been sold only to the United Kingdom and Japan. If Trump was not lying and Iran really did obtain such weapons, the USA should be investigating how that happened.

Another piece of information concerning errors in strikes on Iran relates to false targets. It is true that fake “satellite images” showing traces of strikes on obviously drawn fighter jets and a military transport helicopter spread on social media; these turned out to be generated images, bearing the telltale markers of Google’s Gemini neural network.

But there is also an official video from the Israel Defense Forces claiming that a missile struck an Iranian Mi-17 helicopter standing on a pad. Many users in the comments, and later experts, pointed out suspicious details. For example, when the missile hit the center of the machine, its rotor blades not only remained intact but did not even move. The strike in this case likely hit a painted decoy, executed quite skillfully: it was even heated to mimic the thermal signature of a running engine.

Although Israel is fighting Iran alongside the United States, there are no official statements that the USA is sharing data obtained with these digital tools with its allies; we can only assume it is. Nevertheless, the very fact that the Iranians took the creation of false targets seriously is telling.

This is indirectly confirmed by the Taiwanese television channel SETN. According to its data, China supplied Iran with 900,000 inflatable weapon mock-ups. The channel showed footage of full-scale dummy helicopters, airplanes, and tanks that inflate literally within seconds. From great heights they look very similar to real combat equipment.

Failed blitzkrieg

Here it is worth recalling how badly the United States misjudged both its own capabilities and Iran’s potential. On February 28, Trump stated that the war could be finished in two or three days. The very next day he sharply changed his forecast, naming a period of four to five weeks. On March 13 the US president refused to say when the war might end at all, saying it would end “when I feel it, feel it in my bones.” Was the AI really performing so poorly that Trump decided to rely solely on his gut?

If we set aside the irony about the expressive statements of the US leader, we can already confidently speak about the failure of the blitzkrieg. Yes, it is impossible to determine from open sources how effective the use of AI actually was: neither the United States nor Iran has any incentive to disclose real information. At the same time, Tehran not only held out and continues to respond, quite painfully, but is also dictating its own peace terms.

“In a conversation with the leaders of Russia and Pakistan, I confirmed Iran’s commitment to peace in the region. The only way to end this war unleashed by the Zionist regime and the United States is the recognition of Iran’s legitimate rights, the payment of reparations, and firm international guarantees against future aggression,” Iranian President Masoud Pezeshkian said on March 11.

Data from the South Korean TV channel TV Chosun also points to the failure of the United States’ initial plans: some US Patriot air defense systems were transferred from South Korea to the Middle East. If everything is going according to plan, as Trump assures, why were additional air-defense assets needed against a supposedly “defeated” enemy?

Long-term consequences

It is also important to assess what reliance on bots might lead to in warfare. Cooper stated that final decision-making remains with humans. But can such a decision be considered independent if it is made on the basis of information from a single source, complete with recommendations? If the official AI bot recommends striking a specific target, the commander who approves that advice shifts responsibility away from himself: if the data proves wrong, it will not be his mistake. With a truly independent decision, by contrast, the full weight of the consequences falls on the person.

It turns out there is a risk that command decisions will effectively be made by the machine, while it is declaratively stated that the process is still controlled by humans. Here it is worth recalling a study by the Massachusetts Institute of Technology showing that regular use of neural networks when writing essays leads to a deterioration of cognitive abilities within just a few months. Simply put, US generals may quickly become less capable intellectually, and we remember that their dependence on AI is already being discussed.

Be that as it may, the widespread use of neural networks in planning combat operations represents a new step in military history. It is still difficult to predict what changes this will lead to, but one thing can be said for certain: the technology will inevitably be used in one way or another. The primary pioneers here have been Israel, which actively used AI in operations against Hamas in the Gaza Strip, and the United States.

And the first result of such warfare tactics that we can observe is a sharp increase in civilian casualties.

This is a translation of the article by Vladimir Koldin first published on Rossa Primavera News Agency website.