US Military Leans Into AI for Attack on Iran, But the Tech Doesn’t Lessen the Need for Human Judgment In War

The U.S. military was able “to strike a blistering 1,000 targets in the first 24 hours of its attack on Iran,” according to The Washington Post. The military has used Claude, the AI tool from Anthropic, combined with Palantir’s Maven system, in support of combat operations in Iran and Venezuela.

While Claude is only a few years old, the U.S. military’s ability to use it, or any other AI, did not emerge overnight. The effective use of automated systems depends on extensive infrastructure and skilled personnel. It is only thanks to many decades of investment and experience that the U.S. can use AI in war today.

In my experience as an academic studying strategic technology at Georgia Tech, and previously as an intelligence officer in the U.S. Navy, I have found that automated systems are only as good as the organizations that use them. Some organizations squander the potential of advanced technologies, while others can compensate for technological weaknesses.

Myth and Reality in Military AI

Science fiction tales of military AI are often misleading. Popular ideas of killer robots and drone swarms tend to overstate the autonomy of AI systems and understate the role of human beings. Success, or failure, in war usually depends not on machines but on the people and organizations that use them.

In the real world, military AI refers to a huge collection of different systems and tasks. Two broad categories are automated weapons and decision support systems. Automated weapon systems have some ability to select or engage targets by themselves. These weapons are more often the subject of science fiction and the focus of ethical and policy debates.

Decision support systems, in contrast, are now at the heart of most modern militaries. These are software applications that provide intelligence and planning information to human personnel. Many military applications of AI are for decision support systems rather than weapons. Modern combat organizations rely on countless digital applications for intelligence analysis, campaign planning, battle management, communications, logistics, administration and cybersecurity.

Claude is an example of a decision support system, not a weapon. Claude is embedded in Palantir’s Maven system, used widely by military, intelligence and law enforcement organizations. Maven uses AI algorithms to identify potential targets from satellite and other intelligence data, and Claude helps military planners sort the information and decide on targets and priorities.

The Israeli systems used in the Gaza war and elsewhere, such as Lavender, are also decision support systems. These AI applications provide analytical and planning support, but human beings ultimately make the decisions.

Researcher Craig Jones explains how the U.S. military is using artificial intelligence in its attack on Iran, and some of the issues that arise from its use.

The Long History of Military AI

Weapons with some degree of autonomy have been used in war for more than a century. Nineteenth-century naval mines exploded on contact. German buzz bombs in World War II were gyroscopically guided. Homing torpedoes and heat-seeking missiles alter their trajectory to intercept maneuvering targets. Many air defense systems, such as Israel’s Iron Dome and the U.S. Patriot system, have long offered fully automatic modes.

Drones became prevalent in the wars of the 21st century. Uncrewed systems now perform a variety of “dull, dirty and dangerous” tasks on land, at sea, in the air and in orbit. Remotely piloted vehicles like the U.S. MQ-9 Reaper or Israeli Hermes 900, which can loiter autonomously for many hours, provide a platform for reconnaissance and strikes. Combatants in the war in Ukraine have pioneered the use of first-person view drones as kamikaze munitions. Some drones rely on AI to acquire targets because electronic jamming precludes remote control by human operators.

But systems that automate reconnaissance and strikes are merely the most visible parts of the automation revolution. The ability to see farther and hit faster dramatically increases the information processing burden on military organizations. This is where decision support systems come in. If automated weapons improve the eyes and arms of a military, decision support systems augment the brain.

Cold War era systems anticipated modern decision support systems for battle management. Air defense networks like the United States’ Semi-Automatic Ground Environment, or SAGE, in the 1950s produced important innovations in computer memory and interfaces. In the U.S. war in Vietnam, Operation Igloo White gathered intelligence data into a centralized computer for coordinating U.S. airstrikes on North Vietnamese supply lines. The U.S. Defense Advanced Research Projects Agency’s Strategic Computing program in the 1980s spurred advances in semiconductors and expert systems. Indeed, defense funding originally enabled the rise of AI.

Organizations Enable Automated Warfare

Automated weapons and decision support systems rely on complementary organizational innovation. From the electronic battlefield of Vietnam to the AirLand Battle doctrine of the late Cold War and later concepts of network-centric warfare, the U.S. military has long worked to pair new information technologies with organizational change.

Particularly noteworthy is the emergence of a data-intensive approach to targeting during the U.S. global war on terrorism. AI-enabled decision support systems became invaluable for finding terrorist operatives, planning raids to kill or capture them, and analyzing intelligence collected in the process. Systems like Maven became essential for this style of counterterrorism.

The impressive targeting capability on display in Venezuela and Iran is the fruition of decades of trial and error. The U.S. military has honed complex processes for gathering intelligence from many sources, analyzing target systems, evaluating options for attacking them, coordinating joint operations and assessing bomb damage. The only reason AI can be used throughout the targeting cycle is that countless human personnel everywhere work to keep it running.

AI gives rise to important concerns about automation bias, or the tendency for people to give excessive weight to automated decisions, in military targeting. But these are not new concerns. Igloo White was fooled by Vietnamese decoys. A state-of-the-art U.S. Aegis cruiser shot down an Iranian airliner in 1988. Intelligence mistakes led U.S. stealth bombers to accidentally strike the Chinese embassy in Belgrade, Serbia, in 1999.

Many Iraqi and Afghan civilians died due to mistakes and miscommunication within the U.S. military. Most recently, evidence suggests that a Tomahawk cruise missile struck a building adjacent to an Iranian naval base, killing about 175 people, mostly students. This targeting error could have resulted from a U.S. intelligence failure.


Automated Prediction Needs Human Judgment

The successes and failures of decision support systems in war are due more to organizational factors than technology. AI can help organizations improve their efficiency, but AI can also amplify organizational biases. While it may be tempting to blame Lavender for excessive civilian deaths in the Gaza Strip, organizational priorities and policies likely matter more than automation bias.

As the name implies, decision support systems support human decision-making; AI does not replace people. Human personnel still play important roles in designing, managing, interpreting, validating, evaluating, repairing and protecting their systems and data flows.

In economic terms, AI is a prediction technology, which means it generates new data based on existing data. But prediction is only one part of decision-making. People ultimately make the judgments that matter about what to predict and how to use predictions. People have preferences, values and commitments regarding real-world outcomes, but AI systems do not.

In my view, this means that increasing military use of AI is actually making human judgment more important, not less.


This article is republished from The Conversation under a Creative Commons license. Read the original article.