AI may pull the trigger in war, but it shouldn’t call the shots

Without proper guardrails, autonomous weapons could be as dangerous as nuclear weapons. That’s why the United States needs to lead the way on this technology.

An aerial image taken by a Ukrainian drone in 2022. (Associated Press)

When a car sped toward our column of Marines advancing northward into Iraq in 2003, the Marines ahead of me did what they were trained to do: They took it out before it could reach us. Terrorists and militants routinely drove car or truck bombs into American troops.

I wasn’t close enough to know the details of the encounter, but I’ll never forget the face of the young boy who lay amid the wreckage when we passed.

Artificial intelligence might have saved the day. Using advanced pattern recognition, an AI-enabled weapon with billions of data points to reference — the weight distribution of the car, its tire pressure, even facial expressions of the passengers — might have been able to differentiate between a car weighed down by a fleeing family and a car weighed down by explosives.

But an AI-enabled weapon could just as easily have been programmed to take out any vehicle in its path, regardless of who or what was inside.

This is why many of my colleagues on the House Armed Services Committee and I are pushing the Department of Defense to move faster on AI. It can help commanders make better-informed decisions, reduce danger both to our troops and to civilians, and ultimately make our forces more effective on the battlefield. That would be AI at its best.

But AI at its worst could usher in an era of warfare more dangerous than anything we’ve seen before — more dangerous not just for America but for humanity itself.

History has shown us that the first adopters of a new technology get to set the norms for its use. Today our adversaries are adopting AI faster than we are, and with less concern for how it is used. If America falls short in this new AI arms race, someone else will set the moral guardrails. And once that happens, it will be very difficult to pull back.

The rise of AI in warfare

Everyone is talking about AI these days, and for good reason. While primitive AI has played a quiet role in our daily lives for years, sophisticated new programs like ChatGPT have propelled it into popular culture. My colleagues in Congress are using it to write floor speeches and op-eds while journalists are pushing chatbots to their limits. These technologies will transform daily life sooner than we think, accelerating scientific research and improving medicine, education, economic productivity, public safety, and beyond.

There is a potentially sinister side to the rise of AI as well. The former CEO of Google expects social media to get much more toxic as a result of AI, and the man known as the “Godfather of AI,” Geoffrey Hinton, just left Google with a stark warning to the world: Because of AI, the average person will “not be able to know what is true anymore.”

But perhaps the most direct threat to human life, AI-enabled warfare, has barely been part of the discussion. Think killer robots. Autonomous weapons, which can identify, select, and apply lethal force to a target, already exist, but as a matter of policy the United States requires that a human operator remain in the decision cycle.

One example of an AI-enabled weapon is the Patriot antimissile system, currently saving countless Ukrainian civilians from showers of Russian missiles. Patriots select their targets autonomously, but a human is required to hit a button to confirm the launch of the missile.

Another example, loitering munitions, can linger in the air and track potential targets before engaging. Although they were initially developed four decades ago to take out antiaircraft missile systems, today they increasingly integrate autonomous capabilities, meaning they could soon be deployed without any human control.

In the not-so-distant future, we will see thousands of armed drones swarming together, independently formulating attack plans using AI. The Navy has been investing in this technology and is developing ambitious plans for swarms in its own “Super Swarm” mission. By design, these will react to whatever or whomever they identify as an adversary and determine how to neutralize them — entirely on their own.

That means algorithms would decide who lives and who dies. Vladimir Putin said in 2017 that “the leader of AI will rule the world.” He is probably right.

Like autocrats, AI has no moral compass

Throughout human history, warfare has required an elaborate chain of human decisions. In Iraq, I made decisions every day, often over matters of life and death. Whatever happened, I bore the moral and legal responsibility for my actions. When extremely bad decisions are made in war, we have a justice system to handle them.

But AI has no moral compass. It cannot weigh the ethical costs and benefits of an action.

If a robot makes a life-or-death decision, who is deemed responsible? The manufacturer of the weapon? The programmer who wrote the code? Or the commander who ordered its deployment? Those who would use these tools for evil could use this ambiguity to engage in egregious aggression without fear of direct repercussions.

Compounding the danger is the fact that our adversaries not only understand the power of AI, they readily disregard moral guardrails when they deploy it. The Chinese Communist Party is already using AI to build a surveillance state and enable genocide of the Uyghurs, an ethnic minority in northwestern China. Given the party’s use of AI against China’s own civilian population, it’s hard to imagine that they would care much about collateral damage in war waged by soulless robots.

AI could be as bad as nuclear weapons

The lack of moral guardrails and accountability surrounding autonomous weaponry is precisely why it could end up being the most dangerous weapon we have ever seen.

Consider the way our government deals with nuclear weapons. For decades, we have designed both our own launch systems and our treaty relationships with all kinds of fail-safes, from the twin keys necessary to authorize a nuclear launch on a submarine to arms control treaties structured to ensure it would be suicidal to initiate a nuclear conflict.

That’s because following World War II, the international scientific and political communities immediately recognized the danger the new atomic bomb posed to humanity, with nuclear scientists themselves leading the discussion. The sheer horror of nuclear war drove the world to design and accept the treaty and regulatory frameworks that, while not perfect, have constrained proliferation and prevented nuclear conflict ever since.

But AI-powered weapons could pose a similar level of danger. For example, a genocidal dictator could program an autonomous weapon to target all civilians matching a certain physical profile. Terrorists or other non-state groups could acquire and use autonomous weapons to wreak havoc on cities. And we must also consider the risk that with its increasing autonomy, the AI powering these weapons might make decisions that its human designers did not intend — with virtually limitless consequences.

Because we have not fully grappled with AI’s destructive potential, there are few taboos around using autonomous weapons and few policies governing their development. These risks make it even more urgent that the world’s leading military powers work together to limit their use.

The path forward

As America’s defense community develops AI-enabled weapons, we need to ensure that the technology adheres to our core values, such as avoiding civilian casualties and collateral damage, and that humans have ultimate control over what actions the weapons take. Doing this requires a lot of work and ultimately makes the weapons themselves less lethal. America would do this work whether or not our adversaries did, because it’s the right thing to do. But if we make this tradeoff and our adversaries do not — if their weapons have no constraints at all — we will be at a permanent disadvantage.

That’s why American leadership matters.

Leading on AI issues doesn’t just mean having better algorithms or more computing power; it means setting an ethical example that others will want to follow. Just as American nuclear physicists led the charge to limit nuclear weapons after World War II, when we were the only ones with the bomb, America must lead the way in determining both the technical and moral future of AI-enabled warfare.

What we need is a Geneva Convention for AI

Critics will say that our adversaries will simply ignore a new set of AI norms, just as Russia is ignoring parts of the Geneva Conventions every day in its criminal war in Ukraine. But that’s too cynical a view. Although neither perfect nor perfectly followed, the Geneva Conventions have helped limit the worst of human behavior in warfare.

I remembered them often as a Marine in Iraq, and our adversaries feel the international pressure to follow them as well, which is why, for example, Putin was so insistent on denying the massacre of civilians in Bucha at the beginning of the Ukraine conflict.

There’s recent precedent for regulating new and potentially dangerous technologies. In the 1980s and early 1990s, the invention of blinding lasers could have changed the course of modern warfare. Imagine the devastation that could be caused by weaponized lasers capable of burning out a human retina from a mile away.

So the international community did something before we had the chance to find out: The Protocol on Blinding Laser Weapons was adopted at the United Nations in 1995 and came into force in 1998. Today, 109 nations have formally agreed to the protocol, including China and Russia.

Although the task of regulating AI might seem intimidating, we should start with core principles. The most vital one is that a human being should always be involved in the decision to use lethal force.

This is a deceptively difficult requirement, however, given how hard human involvement will be to define and achieve. It doesn’t always mean that a human has to pull the trigger, but a human should know exactly where, how, and against whom that force will be used.

Such an agreement would require a new verification organization, analogous to the International Atomic Energy Agency, that can inspect everyone’s weaponry and ensure it requires a human decision maker.

Achieving this kind of agreement among world powers, including Russia and China, at a time of mistrust and profound strategic divergence will be incredibly challenging. But if we could achieve pathbreaking agreements on nonproliferation and norms of use at the height of the Cold War, there’s no reason to think we couldn’t do so now.

And the alternative, not even trying, is worse.

Seth Moulton, a former Marine Corps officer, represents Massachusetts’ 6th Congressional District.