What will we do when AI goes to war?
After all, a future of “killer robots” isn’t far off. We already have unmanned aircraft: the U.S. pioneered drone warfare in the Middle East in the post-9/11 era, and subsequent conflicts, including Russia’s invasion of Ukraine, have led other countries and combatants to get in on the action. The technology needed to make drones, drone swarms, and other weapons operate autonomously is in active development or, more likely, already exists. The question isn’t whether we’ll soon be able to bomb by algorithm, but whether we’ll judge it good and right.
That’s the future Lt. Gen. Richard G. Moore Jr., deputy chief of staff for plans and programs of the U.S. Air Force, was considering when he made widely reported comments about ethics in AI warfare at a Hudson Institute event last week. While America’s adversaries may use AI unethically, Moore said, the United States would be constrained by a foundation of faith.
“Regardless of what your beliefs are, our society is a Judeo-Christian society, and we have a moral compass. Not everybody does,” he argued, “and there are those that are willing to go for the ends regardless of what means have to be employed.”