UN Secretary-General António Guterres warned this week about the consequences of using artificial intelligence in warfare, saying, "We will not move as slowly in the future as we do now," and stressing the urgent need to regulate AI. The speed of technological development amid geopolitical upheaval is blurring the line between theoretical concepts and real-world events. The political debate in the United States over the capabilities of the military's AI coincided with its unprecedented use in the Iranian crisis.

The AI company Anthropic stated that it would not remove safeguards preventing the US Department of Defense (the Pentagon) from using its technology for mass surveillance or autonomous lethal weapons. The Pentagon said it has no interest in such uses but emphasized that such decisions should not be left to companies. Strikingly, the administration did not stop at severing ties with Anthropic; it also blacklisted the company, deeming it a threat to the supply chain. The AI research and development company OpenAI intervened, reaffirming its commitment to the red lines Anthropic had set, though its response was confined to addressing internal backlash from users and employees. CEO Sam Altman acknowledged that the company does not control the Pentagon's use of its products and that the way the agreement with the Department of Defense was handled made OpenAI appear "opportunistic and in need of more sophistication."

Nicole van Royen, executive director of the Stop Killer Robots campaign, which advocates for human control over the use of force, warned: "The issue is not whether these weapons will be used, but how they are already changing existing war-making systems. Human control becomes secondary or a mere formality." This radical shift is already underway. Despite the controversy, some have suggested that Anthropic may have facilitated the widespread and escalating attacks in this war, which have cost many lives.
Experts told The Guardian this week that "the age we live in is the age of bombing faster than the speed of thought, where AI identifies and prioritizes targets, suggests weapons, and assesses the legal basis for a strike."

AI is not primarily responsible for civilian casualties, military errors, or impunity. Ultimately, humans are responsible, because AI cannot operate on its own without the humans who start wars without considering the potential victims. Yet even setting aside the problems of inaccuracy and bias in AI, its impact is clear to those who use it. One military officer said, "There are many targets, and each one only takes a few seconds to be bombed." He added that he felt useless because AI was handling the entire process, which literally facilitated mass killing, creating greater distance from moral and emotional judgment and diminishing accountability.

Democratic oversight and multilateral constraints are crucial; these decisions cannot be left in the hands of weapons manufacturers and defense ministries. While the war rages and bombing intensifies between the parties to the ongoing conflict in Iran, nations have gathered in Geneva to discuss lethal autonomous weapons systems. The draft text under discussion could form a solid foundation for a treaty that humanity needs more than ever. Most governments want clear rules on the military use of AI, but the biggest perpetrators of war are resisting, even though they sit at the center of the decision-making circle.

Unfortunately, the intensity of AI-driven wars may be interpreted by those waging them as they see fit: some may read any easing of the war and its bombing as ceding ground to the enemy. But as technology workers and military officials themselves understand, the risks of uncontrolled escalation are far greater.
Guterres' AI War Warning and US Debates
UN Secretary-General António Guterres called for the regulation of AI in warfare. A debate rages in the US between AI companies and the Pentagon over control of technologies that are changing the nature of war by increasing its speed and reducing human control.