Ukrainian Robot Soldiers Enter the Fray: What Does the Future of War Look Like?
Nils Adler
Ukraine is deploying armed, AI-powered ground robots on the battlefield, transforming combat tactics and raising urgent questions about the role of humans and ethics in armed conflict. A video from Ukrainian defense firm DevDroid showed Russian soldiers surrendering to a machine gun-mounted robot. President Zelenskyy announced that an enemy position had been fully seized by unmanned platforms for the first time.
A video released in January by Ukrainian defense firm DevDroid showed three Russian soldiers in white camouflage raising their hands in surrender to a machine gun mounted on a ground robot. According to news reports, it was the first recorded instance of Russian troops being captured by a Ukrainian AI-operated robot.
By April, Ukrainian President Volodymyr Zelenskyy declared that, for the first time in the war's history, an enemy position had been fully seized by unmanned platforms, including ground systems and drones. He wrote on X: “Ground robotic systems have carried out over 22,000 missions on the front line in just three months.”
Analysts say the event marks an inevitable step in the evolution of warfare, with implications reaching beyond Ukraine's borders as the world confronts the ethics of controlling this technology.
UAVs, naval robots, and robot dogs
For decades, militaries used ground robots primarily for bomb disposal and reconnaissance. In Ukraine, however, their role has expanded rapidly. Some brigades report that up to 70% of frontline supplies are transported by robotic systems instead of soldiers. These machines carry ammunition, food, and medical supplies, and evacuate the wounded from dangerous positions.
This shift is part of a larger transformation in warfare that began with the debate over unmanned aerial vehicles (UAVs) in the early 2000s. In 2002, the U.S. used an MQ-1 Predator drone for one of the first targeted airstrikes in Afghanistan. As AI advances, the debate has moved to systems capable of autonomously identifying targets, prioritizing attacks, and making battlefield decisions.
Experts say autonomy must remain the central issue. Toby Walsh, an AI specialist at the University of New South Wales, describes AI-driven military operations as “the third revolution in warfare.” This revolution is also spreading to naval robots and robot dogs being tested for surveillance and reconnaissance missions.
The human role
The emergence of fully autonomous drones—so-called “killer robots”—sparked fierce debate after a UN report suggested that a Turkish-made Kargu-2 drone autonomously attacked soldiers in Libya in 2020. The incident fueled ethical discussions about a machine making life-and-death decisions.
However, Anna Nadibaidze, a researcher at the University of Southern Denmark, argues that more attention should be paid to regulations on semi-autonomous weapons, where “humans are still in the loop.” She worries whether there is enough “time and space” for human judgment in a war context.
In Ukraine, human operators still control the robots. But in Israel's offensive in Gaza, AI's involvement in targeting decisions has been linked to “great civilian harm,” challenging international humanitarian law and the principle of proportionality. A Stockholm International Peace Research Institute report warns that the fragmented AI supply chain, heavily reliant on civilian technology, complicates efforts to control military AI use.
In the middle of last year, the U.S. Department of Defense awarded OpenAI a $200 million contract to integrate generative AI into the military. Walsh warns: “If we're not careful, war will become more horrific, faster, and humans will not be able to participate because they lack the speed, accuracy, and reaction time.”
Ukraine as a testing ground
Experts stress that technology and AI are not inherently harmful—it is how they are used that matters. In Ukraine, robots also rescue civilians and support logistics across treacherous minefields. But the front line has become a testing ground, compelling the international community to consider how this technology should be regulated in future conflicts.
Despite past failures, there is growing recognition that these issues must be addressed. The UN Institute for Disarmament Research (UNIDIR) is scheduled to meet in June to examine AI's impact on international peace and security. As with chemical weapons, international agreements, however imperfect, may eventually be established. “Many countries in the Global South want regulation, so regional initiatives could take shape,” Nadibaidze says, adding that even without the major powers' participation, such initiatives can help shape emerging norms.