What Does “Human Control” Over Autonomous Systems Mean?

The challenge of human control over autonomous weapons


This article is part of our Autonomous Weapons Challenges series. The IEEE Standards Association is looking for your feedback on this topic, and invites you to answer these questions.

Two Boeing 737 Max planes crashed, in 2018 and 2019, after faulty sensor data triggered an automated flight-control system that the pilots were unable to override. Also in 2018, an Uber autonomous vehicle struck and killed a pedestrian in Arizona, even though a person in the car was supposed to be overseeing the system. These examples highlight many of the issues that arise when considering what “human control” over an autonomous system really means.

The development of these autonomous technologies occurred within enormously complex bureaucratic frameworks. A huge number of people were involved—in engineering a number of autonomous capabilities to function within a single system, in determining how the systems would respond to unknown or emergency situations, and in training the people who would oversee them. A lapse at any of these steps could, and did, lead to a catastrophic outcome that the people overseeing the system were unable to prevent.


These examples underscore the basic human psychology that developers need to understand in order to design and test autonomous systems. Humans are prone to over-trusting machines, and they grow more complacent the longer they use a system without anything going wrong. Humans are also notoriously bad at maintaining the focus needed to catch a rare error in such situations, typically losing vigilance after about 20 minutes. And the human response to an emergency can be unpredictable.
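
To make the oversight problem concrete, here is a toy model of that vigilance decay. The exponential shape and every parameter are illustrative assumptions, not empirical values; only the roughly 20-minute figure echoes the point above:

```python
# Toy model of the "vigilance decrement": a monitor's chance of catching
# a rare fault decays the longer the watch goes on. The decay shape and
# all parameters are illustrative assumptions, not empirical values.

def detection_probability(minutes_on_watch: float,
                          initial_p: float = 0.95,
                          half_life_min: float = 20.0) -> float:
    """Detection probability after a given time on watch, assuming an
    exponential decay with the stated (hypothetical) half-life."""
    return initial_p * 0.5 ** (minutes_on_watch / half_life_min)

for t in (0, 10, 20, 40, 60):
    print(f"{t:>2} min on watch -> P(catch the fault) ~ {detection_probability(t):.2f}")
```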

Ultimately, “human control” is hard to define and has become a controversial issue in discussions about autonomous weapons systems, with many similar phrases used in international debates, including “meaningful human control,” “human responsibility,” and “appropriate human judgment.” But regardless of the phrase that’s used, the problem remains the same: Simply assigning a human the task of overseeing an AWS may not prevent the system from doing something it shouldn’t, and it’s not clear who would be at fault.

Responsibility and Accountability

Autonomous weapons systems can process data at speeds that far exceed a human’s cognitive capabilities, which means any human in the loop will need to know when to trust the system’s output and when to question it.

In the examples above, people were directly overseeing a single commercial system. In the very near future, a single soldier might be expected to monitor an entire swarm of hundreds of weaponized drones; militaries are already testing such arrangements. Each drone may be detecting and processing data in real time. If a human can’t keep up with a single autonomous system, they certainly won’t be able to keep up with the data coming in from a swarm. Additional autonomous systems may thus be added to filter and package the data, introducing even more potential points of failure. Among other issues, this raises legal concerns, given that responsibility and accountability could quickly become unclear if the system behaves unexpectedly only after it’s been deployed.
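
A rough back-of-envelope sketch shows the scale of that mismatch. Every figure below (swarm size, detection rate, review time) is a hypothetical assumption chosen purely for illustration:

```python
# Back-of-envelope comparison of a swarm's data rate with one operator's
# review capacity. All figures are hypothetical assumptions.

SWARM_SIZE = 100                 # drones in the swarm
DETECTIONS_PER_DRONE_SEC = 0.5   # candidate detections each drone raises per second
SECONDS_PER_HUMAN_REVIEW = 5.0   # time the operator needs to assess one detection

incoming_per_sec = SWARM_SIZE * DETECTIONS_PER_DRONE_SEC  # 50 detections/s
review_rate_per_sec = 1.0 / SECONDS_PER_HUMAN_REVIEW      # 0.2 reviews/s

print(f"Detections arriving: {incoming_per_sec:.1f} per second")
print(f"Operator can review: {review_rate_per_sec:.1f} per second")
print(f"Backlog grows by {incoming_per_sec - review_rate_per_sec:.1f} items per second")
# Under these assumptions the operator falls about 250x short of the
# incoming rate, which is why autonomous filtering layers get added,
# each one a new potential point of failure.
```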

Human-Machine Teams

Artificial intelligence often relies on machine learning, which can turn AI-based systems into black boxes, with the AI taking unexpected actions and leaving its designers and users uncertain as to why it did what it did. It remains unclear how humans working with AWS will respond to their machine partners, or what type of training will be necessary to ensure that the human understands the capabilities and limitations of the system. Human-machine teaming thus presents challenges both in training people to use the system and in developing a better understanding of the trust dynamic between humans and AWS. And while the human-robot handoff may be a technical challenge in many fields, it quickly becomes a question of international humanitarian law if the handoff doesn’t go smoothly for a weapons system.
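
One common way to structure such a handoff is a confidence gate: the machine acts only when its self-reported confidence clears a threshold and defers everything else to its human partner. The sketch below is a minimal illustration under assumed parameters, not a description of any fielded system:

```python
# Minimal sketch of a confidence-gated human-machine handoff. The
# Assessment class, threshold, and labels are hypothetical, chosen
# for illustration only.

from dataclasses import dataclass

@dataclass
class Assessment:
    label: str         # e.g., "vehicle", "unknown"
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

CONFIDENCE_THRESHOLD = 0.90  # assumed policy setting

def route(assessment: Assessment) -> str:
    """Decide who handles a detection under the sketched policy."""
    if assessment.confidence >= CONFIDENCE_THRESHOLD:
        return f"machine proceeds: {assessment.label}"
    # Below the threshold, the machine hands off. Note the catch: a
    # miscalibrated model can be confidently wrong, so a threshold alone
    # does not guarantee meaningful human control.
    return f"hand off to human: {assessment.label} (confidence {assessment.confidence:.2f})"

print(route(Assessment("vehicle", 0.97)))
print(route(Assessment("unknown", 0.41)))
```

Even in this sketch, the hard problems sit outside the code: whether the confidence score can be trusted, and whether the human receiving the handoff has the time, context, and training to act on it.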

Ensuring responsibility and accountability for AWS is a general point of agreement among those involved in the international debate. But without sufficient understanding of human psychology or how human-machine teams should work, is it reasonable to expect the human to be responsible and accountable for any unintended consequences of the system’s deployment?

What Do You Think?

We want your feedback! To help bring clarity to these AWS discussions, the IEEE Standards Association convened an expert group in 2020 to consider the ethical and technical challenges of translating AWS principles into practice, and what that might mean for future development and governance. Last year, the expert group published its findings in a report entitled “Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.” Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We’ve put together a series of questions on the Challenges document that we hope you’ll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other technical realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022.
