
Really autonomous – or perhaps just automatic?

We are autonomous if we are independent and self-reliant. If we talk about a robotic or unmanned autonomous system, does this mean that the robot is likewise independent and self-reliant? Determining the extent to which a system can actually act without human influence is essential to many questions in the field of autonomy.

Pascal Vörös and Elianne Egli, armasuisse Science and Technology

Excerpt of the video on slaughterbots. In the foreground, a speaker can be seen on the stage, while in the background a nano drone is shown.
The slaughterbot video was published on YouTube on 12 November 2017 by the Future of Life Institute and Stuart Russell, a professor of computer science at Berkeley, and has reached over three million views to date.
© IEEE Spectrum

The debate on unmanned, autonomous systems culminates in the questions surrounding «Lethal Autonomous Weapons Systems» (LAWS) and has been ongoing at the international level for many years.

In May of this year, a veritable media wave broke out around the deployment of the Turkish attack drone Kargu-2 in the war zone in Libya. A UN report suggests that a drone may have autonomously attacked a human for the first time. Numerous media outlets picked up the story and asked whether we have arrived in the era of so-called «killer robots». However, the Turkish manufacturer STM subsequently denied that the drone can execute completely autonomous attacks.

As part of the technology centre of the DDPS, armasuisse Science and Technology, the Swiss Drone and Robotics Centre (SDRC) deals intensively with current and future applications of unmanned, mobile systems. In this armasuisse Insights article, we would like to give you a brief insight into the complexity of the topic «Autonomy in robotics».

How autonomous is a robot? – 1. Looking outwards 

One of the most frequently referenced classification systems in the autonomy literature for unmanned systems is the ALFUS (Autonomy Levels for Unmanned Systems) Framework. It has existed since 2004 and has been under continuous development since then. Three core elements are key for assessing autonomy: the complexity of the environment, the complexity of the mission and the dependency on human operators.

The graphic shows how the autonomous capability of unmanned systems is captured in three dimensions in the model: 1) independence from humans: to what extent can the system act with minimal or no human intervention; 2) complexity of the mission: how complex are the missions which the unmanned system can perform; and 3) complexity of the environment: how difficult is the mission environment.

Today's lawn mower and vacuum robots are, for example, relatively independent of humans (at least once they have been installed and calibrated). They exercise a few functions in a relatively static, simple and flat environment. The comparatively simple command structures require only a few parameters to be set. Overall, such robots therefore have a low level of autonomy (green curve).

If we take, for example, the Swiss walking robot ANYmal on an inspection mission, it is also relatively independent of humans. However, it is additionally capable of acting in the complex environments of industrial plants and rough terrain, of noticing details in its surroundings and of avoiding obstacles. Its sensors offer visual, thermal and acoustic insights for status monitoring and are used to scan the environment for precise positioning of the robot. Algorithms based on artificial intelligence analyse the environment and recognise anomalies, which are then reported to the technicians, who can initiate further actions. ANYmal therefore has a much higher level of autonomy (orange curve). We will return to the last, red curve further below.
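The three ALFUS dimensions can be sketched as a simple data structure. Note that the numeric 0–10 scales and the example scores below are our own illustrative assumptions for comparing the two robots described above, not values prescribed by the ALFUS Framework itself:

```python
from dataclasses import dataclass

@dataclass
class AlfusAssessment:
    """Illustrative three-axis ALFUS-style assessment (scales assumed)."""
    human_independence: int      # 0 = fully operator-driven, 10 = no intervention needed
    mission_complexity: int      # 0 = single simple task, 10 = highly complex missions
    environment_complexity: int  # 0 = static, flat surroundings, 10 = dynamic, rough terrain

# A lawn mower robot: quite independent, but simple mission and environment
lawnmower_robot = AlfusAssessment(human_independence=7,
                                  mission_complexity=2,
                                  environment_complexity=1)

# ANYmal on an inspection mission: similarly independent, but able to
# handle far more complex missions and environments
anymal = AlfusAssessment(human_independence=7,
                         mission_complexity=6,
                         environment_complexity=7)
```

The point of the comparison is that "autonomy" in ALFUS is not a single number: two systems can be equally independent of humans while differing sharply on the other two axes.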

How autonomous is a robot? – 2. Looking inwards

As part of a research project, the SDRC is working on a multi-dimensional system for autonomy classification independent of the mission and environment complexity. This has given rise to the SPDA model, in which the level of autonomy of a system is determined by its characteristics in the dimensions Sense, Plan, Decide and Act. The systems can then be classified into one of the four autonomy levels Manual, Automatic, Semi-autonomous and Autonomous.

Sense: Information intake and classification; further processing into an environment model; encompasses the «orientation ability» of a system

Plan: Analysis, evaluation and interpretation of the environment model; derivation of courses of action; assessment and prioritisation where there are several courses of action; encompasses the «situation awareness» of a system

Decide: Selection and activation of the course of action to be executed; binary decision: «Go» or «No go»; encompasses the «decision-making ability» of a system

Act: Execution of the selected course of action; encompasses the «capacity to act» of a system
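The four dimensions can be sketched as a minimal data structure in Python. All names, and in particular the exact rule mapping dimension ownership to one of the four autonomy levels, are our simplifying assumptions for illustration; the SPDA model itself does not prescribe this code:

```python
from dataclasses import dataclass

HUMAN, SYSTEM = "human", "system"

@dataclass
class SpdaProfile:
    """Records who handles each SPDA dimension for one function of a system."""
    sense: str
    plan: str
    decide: str
    act: str

    def autonomy_level(self) -> str:
        dims = [self.sense, self.plan, self.decide, self.act]
        if all(d == SYSTEM for d in dims):
            return "Autonomous"
        if all(d == HUMAN for d in dims):
            return "Manual"
        # Simplifying assumption: in mixed cases, a function counts as
        # semi-autonomous if the system itself decides, and as merely
        # automatic if the decision remains with the human.
        return "Semi-autonomous" if self.decide == SYSTEM else "Automatic"
```

Classifying a function then amounts to asking, for each dimension, whether the human or the system performs it.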

Level of autonomy using the fictitious example of the «slaughterbot»

But how are systems specifically classified into the different autonomy levels? Perhaps you remember the fictitious slaughterbot video, in which nano drones selectively eliminate human targets. Below, we have attempted to classify the functions of the nano drone in the slaughterbot film, in a simplified and exemplary form, along the SPDA scheme.

We have identified three main functions for the nano drone to fulfil its task: 

  1. Flight
  2. Target recognition
  3. Attack

In accordance with the SPDA model and based on the information in the film and our interpretations, we have determined the level of autonomy of these three main functions as follows:

Functional autonomy of the flight

The nano drone flies independently by means of an autopilot, which calculates the drone's movement using information from the integrated sensors. An on-board computer recognises the environment surrounding the drone and keeps it at a minimum distance from the ground and from obstacles during flight. This also applies to moving obstacles, which the drone avoids accordingly. The human operator can launch the drone with a gesture (hurling it into the air) and control it (holding out a hand, onto which the drone lands). Furthermore, the pictures suggest that the drone flies within a predefined space and has various flight phases (such as search and destroy, hover, landing manoeuvre, etc.).
The dimensions Sense, Decide and Act are handled by the system's computer. Only in the dimension «Plan» does the human come into play, by gesturing the drone to automatically complete specific flight manoeuvres. The level of autonomy of the flight function is thus «Semi-autonomous».

Functional autonomy of target recognition

Cameras attached to the drone simultaneously take photos all around it. In addition, the drone carries algorithms for friend-foe identification, including face recognition software. A target profile predefined by the operator is stored (with, for example, age, gender, health condition, clothing, ethnicity, etc.). The data is passed on to the processor and processed, and an algorithm recognises the targets based on the specified characteristics.
All four dimensions Sense, Plan, Decide and Act are handled by the system. The level of autonomy of target recognition is thus «Autonomous».

Functional autonomy of the attack

The nano drone is equipped with a shaped charge containing 3 grams of explosive. If a target is recognised, the drone flies directly towards the head of the target. Based on the distance to the target calculated by the processor, the charge is detonated shortly before the drone makes contact.
Here, too, all dimensions are handled by the system without human influence. The level of autonomy of the attack is thus «Autonomous».
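The three classifications above can be summarised in a small, self-contained sketch. The dimension assignments follow our interpretation of the film, and the rule that collapses every mixed human/system case to «Semi-autonomous» is a simplification:

```python
# SPDA dimension assignments for the slaughterbot's three main functions,
# following the interpretation given in the text above.
functions = {
    "flight":             {"sense": "system", "plan": "human",
                           "decide": "system", "act": "system"},
    "target recognition": {"sense": "system", "plan": "system",
                           "decide": "system", "act": "system"},
    "attack":             {"sense": "system", "plan": "system",
                           "decide": "system", "act": "system"},
}

def autonomy_level(dims: dict) -> str:
    """Derive an autonomy level from who handles each SPDA dimension."""
    actors = set(dims.values())
    if actors == {"system"}:
        return "Autonomous"
    if actors == {"human"}:
        return "Manual"
    return "Semi-autonomous"  # simplification for mixed human/system cases

levels = {name: autonomy_level(dims) for name, dims in functions.items()}
# flight -> Semi-autonomous; target recognition -> Autonomous; attack -> Autonomous
```

Only the flight function, where the human contributes the planning via gestures, falls short of full autonomy in this reading.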

As mentioned at the beginning, the example is greatly simplified. Another element in the film is, for example, the behaviour of the drones in a swarm. If we briefly recall Graphic 1, we have thus identified the slaughterbot as the robot with the highest level of autonomy (red curve). The reason for this is that the mission and environmental complexity during the autonomous attack on university students shown in the film is very high, and the drones, once released, are completely independent of humans. For example, the drones have to recognise points of entry into a building, search the rooms systematically for targets, coordinate their attacks, plan their paths in real time, etc.

How autonomous is a robot? – 3. Looking ahead 

The explanations above show that the question of autonomy in robots is complex. In this light, let us return to the Kargu-2 drone mentioned at the beginning. According to the UN report, the Kargu-2 was programmed such that it can attack targets without a data connection between human and system. This would be a genuine «fire, forget and find» capability: the operator would, for example, launch the drone and let it fly into the target zone, where it would navigate independently, identify the target and attack without the pilot's involvement. Regarding the autonomy of the drone, we would thus be, in the real world of 2021, fairly close to the fictitious reality depicted in 2017. However, the CEO of the manufacturer STM, Ozgur Guleryuz, disagreed with this assessment. He explained that the autonomous technology is focused on navigation and the identification of target types. An attack could only be started by the operator «pressing the button», with the option of aborting at any time until the drone reaches its target.

This illustrates how important it is to assess the autonomy of a robot in detail. In our view, both perspectives are needed: the external view of how independent the robot is of its environment and in fulfilling its mission, and how independently of humans it acts; and the internal view of which of the robot's functionalities are implemented manually, automatically, semi-autonomously or autonomously. Is it the human who decides whether a combatant is attacked by the drone, or is it the drone's algorithm? In our next article in armasuisse Insights, we would like to demonstrate why this subject is of such high importance in a military context.

Further examinations by armasuisse S+T on slaughterbots

Back in 2018, the SDRC examined whether the nano drone could actually have a fatal effect on a human.