Artificial Intelligence for warfare or for maintaining peace

Richard Benjamins    12 July, 2019

On July 3, 2019, I attended an event organized by the Spanish Center for National Defense Studies (CESEDEN) and the Polytechnic University of Madrid (UPM) on the impact of AI on defense and national security. Not coming from the military world, I was asked to give my view on what AI will look like 20 years from now. But the most interesting part was not my presentation; it was the message conveyed by several generals, in particular Major General José Manuel Roldán Tudela and Major General Juan Antonio Moliner González.

Situations where AI may be helpful in warfare are often related to the safety of soldiers: fighting at the front line, tasks that require high endurance and persistence, lethal or very dangerous environments, avoiding physical or mental exhaustion, and moments when extremely fast reactions are required.

AI can also improve warfare at different tactical levels, related, for example, to the decision-making cycle, situational understanding, the capacity to maneuver, the protection and performance of soldiers, and the capacity to persevere.

But there are several rules when applying AI to such situations, relating to supervision, teams, and security (a sketch of these requirements follows the list):

  • Supervision
    • The AI system needs an advanced user interface for fast interaction
    • All activities should be recorded for later inspection
    • It must be possible to activate and deactivate the system under human control
  • Teams
    • Humans and AI systems work together in teams
    • AI systems should be able to explain themselves
  • Security
    • Hostile manipulation must be prevented
    • No intrusion should be possible
    • Cybersecurity must be ensured
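To make the supervision and teaming requirements concrete, here is a minimal sketch in Python of how an AI decision component might honor them: every activity is recorded for later inspection, each recommendation comes with a human-readable explanation, and a human operator can activate or deactivate the system at any time. All names (SupervisedSystem, recommend, and so on) are illustrative assumptions, not taken from any real defense system.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)


class SupervisedSystem:
    """Hypothetical AI component wrapped with the supervision rules above."""

    def __init__(self, name: str):
        self.name = name
        self.active = False    # stays off until a human enables it
        self.audit_log = []    # every activity is recorded here

    def activate(self, operator: str) -> None:
        """A human operator switches the system on."""
        self.active = True
        self._record("activate", operator, "system enabled")

    def deactivate(self, operator: str) -> None:
        """A human operator switches the system off."""
        self.active = False
        self._record("deactivate", operator, "system disabled")

    def recommend(self, situation: str):
        """Return a recommendation plus an explanation a human can inspect."""
        if not self.active:
            raise RuntimeError("System is deactivated; a human must enable it first.")
        # Placeholder decision logic; a real model would go here.
        action = "hold position"
        explanation = f"Recommended '{action}' because situation '{situation}' is ambiguous."
        self._record("recommend", "model", explanation)
        return action, explanation

    def _record(self, event: str, actor: str, detail: str) -> None:
        """Record every activity for later inspection."""
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "actor": actor,
            "detail": detail,
        }
        self.audit_log.append(entry)
        logging.info("%s: %s", self.name, entry)


# Usage: the human stays in control from start to finish.
system = SupervisedSystem("recon-assistant")
system.activate(operator="operator-1")
action, why = system.recommend("unidentified vehicle on supply route")
system.deactivate(operator="operator-1")
```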

AI is improving defense and national security on land, at sea, and in the air. Some examples include:

  • Land
    • Landmine removal
    • Recognition of important routes
    • Battles in urban zones
    • Air support
  • Sea
    • Mine detection and removal
    • Anti-submarine warfare
    • Maritime Search and Rescue
  • Air
    • Precision attacks
    • Search and rescue in combat
    • Suppression of Enemy Air Defenses

There are, of course, also ethical aspects to the use of AI for defense. For instance, final responsibility for all actions needs to stay with humans. Humans should be “in the loop” (decide everything), “on the loop” (be able to correct), and only in very specific cases “out of the loop.” An important lesson seems to be that when ethical principles are relaxed, armed conflicts increase. Specific aspects that were mentioned include the following (a simple sketch of the three oversight modes follows the list):

  • The principle of reducing unnecessary risk to one's own soldiers – machines seem to make fewer errors than people
  • Discrimination between soldiers and civilians – AI is likely to discriminate better than humans
  • Overall, there is an aversion to lethal autonomous weapon systems (LAWS)
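As a rough illustration of the three oversight modes, the sketch below encodes them as an explicit policy gate: “in the loop” means a human must approve every action, “on the loop” means actions proceed but a human can veto them, and “out of the loop” is reserved for explicitly pre-approved cases. This is a minimal sketch under my own assumptions; the whitelist and function names are hypothetical, not part of any stated doctrine.

```python
from enum import Enum, auto

# The three human-oversight modes described above; everything else
# in this sketch is illustrative.


class OversightMode(Enum):
    IN_THE_LOOP = auto()      # a human decides every action
    ON_THE_LOOP = auto()      # actions proceed, but a human can correct or veto
    OUT_OF_THE_LOOP = auto()  # only for very specific, pre-approved cases


# Hypothetical whitelist of the "very specific cases" where full
# autonomy might be tolerated (e.g. non-lethal tasks).
AUTONOMY_WHITELIST = {"mine detection", "search and rescue scan"}


def may_execute(action: str, mode: OversightMode,
                human_approved: bool = False,
                human_vetoed: bool = False) -> bool:
    """Gate an action according to the active oversight mode."""
    if mode is OversightMode.IN_THE_LOOP:
        # Nothing happens without explicit human approval.
        return human_approved
    if mode is OversightMode.ON_THE_LOOP:
        # The action proceeds by default, unless a human steps in.
        return not human_vetoed
    # OUT_OF_THE_LOOP: allowed only for whitelisted tasks.
    return action in AUTONOMY_WHITELIST


# Usage: a lethal action never passes without a human in (or on) the loop.
assert may_execute("engage target", OversightMode.IN_THE_LOOP) is False
assert may_execute("engage target", OversightMode.IN_THE_LOOP, human_approved=True)
assert may_execute("engage target", OversightMode.OUT_OF_THE_LOOP) is False
assert may_execute("mine detection", OversightMode.OUT_OF_THE_LOOP)
```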

When I raised a specific question about LAWS, the answer was that humans always need to stay in control of life-or-death decisions. But it was also recognized that there is a serious risk of an AI arms race. Even though many countries may be completely against the use of LAWS, if one country starts to develop and threaten with LAWS, other countries might feel obliged to follow. This is probably what lies behind the withdrawal of France and Germany from the “Killer Robots” ban. Humanity has experience with the nuclear arms race and, so far, has been wise enough to use nuclear weapons only as a threat. However, nuclear arms have a very high barrier to entry, probably much higher than that of LAWS.

Let’s hope that humanity is also wise enough with LAWS and that no one has to brace for impact.
