THE MIL & AERO COMMENT – We hear a lot of talk about the capabilities artificial intelligence (AI) and machine learning can offer the military. It is less common to discuss when or why using AI might be the right thing to do. That is a much deeper, more complex, and more philosophical question.
At its core, the military AI dilemma boils down to this: Where in the military chain of command do human reasoning and decision-making stop, and where do computers take over? It's an extremely delicate subject, encompassing the question of whether we can trust computers to make life-or-death decisions – ranging from the strategic deployment of military forces to whether or not to pull the trigger against a suspected terrorist. It also involves thinking deeply about who is really responsible for crucial military decisions: people or machines?
What makes many people nervous is the growing use of military AI and the question of how far we can go before we get too close to that line.
Beyond science fiction
Science fiction aside, AI and machine learning prove to be valuable assistants to human decision-makers. Machines process data much faster than human brains and can offer a range of suggestions on what path to take in difficult situations. The more the military integrates AI into its reconnaissance and combat systems, the more commanders become familiar with it, and the more difficult it becomes to draw a clear line between where the use of AI stops and the moment when humans must take over.
Related: Artificial Intelligence and Machine Learning for Unmanned Vehicles
Answering these questions is not an enviable task, but the military is nevertheless beginning to confront them. In October, U.S. military researchers announced an $8 million contract with COVAR LLC in McLean, Virginia, for the Autonomy Standards and Ideals with Military Operational Values (ASIMOV) project.
ASIMOV seeks to develop criteria for measuring the ethical use of autonomy in future military machines, as well as the readiness of autonomous systems to operate in military operations.
The ASIMOV program aims to create an ethical autonomy language to enable the testing community to assess the ethical difficulty of specific military scenarios and the ability of autonomous systems to operate ethically in those scenarios.
COVAR will develop prototype modeling environments to explore military machine automation scenarios and their ethical challenges. If successful, ASIMOV will produce some of the standards against which future autonomous systems can be judged.
Ethical difficulties
COVAR will develop criteria for autonomy itself – not autonomous systems or the algorithms behind them – and will include a group of ethical, legal, and societal implications advisors to provide guidance throughout the program.
The company will develop prototype generative modeling environments to explore scenario iterations and variability in the face of increasing ethical challenges. If successful, ASIMOV will lay the foundation for defining the benchmark against which future autonomous systems can be evaluated.
ASIMOV will use the U.S. Department of Defense's Responsible AI (RAI) Strategy and Implementation Pathway, published in June 2022, as a guideline for developing benchmarks for responsible military AI technology. This document builds on the Defense Department's five ethical principles for military AI: responsible, equitable, traceable, reliable, and governable.
A framework for measuring and benchmarking military machine autonomy will help inform military leaders as they develop and evolve autonomous systems – much like the Technology Readiness Levels (TRLs) developed in the 1970s and widely used today.
The ASIMOV project will not resolve all the questions surrounding the military use of AI and machine learning – far from it – but it is a start. Not only will the project open discussions and seek real ways to measure AI ethics in critical decisions, but it also will be a step toward taking science fiction out of the equation.