EGLIN AIR FORCE BASE, Florida – US military researchers wanted to explore the ethical and technical challenges of using artificial intelligence (AI) and machine autonomy in future military operations. They found a solution with COVAR LLC in McLean, Virginia.
Officials of the U.S. Air Force Research Laboratory Munitions Directorate at Eglin Air Force Base, Fla., announced last month an $8 million contract with COVAR for the Autonomy Standards and Ideals with Military Operational Values (ASIMOV) project.
The Air Force Research Laboratory awarded the contract on behalf of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Virginia.
Related: Artificial Intelligence (AI) in Unmanned Vehicles
ASIMOV aims to develop benchmarks to measure the ethical use of autonomy in future military machines and the readiness of autonomous systems to perform in military operations.
Ethical performance
The rapid development of machine autonomy and artificial intelligence (AI) technologies requires ways to measure and evaluate the technical and ethical performance of autonomous systems. ASIMOV will develop and demonstrate autonomy benchmarks; it will not develop autonomous systems or algorithms for autonomous systems.
The ASIMOV program aims to create the language of ethical autonomy to enable the testing community to assess the ethical difficulty of specific military scenarios and the ability of autonomous systems to operate ethically in those scenarios.
Related: Technology Trends in Autonomous Vehicles
COVAR will develop benchmarks for autonomy – not autonomous systems or algorithms for autonomous systems – and will convene a group of ethical, legal, and societal implications (ELSI) advisors to assist performers and provide guidance throughout the program.
The company will develop prototype generative modeling environments to explore scenario iterations and variability in the face of increasing ethical challenges. If successful, ASIMOV will lay the foundation for defining the benchmark against which future autonomous systems can be evaluated.
Responsible AI
ASIMOV will use the U.S. Department of Defense Responsible AI (RAI) Strategy and Implementation Pathway, published in June 2022, as a guideline for developing benchmarks for responsible military AI technology. This document outlines the five ethical principles of responsible U.S. military AI: responsible, equitable, traceable, reliable, and governable.
A framework for measuring and benchmarking machine autonomy in military operations will help inform military leaders as they develop and deploy autonomous systems – much like the Technology Readiness Levels (TRLs) developed in the 1970s, which are widely used today.
ASIMOV is a two-phase, 24-month program. For more information, contact COVAR LLC online at https://covar.com, the Air Force Research Laboratory Munitions Directorate at https://www.afrl.af.mil/RW/, or DARPA at https://www.darpa.mil/program/autonomy-standards-and-ideals-with-military-operational-values.