As AI continues to advance rapidly, the Pentagon has published a set of ethical guidelines for its use of artificial intelligence in both combat and non-combat military scenarios.
With more AI robot videos making their way onto the internet each day, the Pentagon has decided to set ethical guidelines for killer robots and other AI devices. The document was published by the Defense Innovation Board, whose members include former Google CEO Eric Schmidt, Hayden Planetarium director Neil deGrasse Tyson, and LinkedIn co-founder Reid Hoffman.
Five main principles
The document is called AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense. It is mainly focused on establishing ground rules for the use of AI while the technology is still in its early stages, citing civil engineering and nuclear-powered vessels as precedents for governing emerging technologies. The guidelines center on five principles intended to keep AI-driven systems under control:
Responsible
Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems.
Equitable
DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.
Traceable
DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, design procedures, and documentation.
Reliable
DoD AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
Governable
DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.
The document also includes recommendations to support these principles, such as creating an AI steering committee, investing in research, and providing AI training for the workforce. The Board spent 15 months consulting experts at Facebook, the MIT Media Lab, the OpenAI research lab co-founded by Elon Musk, and Stanford University.