Operationalizing DoD's Ethical Principles for AI
About DIU's Responsible AI Initiative
DIU launched a strategic initiative in March 2020 to integrate the DoD's Ethical Principles for Artificial Intelligence (AI) into its commercial prototyping and acquisition programs. For over a year, DIU explored methods for implementing these principles with DoD partners across several AI prototype projects, covering applications including, but not limited to, predictive health, underwater autonomy, predictive maintenance, and supply chain analysis. The result is a set of Responsible AI (RAI) Guidelines informed by DIU's practical experience and drawing on best practices from government, non-profit, academic, and industry partners.
DIU will continue collaborating with experts and stakeholders from government, industry, academia, and civil society to further develop the RAI Guidelines. To provide feedback on the RAI Guidelines or to schedule a discussion on implementing them in your department or agency, please email: firstname.lastname@example.org.
Responsible AI Guidelines in Practice
DIU's RAI Guidelines aim to provide a clear, efficient process of inquiry for personnel involved in AI system development (e.g., program managers, commercial vendors, and government partners) to achieve the following goals:
ensure that the DoD's Ethical Principles for AI are integrated into the planning, development, and deployment phases of the technical lifecycle;
effectively examine, test, and validate that all programs and prototypes align with DoD's Ethical Principles for AI; and
leverage a process that is reliable, replicable, and scalable across a variety of programs.
DIU's RAI Guidelines are presented in the form of detailed worksheets that instruct and guide AI vendors, DoD stakeholders, and DIU program managers on how to properly scope AI problem statements. They also provide detailed guidance on the considerations that each of these stakeholders should keep in mind as they proceed through each phase of AI system development.
The Responsible AI report summarizes the Responsible AI Guidelines that resulted from DIU's efforts to operationalize the DoD's Ethical Principles for AI within its prototyping efforts. It also provides detailed case studies demonstrating the value of the RAI Guidelines in practice and identifying specific lessons learned from these efforts.
Phase I: Planning
Planning refers to the process of conceptualizing and designing an AI system to solve a given problem. In the planning phase, personnel who wish to build an AI system define its prospective functionality, the resources required to create it, and the operational context into which it will be deployed.
Phase II: Development
Development refers to the iterative process of writing and evaluating the computer code that makes up the AI system. In the development phase, DoD and/or company personnel focus on five lines of inquiry – manipulation of data models, system performance monitoring, output verification, audit mechanisms, and governance roles – as they build out the planned AI system.
Phase III: Deployment
Deployment refers to the process of using the AI system to solve the target problem in practice. The deployment phase focuses on three sets of continuous evaluation procedures that must be scoped and performed on an ongoing basis throughout the AI system's life cycle.