The Office of Management and Budget (OMB) has released guidance for regulating AI applications, pursuant to Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence.”

The guidance – issued in coordination with the directors of the Office of Science and Technology Policy (OSTP), Domestic Policy Council, and National Economic Council – informs the development of regulatory and non-regulatory approaches to technologies and organizations enabled by AI, in addition to reducing barriers to developing AI technologies.

“The deployment of AI holds the promise to improve efficiency, effectiveness, safety, fairness, welfare, transparency, and other economic and social goals, and America’s continued status as a global leader in AI development is important to preserving our economic and national security,” the guidance states. “The importance of developing and deploying AI requires a regulatory approach that fosters innovation and growth and engenders trust, while protecting core American values, through both regulatory and non-regulatory actions and reducing unnecessary barriers to the development and deployment of AI.”

The guidance goes on to list “Principles for the Stewardship of AI Applications” that are relevant to promoting the innovative use of AI and encouraging its growth. Those principles include:

  1. Public Trust in AI— Because AI applications pose risks to privacy, individual rights, personal choice, civil liberties, public health, safety, and security, the government’s regulatory and non-regulatory approaches to AI should contribute to public trust.
  2. Public Participation— Public participation improves agency accountability and regulatory outcomes while boosting public trust and confidence.
  3. Scientific Integrity and Information Quality— Agencies should hold scientific and technical information “that is likely to have a clear and substantial influence on important public policy or private sector decisions to a high standard of quality and transparency.”
  4. Risk Assessment and Management— Approaches to AI, both regulatory and non-regulatory, should be based on consistent application of risk assessment and risk management.
  5. Benefits and Costs— Significant investments in applying and deploying AI into already-regulated industries should not take place unless there is significant economic potential at stake.
  6. Flexibility— Federal agencies should pursue performance-based and flexible approaches to AI that are technology neutral, while not imposing mandates on companies that would harm innovation.
  7. Fairness and Non-Discrimination— Agencies should transparently consider the impacts that AI may have on discrimination.
  8. Disclosure and Transparency— This can improve public trust by allowing “non-experts to understand how an AI application works and technical experts to understand the process by which AI made a given decision.”
  9. Safety and Security— Promote safe and secure AI systems and develop systems that operate as intended. Additionally, encourage safety and security throughout the AI design, development, deployment, and operation process.
  10. Interagency Coordination— A whole-of-government approach to AI oversight requires interagency coordination and sharing experiences to ensure consistency and predictability of AI-related policies.
Jordan Smith
Jordan Smith is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.