Witnesses at a House Committee on Science, Space, and Technology joint subcommittee hearing on Wednesday praised the work the Federal government has done to start building a foundation for a regulatory framework for artificial intelligence, but said more work is needed.
The Subcommittee on Investigations and Oversight and the Subcommittee on Research and Technology held a joint hearing on Oct. 18, at which witnesses called effective risk management practices the "bedrock" of safe and effective AI implementation and deployment.
Elham Tabassi, associate director for Emerging Technologies, Information Technology Laboratory at the National Institute of Standards and Technology (NIST), explained that “identifying, mitigating, and minimizing risks and potential harms associated with AI technologies are essential steps towards the development of trustworthy AI systems and their appropriate and responsible use.”
“More than ever before, our economy and society sorely need and depends on … standards and measurements [for technology]. That is especially true when it comes to AI,” Tabassi said.
Earlier this year, NIST released the AI Risk Management Framework, a voluntary framework that provides a flexible, structured, and measurable process to address AI risks purposefully and continually throughout the AI lifecycle.
As the government moves forward with its AI efforts, Tabassi said that risk management is crucial because it offers a path to minimize the potential negative impacts of AI systems while pointing to opportunities to maximize their positive impacts and spur innovation.
Michael Kratsios, managing director of Scale AI and the fourth chief technology officer of the United States, also highlighted several actions the government has taken over the years to implement a regulatory framework for AI.
However, he said existing efforts – including AI legislation and presidential executive orders – have fallen short, largely because of slow implementation.
“While considering additional legislation or pursuing new administrative actions on AI, policymakers should first ensure that federal agencies fully implement existing laws and are following through with requirements from executive orders relating to responsible AI adoption,” Kratsios said.
In addition, Kratsios suggested that in regulating AI systems, lawmakers should build upon existing methodologies and policies. For example, he recommended that the Federal government pursue a use case- and sector-specific, risk-based approach rooted in high-quality testing and evaluation.
“We are not starting from scratch on this approach. Test and evaluation are a standard part of bringing products to market in nearly every industry,” Kratsios said.
“In short, there has been a tremendous amount of work on AI oversight to date across the Federal government. Any new legislative and regulatory actions should build upon and supplement this existing robust body of work and should focus on ensuring a use case and sector-specific, risk-based approach to regulation,” Kratsios concluded.