Secretary of Commerce Gina Raimondo released a strategic vision for the U.S. AI Safety Institute (AISI) today, outlining the steps that the AISI plans to take to advance the science of AI safety and facilitate safe and responsible AI innovation.

At the direction of President Biden’s October 2023 AI executive order, the National Institute of Standards and Technology (NIST) within the Department of Commerce launched the AISI and established an executive leadership team.

Raimondo released the AISI’s strategic vision as the AI Seoul Summit begins today. The strategic vision describes the AISI’s philosophy, mission, and strategic goals.

“Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers of AI that exist if it is not developed and deployed responsibly. That is the focus of our work every single day at the U.S. AI Safety Institute, where our scientists are fully engaged with civil society, academia, industry, and the public sector so we can understand and reduce the risks of AI, with the fundamental goal of harnessing the benefits,” said Raimondo.

According to the nine-page document released today, the AISI is rooted in two core mission principles – that beneficial AI depends on AI safety and that AI safety depends on science. The AISI aims to address key challenges, including a lack of standardized metrics for frontier AI, underdeveloped testing and validation methods, limited national and global coordination on AI safety issues, and more.

The AISI’s strategic vision lists three key goals to advance its vision of a future where safe AI innovation enables a thriving world:

  • Advancing the science of AI safety;
  • Articulating, demonstrating, and disseminating the practices of AI safety; and
  • Supporting institutions, communities, and coordination around AI safety.

To achieve these goals, the AISI plans to, among other activities, conduct testing of advanced models and systems to assess potential and emerging risks; develop guidelines on evaluations and risk mitigations; and perform and coordinate technical research. The AISI said it will work closely with a diverse range of AI industry stakeholders, civil society members, and international partners to achieve these objectives.

“The strategic vision we released today makes clear how we intend to work to achieve that objective and highlights the importance of cooperation with our allies through a global scientific network on AI safety,” Raimondo said. “Safety fosters innovation, so it is paramount that we get this right and that we do so in concert with our partners around the world to ensure the rules of the road on AI are written by societies that uphold human rights, safety, and trust.”

In addition to releasing a strategic vision, Raimondo also shared the Commerce Department’s plans to help launch a global scientific network for AI safety through meaningful engagement with AI Safety Institutes and other government-backed scientific offices focused on AI safety and committed to international cooperation.

This network will expand on AISI’s previously announced collaborations with the AI Safety Institutes of the UK, Japan, Canada, and Singapore, as well as the European AI Office, and it will catalyze a new phase of international coordination on AI safety science and governance.

To further collaboration within this network, Raimondo also announced that AISI intends to convene international AI Safety Institutes and other stakeholders later this year in San Francisco.

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.