The Biden-Harris administration announced today that the Department of Commerce, through its National Institute of Standards and Technology (NIST) component, will establish the U.S. Artificial Intelligence Safety Institute (AISI).

The AISI will support the responsibilities assigned to the Commerce Department under the historic AI executive order (EO) that President Biden signed earlier this week.

Specifically, the AISI will facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content through watermarking, and provide testing environments for researchers to evaluate emerging AI risks and address known impacts.

The NIST-led team will leverage outside expertise – including working with partners in academia, industry, government, and civil society – to advance AI safety. In addition, the AISI will work with similar institutes in allied and partner nations – including with the UK’s AISI – to align and coordinate work in this sphere.

Today’s announcement coincides with Vice President Kamala Harris’ participation in the UK AI Safety Summit, alongside U.S. Secretary of Commerce Gina Raimondo.

“President Biden and I believe that all leaders from government, civil society, and the private sector have a moral, ethical, and societal duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and that ensures that everyone is able to enjoy its benefits,” Vice President Harris said during her policy speech on AI in the UK today. “I am proud to announce that President Biden and I have established the United States AI Safety Institute, which will create rigorous standards to test the safety of AI models for public use.”

“Through the establishment of the U.S. AI Safety Institute, we at the Department of Commerce will build on NIST’s long history of developing standards to inform domestic and international technological progress for the common good,” Secretary Raimondo said. “Together, in coordination with Federal agencies across government and in lockstep with our international partners and allies, we will work to fulfill the President’s vision to manage the risks and harness the benefits of AI.”

“I am thrilled that NIST is taking on this critical role for the United States that will expand on our wide-ranging efforts in AI,” NIST Director Laurie Locascio said in a statement. “The U.S. AI Safety Institute will harness talent and expertise from NIST and the broader AI community to build the foundation for trustworthy AI systems.”

VP Announces Additional Efforts to Advance AI in US Government, Globally

Vice President Harris announced six additional AI initiatives today during her visit to the UK, highlighting the administration’s commitment to advancing the safe and responsible use of the emerging technology.

She unveiled the Office of Management and Budget’s (OMB) first-ever draft policy guidance for U.S. Federal government use of AI. The document – which is open for public comment through Dec. 5 – outlines concrete steps to advance responsible AI innovation in government, increase transparency and accountability, protect Federal workers, and manage risks from sensitive uses of AI.

“Today, we are also taking steps to establish requirements that when the United States government uses AI, it advances the public interest,” the vice president said. “And we intend that these domestic AI policies will serve as a model for global policy, understanding that AI developed in one nation can impact the lives and livelihoods of billions of people around the world.”

Building on the principles of OMB’s draft policy guidance, the Biden-Harris administration, through the State Department, intends to work with the Freedom Online Coalition of 38 countries to develop a pledge to incorporate responsible and rights-respecting practices in government development, procurement, and use of AI.

Such a pledge is important to ensure AI systems are developed and used in a manner that is consistent with applicable international law, the White House said.

The vice president also announced that 31 nations have joined the United States in endorsing its Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, originally unveiled in February. The U.S. is calling on others to join the effort.

The declaration establishes a set of norms for the responsible development, deployment, and use of military AI capabilities, helping states around the globe harness the benefits of AI – including autonomous functions and systems for their military and defense establishments – in a lawful and responsible manner.

“It is our belief that technology with global impact deserves global action,” she said. “And so, to provide order and stability in the midst of global technological change, I firmly believe that we must be guided by a common set of understandings among nations. And that is why the United States will continue to work with our allies and partners to apply existing international rules and norms to AI and work to create new rules and norms.”

As of Nov. 1, countries joining the declaration include Albania, Australia, Belgium, Bulgaria, Canada, Czech Republic, Denmark, Estonia, Finland, France, Georgia, Germany, Hungary, Iceland, Ireland, Italy, Japan, Kosovo, Latvia, Liberia, Malawi, Montenegro, Morocco, North Macedonia, Portugal, Romania, Singapore, Slovenia, Spain, Sweden, and the United Kingdom.

The Biden-Harris administration will also launch an effort to detect and block AI-driven fraudulent phone calls that target and steal from the most vulnerable people in the U.S., Vice President Harris announced today.

The White House will host a virtual hackathon, inviting companies to field teams of technology experts to build AI models that can detect and block unwanted robocalls and robotexts – particularly those using novel AI-generated voice models, which disproportionately harm the elderly. The Federal Communications Commission is also actively exploring creative ways to use AI to target AI-driven fraud and robocalls.

Following the release of its EO – which tasks the Commerce Department with developing AI guidelines for watermarking – the Biden-Harris administration is also calling on all nations to support the development and implementation of international standards that enable the public to effectively identify and trace authentic government-produced digital content, as well as AI-generated or manipulated content.

Finally, Vice President Harris announced today that ten leading foundations have collectively committed more than $200 million in funding for AI initiatives and are forming a funders network to coordinate new philanthropic giving. The network’s work will be organized around ensuring AI protects democracy and rights, driving AI innovation in the public interest, empowering workers to thrive amid AI-driven changes, improving the transparency and accountability of AI, and supporting international rules and norms on AI.

The foundations launching this effort are the David and Lucile Packard Foundation; Democracy Fund; the Ford Foundation; Heising-Simons Foundation; the John D. and Catherine T. MacArthur Foundation; Kapor Foundation; Mozilla Foundation; Omidyar Network; Open Society Foundations; and the Wallace Global Fund.

“As leaders from government, civil society, and the private sector, let us work together to build a future where AI creates opportunity, advances equity, and protects fundamental freedoms and rights,” the vice president’s speech concluded. “Let us work together to fulfill our duty to make sure artificial intelligence is in the service of the public interest.”
