The Department of Homeland Security (DHS) has released a set of voluntary recommendations for the secure development and deployment of AI for organizations at each layer of the AI supply chain – cloud providers, AI developers, and critical infrastructure owners – as well as civil society and the public sector.

The “Roles and Responsibilities Framework for AI in Critical Infrastructure” was developed by DHS’s AI Safety and Security Board, which is made up of two dozen private and public sector partners including OpenAI CEO Sam Altman and White House Office of Science and Technology Policy Director Arati Prabhakar.

“AI offers a once-in-a-generation opportunity to improve the strength and resilience of U.S. critical infrastructure, and we must seize it while minimizing its potential harms. The Framework, if widely adopted, will go a long way to better ensure the safety and security of critical services that deliver clean water, consistent power, internet access, and more,” DHS Secretary Alejandro Mayorkas said in a statement.

“The choices organizations and individuals involved in creating AI make today will determine the impact this technology will have in our critical infrastructure tomorrow. I am grateful for the diverse expertise of the Artificial Intelligence Safety and Security Board and its members, each of whom informed these guidelines with their own real-world experiences developing, deploying, and promoting the responsible use of this extraordinary technology,” he continued, adding, “I urge every executive, developer, and elected official to adopt and use this Framework to help build a safer future for all.”

DHS said that, if adopted, the voluntary framework will enhance harmonization of and help operationalize safety and security practices, improve the delivery of critical services, enhance trust and transparency among entities, protect civil rights and civil liberties, and advance AI safety and security research that will further enable critical infrastructure to deploy emerging technology responsibly.

“Despite the growing importance of this technology to critical infrastructure, no comprehensive regulation currently exists,” the 35-page framework says.

DHS identified three primary categories of AI safety and security vulnerabilities in critical infrastructure: attacks using AI, attacks targeting AI systems, and design and implementation failures. To address these vulnerabilities, the framework recommends actions directed to each of the five key stakeholders supporting the development and deployment of AI in U.S. critical infrastructure.

The framework encourages cloud service providers to support customers further downstream of AI development by monitoring anomalous activity and establishing clear pathways to report suspicious and harmful activities.

The framework recommends that AI developers adopt a secure by design approach, evaluate dangerous capabilities of AI models, and ensure model alignment with human-centric values. It further encourages AI developers to implement strong privacy practices; conduct evaluations that test for vulnerabilities; and support independent assessments for models that present heightened risks to critical infrastructure systems and their consumers.

The framework recommends a number of practices for critical infrastructure owners and operators focused on the deployment level of AI systems, including maintaining strong cybersecurity practices that account for AI-related risks, protecting customer data when fine-tuning AI products, and providing meaningful transparency regarding their use of AI to provide goods, services, or benefits to the public.

The framework also encourages critical infrastructure entities to play an active role in monitoring the performance of these AI systems and share results with AI developers and researchers to help them better understand the relationship between model behavior and real-world outcomes.

The framework encourages civil society’s continued engagement on standards development alongside government and industry, as well as research on AI evaluations that considers critical infrastructure use cases. It also envisions an active role for civil society in informing the values and safeguards that will shape AI system development and deployment in essential services.

Finally, the framework encourages continued cooperation between the Federal government and international partners to protect all global citizens, as well as collaboration across all levels of government to fund and support efforts to advance foundational research on AI safety and security.

The report complements other work carried out by the Biden administration on AI safety, such as the guidance from the Commerce Department’s AI Safety Institute, on managing a wide range of misuse and accident risks.

“Ensuring the safe, secure, and trustworthy development and use of AI is vital to the future of American innovation and critical to our national security,” said Secretary of Commerce Gina Raimondo. “This new Framework will complement the work we’re doing at the Department of Commerce to help ensure AI is responsibly deployed across our critical infrastructure to help protect our fellow Americans and secure the future of the American economy.”

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.