The Cybersecurity and Infrastructure Security Agency (CISA) said today it aims to use artificial intelligence (AI) technologies responsibly in its missions to protect federal civilian agencies and critical infrastructure sectors, while also helping government and private sector organizations ensure that the AI-enabled software they use is secure by design.
Those are some of the top-line takeaways from CISA’s Roadmap for Artificial Intelligence, a policy document released today by the cyber protection agency and its parent, the Department of Homeland Security (DHS).
CISA and DHS are publishing the roadmap document as a follow-on to the Oct. 30 release of the White House’s AI Executive Order, joining other agencies that have done so, including the Defense Department, which published its latest AI strategy on Nov. 2.
The White House order tasks several agencies, including CISA, with important follow-up work to put the executive order’s directions into operation, both within CISA itself and in the agency’s missions involving other organizations.
CISA summed up its path forward from the executive order by pledging to “assess possible cyber-related risks to the use of AI and provide guidance to the critical infrastructure sectors that Americans rely on every hour of every day,” and working to “capitalize on AI’s potential to improve U.S. cyber defenses and develop recommendations for the red-teaming of generative AI.”
Five Lines of Effort
In the roadmap document released today, CISA said it will proceed along five lines of effort to reach those broad goals.
First, the agency said it will “use AI-enabled software tools to strengthen cyber defense and support its critical infrastructure mission.” The agency’s adoption of AI tech “will ensure responsible, ethical, and safe use — consistent with the Constitution and all applicable laws and policies, including those addressing federal procurement, privacy, civil rights, and civil liberties,” CISA said.
Second, CISA said it will “assess and assist secure by design, AI-based software adoption across a diverse array of stakeholders, including federal civilian government agencies; private sector companies; and state, local, tribal, and territorial (SLTT) governments.” That work will include “development of best practices and guidance” for secure and resilient AI development and implementation, as developed by the National Institute of Standards and Technology (NIST), as well as recommendations for “red-team” testing of generative AI applications.
Third, CISA said it will “assess and recommend mitigation of AI threats facing our nation’s critical infrastructure in partnership with other government agencies and industry partners that develop, test, and evaluate AI tools.” The agency will take on that task by establishing an AI-focused effort built on its existing Joint Cyber Defense Collaborative (JCDC), a government-private sector partnership that develops plans and guidance for cyber defense operations. The new JCDC offshoot will focus on “collaboration around threats, vulnerabilities, and mitigations related to AI systems,” CISA said today.
Fourth, the agency said it will contribute to DHS-led and interagency efforts to create “policy approaches for the U.S. government’s overall national strategy on cybersecurity and AI, and supporting a whole-of-DHS approach on AI-based-software policy issues.” CISA said that work will involve coordinating with international partners to advance global AI security best practices and principles.
And finally, CISA said it will continue efforts “to educate our workforce on AI software systems and techniques,” and actively recruit interns, fellows, and future employees with AI expertise. As part of that work, CISA said it will “ensure that internal training reflects — and new recruits understand — the legal, ethical, and policy aspects of AI-based software systems in addition to the technical aspects.”
“Artificial Intelligence holds immense promise in enhancing our nation’s cybersecurity, but as the most powerful technology of our lifetimes, it also presents enormous risks,” said CISA Director Jen Easterly today on the roadmap’s release.
“Our Roadmap for AI, focused at the nexus of AI, cyber defense, and critical infrastructure, sets forth an agency-wide plan to promote the beneficial uses of AI to enhance cybersecurity capabilities; ensure AI systems are protected from cyber-based threats; and deter the malicious use of AI capabilities to threaten the critical infrastructure Americans rely on every day,” she said.
“DHS has a broad leadership role in advancing the responsible use of AI and this cybersecurity roadmap is one important element of our work,” said DHS Secretary Alejandro Mayorkas.
“In last month’s Executive Order, the President called on DHS to promote the adoption of AI safety standards globally and help ensure the safe, secure, and responsible use and development of AI,” he said. “CISA’s roadmap lays out the steps that the agency will take as part of our Department’s broader efforts to both leverage AI and mitigate its risks to our critical infrastructure and cyber defenses.”
“CISA’s AI roadmap defines clearly the agency’s vision and the role it will play in the implementation of the AI Executive Order,” said Matt Hayden, vice president of cyber, intelligence and homeland security at General Dynamics Information Technology (GDIT), and a former assistant secretary for cyber, infrastructure, risk, and resilience policy at DHS.
“One thing is clear, AI is going to be a critical part of protecting every application and service across the federal government,” Hayden said. “AI and AI-enabled systems will need to be leveraged in a collective risk management framework that ensures that risks are mitigated as more services and enhanced products are introduced to systems. CISA’s roadmap strongly aligns to GDIT’s view of balancing innovation with safety and our commitment to creating safe, secure, and responsible AI technologies.”