The Cybersecurity and Infrastructure Security Agency (CISA) and a coalition of international partners on May 1 released new guidance urging organizations to take a “careful” approach to adopting agentic artificial intelligence (AI) technologies and outlining cybersecurity risks and safeguards for deployment.

The guide – titled Careful Adoption of Agentic Artificial Intelligence (AI) Services – was developed by CISA in collaboration with partners, including the Australian Cyber Security Centre.

It aims to help developers, vendors, and operators better understand and mitigate risks tied to agentic AI systems.

Agentic AI – systems capable of acting autonomously to complete tasks – is increasingly being deployed in critical infrastructure and defense sectors to automate operations and improve efficiency, but those benefits come with new and evolving cybersecurity concerns, CISA explained.

Among the risks highlighted in the guidance are expanded attack surfaces, the potential for excessive system permissions known as “privilege creep,” behavioral misalignment in AI decision-making, and limited visibility into system activity due to unclear or incomplete event records.

To address these challenges, the guide recommends that organizations adopt a measured, security-first approach when implementing agentic AI technologies, starting with limited and controlled use cases.

One of the primary recommendations is to avoid granting agentic AI systems broad or unrestricted access – particularly when it comes to sensitive data or mission-critical infrastructure – to reduce the potential impact of compromised or malfunctioning systems.

The guidance also advises organizations to begin deploying agentic AI in low-risk, non-sensitive environments, allowing teams to better understand system behavior and potential vulnerabilities before expanding to more critical applications.

In addition, organizations are encouraged to incorporate agentic AI into their broader cybersecurity and risk management frameworks, ensuring that the technology is evaluated and monitored as part of an overall security posture rather than treated as a standalone capability.

“CISA is committed to supporting the US’s adoption of AI that includes ensuring it aligns with President Trump’s Cyber Strategy for America and is cyber secure,” said CISA Acting Director Nick Andersen.

“CISA encourages agentic AI developers, vendors and operators to review this guide,” he said.

The agency said the guidance is designed not only to address current risks but also to help organizations prepare for future threats as agentic AI systems continue to evolve and become more deeply integrated into critical operations.

John Curran
John Curran is MeriTalk's Managing Editor covering the intersection of government and technology.