The Intelligence Advanced Research Projects Agency (IARPA) is seeking information on established vulnerabilities and threats that could impact the safe use of large language model (LLM) AI technologies – like ChatGPT – by intelligence analysts.

According to a request for information (RFI) posted Monday, IARPA is soliciting frameworks that categorize the vulnerabilities and threats associated with LLM technologies, specifically in the context of their potential use in intelligence analysis.

“LLMs have received much public attention recently due, among other things, to their human-like interaction with users,” the agency’s RFI reads. “These capabilities promise to substantially transform and enhance work across sectors in the coming years. However, LLMs have been shown to exhibit erroneous and potentially harmful behavior, posing threats to the end-users.”

IARPA is seeking information from organizations on four categories:

  • Frameworks for classifying and understanding the range of LLM threats and vulnerabilities;
  • Specific LLM threats and vulnerabilities with descriptions of each and their impacts;
  • Novel methods to detect or mitigate threats to users posed by LLM vulnerabilities; and
  • Novel methods to quantify confidence in LLM outputs.

According to the RFI, IARPA is interested in characterizations and methods for both “white box” models, where evaluators have some privileged access to a model’s parameters or code, and “black box” models, where they have no such access.
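To make the access distinction concrete, the sketch below (not drawn from the RFI) shows one simple way confidence in an LLM’s output might be quantified under each condition: with white-box access, the per-token probabilities the model assigned can be read directly, while in the black-box case confidence must be inferred indirectly, for example by sampling the model repeatedly and measuring answer agreement. The function names and the `query_model` callable are illustrative assumptions, not anything IARPA specifies.

```python
import math
from collections import Counter

def whitebox_confidence(token_logprobs: list[float]) -> float:
    # White-box setting: privileged access lets us read the log-probability
    # the model assigned to each generated token. Averaging and exponentiating
    # yields a geometric-mean token probability as a crude confidence score.
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def blackbox_confidence(query_model, prompt: str, n_samples: int = 10) -> float:
    # Black-box setting: no access to internals, so we sample the model
    # several times and treat answer agreement as a self-consistency proxy
    # for confidence. `query_model` is a hypothetical callable wrapping
    # an LLM API that returns a short answer string.
    answers = [query_model(prompt) for _ in range(n_samples)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / n_samples
```

In this sketch, a black-box score of 0.9 would mean nine of ten samples returned the same answer; the RFI leaves it to respondents to propose more rigorous measures.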

Responses are due to IARPA by Aug. 21.

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.