After months of prodding, 19 big tech companies have shared their efforts to crack down on malicious uses of AI this election cycle with Senate Intelligence Committee Chairman Mark Warner, D-Va.

In February, a group of technology companies – including big names like Amazon, Google, and TikTok – signed a pact at the Munich Security Conference to help combat the use of harmful AI-generated content meant to deceive voters in the 2024 elections.

In May, Sen. Warner pushed for specific answers about the actions that these companies are taking to make good on the Tech Accord.

With under 100 days until the U.S. presidential election, Sen. Warner on Aug. 7 made public responses to his probe by 19 companies: Adobe, Amazon, Anthropic, Arm, Google, IBM, Intuit, LG, McAfee, Microsoft, Meta, OpenAI, Snap, Stability AI, TikTok, Trend, True Media, Truepic, and X.

For example, Google noted that it was the first tech company to launch new disclosure requirements for election ads containing synthetic content.

Amazon said that its generative AI foundation model – Titan Image Generator – is watermarked “to help reduce the spread of disinformation by providing a mechanism to identify AI-generated images.”

TikTok told Sen. Warner that it requires creators to label AI-generated content and has built a first-of-its-kind tool to empower them to do that, “which has been used by more than 37 million creators globally so far.”

Meta noted that it has changed its approach to identifying and labeling AI-generated content. “This includes labeling a wider range of video, audio, and image content when we detect industry standard AI image indicators or when people disclose that they are uploading AI-generated content,” Meta wrote. “If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label.”

Sen. Warner highlighted that the tech companies’ responses “indicated promising avenues for collaboration, information-sharing, and standards development, but also illuminated areas for significant improvement.”

However, the lawmaker also said there was a “very concerning lack of specificity and resourcing on enforcement” of policies, and noted that the companies have failed to sustain relationship-building with local institutions – including local media, civic institutions, and election officials – to “equip them with resources to identify and address misuse of generative AI tools in their communities.”

“With the election less than 100 days away, we must prioritize real action and robust communication to systematically catalogue harmful AI-generated content,” Sen. Warner said on Aug. 7. “While this technology offers significant promise, generative AI still poses a grave threat to the integrity of our elections, and I’m laser-focused on continuing to work with public and private partners to get ahead of these real and credible threats.”

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.