Rep. Yvette Clarke, D-N.Y., issued a call today to her fellow lawmakers to approve artificial intelligence (AI) regulatory legislation that will ensure AI users are protected from harms, especially those in marginalized communities.   

“Today we find ourselves amid a technological revolution, the new age of AI. [And] as we seek to innovate, we must also work to enshrine values into the technology of the future,” Rep. Clarke said today during her keynote speech at a Center for Civil Rights and Technology event.  

“Technology cannot be innovative if it’s leaving behind the marginalized,” she emphasized.  

AI systems use algorithms that routinely make decisions about insurance, housing, employment, healthcare, banking, and policing, and biases that become deeply embedded in those algorithms pose grave risks to citizens.  

AI systems can produce unfair predictions and decisions that have a “critical impact on people’s lives based on sometimes flawed or biased datasets” and there is little insight into how those critical decisions are made, Rep. Clarke said.  

“Bias in AI is the civil rights issue of our time … We must take care to ensure the harms of the past are not entrenched in the technology of the future,” she said, adding it is imperative that Congress pass legislation to regulate how AI algorithms make critical decisions.   

Alongside Sens. Ron Wyden, D-Ore., and Cory Booker, D-N.J., in the Senate, Rep. Clarke introduced on the House side of the Capitol the Algorithmic Accountability Act of 2023, which aims to create protections for individuals subject to algorithmic decision-making in areas like housing, credit, education, and more. If it becomes law, the legislation would require certain companies to conduct impact assessments of augmented critical decision processes and automated decision systems.  

“In other words, this bill would require companies to internally audit whether the algorithms they intend to use or deploy are not leading to biased outcomes that can negatively impact ordinary citizens’ lives,” Rep. Clarke said.  

In addition, Rep. Clarke urged her colleagues to also pass legislation that would establish a clear standard for identifying deep fakes and provide prosecutors, regulators, and victims with the necessary tools to combat fake and manipulated content. 

“With few laws to manage the spread of the technology, we stand at the precipice of a new era of disinformation warfare aided by the use of new AI tools,” Rep. Clarke said.  

The REAL Political Advertisement Act, introduced by Rep. Clarke last year, would require disclosure of AI-generated content in political ads.  

The DEEP FAKES Accountability Act, also introduced by Rep. Clarke in 2021, would give victims of deep fakes the tools to fight back. This legislation would require both developers of deep-fake software and online platforms where deep fakes may be circulated to make clear when an image, video, or audio clip has been significantly digitally altered. 

“If we truly still believe that every single American should have the opportunity to achieve their highest aspirations, then it’s time we ensure the core values of democracy, equity and transparency are at the heart of the technologies of the future,” Rep. Clarke said.  

Lisbeth Perez
Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.