Samuel Altman, chief executive officer (CEO) of OpenAI – the company that created the famed ChatGPT AI tool – testified before Congress about the need for quick and robust AI regulations.

During the hearing on May 16, Altman made it abundantly clear that Congress still has time to mitigate any possible harm caused by AI tools but must act fast.

“We think that regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models,” Altman said.

During the hearing, Sen. John Neely Kennedy, R-La., asked Altman what types of regulation are needed to defend against misuse of the new AI tools. Altman recommended three key regulations:

“Number one, I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure clients with safety standards. Number two, I would create a set of safety standards focused on …the dangerous capability evaluations,” said Altman. “Third, I would require independent audits. So not just from the company or the agency, but experts who can say the model is or isn’t in compliance with the state and safety thresholds.”

The hearing comes as AI tools have exploded into the public sphere and shown revolutionary capabilities. Altman pushed the committee to begin building and negotiating international relations with foreign partners to make AI regulation effective.

“The U.S. should lead here and do things first, but to be effective, we do need something global – there are paths [for] the U.S. to set some international standards that other countries would need to collaborate with and be a part of,” said Altman.

Although current AI tools have been shown to potentially disrupt various fields of work, there is still great concern about future AI systems, whose capabilities could be disruptive enough to make today's tools seem insignificant.

At the same hearing, Gary Marcus, professor emeritus at New York University, noted that current AI is still in its infancy and that the real disruption will come in the years ahead.

“When we look back at the AI of today in 20 years, we’ll be like, wow, that stuff was unreliable. It couldn’t do planning which is an important technical aspect – But when we get to Artificial General Intelligence (AGI)… that is going to have I think, profound effects on labor,” said Marcus.

Other suggestions offered during the hearing included a precision regulatory approach from IBM Chief Privacy and Trust Officer Christina Montgomery.

“For us, this comes back to the issue of trust, trust in the technology, and trust is our license to operate – that’s why we’re here advocating for a precision regulatory approach. So, we think that AI should be regulated at the point of risk, essentially. And that’s the point at which technology meets society,” said Montgomery.

Jose Rascon
Jose Rascon is a MeriTalk Staff Reporter covering the intersection of government and technology.