Former Google CEO and chairman Eric Schmidt this week reiterated his call for global-level regulation of artificial intelligence (AI) development, and said the era of AI systems capable of setting their own “objective functions” may be only five to 10 years away.

Schmidt – whose Schmidt Futures philanthropic venture funds science and technology research – made those points during an Axios AI event on Nov. 28.

Global Regulation Wanted

The former Google official discussed his support for global regulation of AI technology development through organizations akin to the Intergovernmental Panel on Climate Change (IPCC) created by the United Nations in 1988, and the International Atomic Energy Agency (IAEA) created in 1957 as an autonomous organization within the United Nations system.

Asked about the feasibility of global-level regulation, Schmidt replied, “Tech people, along with myself, have been meeting about this for a year, and the narrative goes something like this – we are moving well past regulatory understanding, government understanding of what is possible. We accept that.”

“This is the first time where the senior leaders [in the tech arena] who are engineers have basically said we want regulation,” he continued. “We want regulation, but we want it only in the following ways – which, you know, never works in Washington.”

“There is a complete agreement that there are systems and scenarios which are dangerous,” he said. Currently, he added, “in all of the big [AI] models … all of them have groups that basically look at the guardrails and they put constraints on it.” Examples of those constraints, he said, include “they say thou shalt not talk about death, thou shalt not talk about killing.”

He also offered by way of example that AI developer Anthropic “actually trained the model with its own constitution, which they didn’t just write themselves, they hired a bunch of people to design a constitution for an AI.”

Despite those steps, he said, “The problem here is none of us believe that this is strong enough.”

“And if you look at the history, we have one experience with horrific things – Nagasaki and Hiroshima of course – and during that time, we ended up with treaties that took 18 years or so … to get to a treaty over test bans.”

But in the case of AI development, he said, “We don’t have that kind of time. So, our opinion at the moment is the best path is to build some IPCC-like environment globally that allows accurate information of what’s going on to the policymakers.”

Asked about the likelihood of that happening, he said, “So far, we’re on a whim … [but] the first taste of winning is there.”

He pointed to the AI Safety Summit event in early November hosted by the United Kingdom government, which produced agreement from 28 countries on The Bletchley Declaration on AI safety. That declaration, the U.K. government said, recognizes “the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community.”

Schmidt said that declaration features “very sensible guidelines,” and he also credited the Biden administration’s Oct. 30 release of its AI executive order, which he said states that “each of the aspects of our government are to get organized around this.”

But even with that progress, he added, “If you look at what we’re doing collectively, it’s not enough. We’re not claiming this as the end solution, but it’s the beginning.”

Systems Going ‘Objective’

“I want to emphasize that what you see in the media is you see people who don’t really understand this get completely panicked, like we’re going to build bombs and kill everybody, which we’re precisely not doing, so we’re clear,” Schmidt said.

“It is true that eventually we will have these problems and I’m now convinced, and here’s what I want to say clearly, we’re going to be fine for some years and the reason we’re going to be fine is you’ve got lots of smart people, lots of money, lots of consequences, people paying attention,” he said.

“When you become not fine is when the computers can begin to set their own objective function,” he said.

Asked when that day would come, Schmidt replied, “It depends.”

“The answer two years ago was 20 years from two years ago, and it had always been 20 years from two years ago,” he said.

“It now looks like it’s 10 or less from today … and there are people who believe it’s two, three, four years … I’m going to say it’s five to 10,” he said.
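By way of background, an “objective function” is the goal a machine learning system is trained to optimize. In systems deployed today, that goal is written and fixed by human developers; training only adjusts the system’s internal parameters to pursue it. The following minimal Python sketch – illustrative only, with hypothetical names, and not drawn from Schmidt’s remarks – shows that arrangement.

# Illustrative sketch only, not drawn from Schmidt's remarks: in machine
# learning systems today, the objective function is written and fixed by
# human developers, and training only tunes the model's parameters.

def objective(prediction: float, target: float) -> float:
    # The human-chosen goal: make the squared error small.
    return (prediction - target) ** 2

def train_step(weight: float, x: float, target: float, lr: float = 0.1) -> float:
    # Gradient of (weight * x - target)**2 with respect to weight.
    gradient = 2 * (weight * x - target) * x
    # Nudge the weight toward the goal humans set; the goal itself never changes.
    return weight - lr * gradient

weight = 0.0
for _ in range(50):
    weight = train_step(weight, x=1.0, target=3.0)
print(round(weight, 2))  # approaches 3.0, the target humans specified

Schmidt’s concern is the point at which a system could choose or change that goal on its own, rather than merely optimizing one that people specified.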

Asked to look to the 2028 time period, Schmidt replied, “Today, even if the computer AI system is doing something bad, and we agree it’s bad, a human made it do it.”

“In the next year, this conversation is not going to be very relevant because you’re going to be awash in copyright, misinformation, election interference, and so forth – it’s going to drive everybody crazy – globally, by the way, not just in the U.S. It’s sort of terrible,” he said. “We can’t really fix that one because that technology is all out.”

“I worry about … the point at which the computer can start to make its own decisions to do things,” he said. “And a simple example would be imagine you have a supercomputer, you’ve got a whole bunch of guards around it and so forth, and one day it discovers access to weapons.”

“We don’t know that such a system would tell us the truth. We’ll start working through those scenarios,” he said. “So maybe it’s more important to build a system that tells you the truth before you allow it access to weapons.”

“It’s all theoretical,” Schmidt said. “But these are the concerns that the technical people will talk about, because the loss of human agency is a really big deal, and possible … many people think it’s probable.”

Compelling Use Cases

Talking about the current state of AI development, Schmidt said, “I think the rough statement is that we are building systems that can do things that humans can’t do, but only things that humans can initiate.”

He offered two examples of that opportunity – the first to provide AI-based medical instruction in local languages around the globe, and the second to provide AI-based tutoring help to students.

“Why do we not have an AI doctor that works with nurse practitioners and medical professionals all around the world to bring initially basic medical knowledge – but then ultimately the kind of standard of medical knowledge that we have here in our country – to the entire world in their local languages?” he asked. “It’s an obvious product to build.”

On the teaching front, he asked, “Why don’t you just take every student in the world and give them an AI tutor, which is not a substitute for a teacher but works with the teachers in their language to bring them up and learn in whatever way they learn best to their ultimate potential?”

“I defy you to argue that an AI doctor for the world and an AI tutor is net negative,” he said, adding, “It just has to be good. Smarter, healthier people has got to be good for our future.”

John Curran
John Curran is MeriTalk's Managing Editor covering the intersection of government and technology.