OpenAI's Sam Altman says an international agency should monitor the 'most powerful' AI to ensure 'reasonable safety'

2024-05-12 20:11:56+00:00

OpenAI CEO Sam Altman wants an international agency to regulate artificial intelligence. Altman said an agency approach would be better than inflexible laws given AI's rapid evolution. He compared AI to airplanes, emphasizing the need for a safety testing framework.

OpenAI CEO Sam Altman says he's keen on regulating AI with an international agency.

"I think there will come a time in the not-so-distant future, like we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm," Altman said on the All-In podcast on Friday.

He believes those systems will have "negative impact way beyond the realm of one country" and wants to see them regulated by "an international agency looking at the most powerful systems and ensuring reasonable safety testing."

In Altman's view, landing on the appropriate level of oversight will be a balancing act.

"I'd be super nervous about regulatory overreach here. I think we get this wrong by doing way too much or a little too much. I think we can get this wrong by doing not enough," he said.

Legislation to regulate the fast-changing technology is already underway. In March, the EU approved the Artificial Intelligence Act, which will categorize AI risk and ban unacceptable use cases. President Joe Biden also signed an executive order last year calling for greater transparency from the world's biggest AI models. And this year the state of California has been leading the charge on regulating AI as lawmakers consider more than 30 bills, according to Bloomberg.

But Altman argued that an international agency would offer more flexibility than national legislation, and that's important given how quickly AI evolves.

"The reason I've pushed for an agency-based approach for kind of like the big-picture stuff and not like a write-it-in-law is in 12 months it will all be written wrong," he said.

He thinks that lawmakers, even if they're "true world experts," probably can't write policies that will appropriately regulate events 12 to 24 months from now.

In simple terms, Altman thinks AI should be regulated like an airplane.

"When like significant loss of human life is a serious possibility, like airplanes, or any number of other examples where I think we're happy to have some sort of testing framework," he said. "I don't think about an airplane when I get on it. I just assume it's going to be safe."