
Sam Altman has an idea to get AI to ‘love humanity,’ use it to poll billions of people about their value systems

Can AI be made to "love humanity"? Sam Altman is confident that trait could be built into AI systems, though he is not certain.

“I think so,” Altman said when asked the question during an interview with Harvard Business School senior associate dean Debora Spar. 

The question of an AI uprising was once reserved purely for the science fiction of Isaac Asimov or the action films of James Cameron. But since the rise of AI, it has become, if not a hot-button issue, then at least a topic of debate that warrants genuine consideration. What would once have been deemed the musings of a crank is now a genuine regulatory question.

OpenAI’s relationship with the government has been “fairly constructive,” Altman said. He added that a project as far-reaching and vast as developing AI should have been a government project. 

“In a well-functioning society this would be a government project,” Altman said. “Given that it’s not happening, I think it’s better that it’s happening this way as an American project.”

The federal government has yet to make significant progress on AI safety legislation. In California, lawmakers passed a bill that would have held AI developers liable for catastrophic outcomes, such as their systems being used to develop weapons of mass destruction or to attack critical infrastructure. The bill passed the legislature but was vetoed by Governor Gavin Newsom.

Some of the preeminent figures in AI have warned that ensuring it is fully aligned with the good of mankind is a critical challenge. Nobel laureate Geoffrey Hinton, known as the Godfather of AI, has said he cannot "see a path that guarantees safety." Tesla CEO Elon Musk has regularly warned that AI could lead to humanity's extinction. Musk was instrumental in the founding of OpenAI, providing the non-profit with significant funding at its outset, funding for which Altman remains "grateful" despite the fact that Musk is suing him.

Multiple organizations dedicated solely to this question have cropped up in recent years, among them the non-profit Alignment Research Center and the startup Safe Superintelligence, founded by former OpenAI chief scientist Ilya Sutskever.

OpenAI did not respond to a request for comment. 

AI as it is currently designed is well suited to alignment, Altman said. Because of that, he argues, it would be easier than it might seem to ensure AI does not harm humanity. 

“One of the things that has worked surprisingly well has been the ability to align an AI system to behave in a particular way,” he said. “So if we can articulate what that means in a bunch of different cases then, yeah, I think we can get the system to act that way.” 

Altman also has a characteristically novel idea for how exactly OpenAI and other developers could "articulate" the principles and ideals needed to ensure AI remains on our side: use AI to poll the public at large. He suggested asking users of AI chatbots about their values and then using those answers to determine how to align an AI to protect humanity.

“I’m interested in the thought experiment [in which] an AI chats with you for a couple of hours about your value system,” he said. It “does that with me, with everybody else. And then says ‘ok I can’t make everybody happy all the time.’”

Altman hopes that by communicating with and understanding billions of people “at a deep level,” the AI can identify challenges facing society more broadly. From there, AI could reach a consensus about what it would need to do to achieve the public’s general well-being.

OpenAI once had an internal team dedicated to superalignment, tasked with ensuring that future digital superintelligence doesn't go rogue and cause untold harm. In December 2023, the group released an early research paper showing it was working on a process by which one large language model would oversee another. This spring, the team's leaders, Sutskever and Jan Leike, left OpenAI, and the team was disbanded, according to reporting from CNBC at the time.

Leike said he left over increasing disagreements with OpenAI’s leadership about its commitment to safety as the company worked toward artificial general intelligence, a term that refers to an AI that is as smart as a human. 

“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote on X. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

When Leike left, Altman wrote on X that he was “super appreciative of [his] contributions to openai’s [sic] alignment research and safety culture.” 


