A group of OpenAI insiders is calling out what they say is a culture of recklessness and secrecy inside the San Francisco artificial intelligence company, which is striving to build the most powerful AI systems ever created.
The group, which includes nine current and former OpenAI employees, has expressed shared concerns in recent days that the company has not done enough to prevent its AI systems from becoming dangerous.
Members say OpenAI, which started as a nonprofit research lab and burst into public view with the release of ChatGPT in 2022, is prioritizing profits and growth as it attempts to build artificial general intelligence, or AGI, the industry term for a computer program capable of doing anything a human can do.
They also claim that OpenAI used harsh tactics to prevent workers from expressing concerns about the technology, including restrictive non-disparagement agreements that departing employees were asked to sign.
“OpenAI is really excited about creating AGI, and they are recklessly rushing to be first,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
The group published an open letter Tuesday calling on leading AI companies, including OpenAI, to establish greater transparency and more protections for whistleblowers.
Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company, Mr. Kokotajlo said. A current and a former employee of Google DeepMind, Google’s central AI lab, also signed.
Lindsey Held, a spokesperson for OpenAI, said in a statement: “We are proud of our track record in delivering the best and safest AI systems and believe in our science-based approach to managing risk. We agree that rigorous debate is crucial given the importance of this technology, and we will continue to collaborate with governments, civil society and other communities around the world.”
Google declined to comment.
The campaign comes at a difficult time for OpenAI. It’s still recovering from last year’s attempted coup, when company board members voted to fire Sam Altman, the chief executive, over concerns about his candor. Mr. Altman was brought back a few days later and the board was remade with new members.
The company also faces legal battles with content creators who accuse it of stealing copyrighted works to train its models. (The New York Times sued OpenAI and its partner Microsoft for copyright infringement last year.) And its recent unveiling of a hyperrealistic voice assistant was marred by a public spat with the actress Scarlett Johansson, who claimed that OpenAI had imitated her voice without permission.
But nothing has stuck more than the accusation that OpenAI has been too cavalier about safety.
Last month, two senior AI researchers – Ilya Sutskever and Jan Leike – left OpenAI under a cloud. Dr. Sutskever, who sat on OpenAI’s board and voted to fire Mr. Altman, had raised the alarm about the potential risks of powerful AI systems. His departure was seen by some safety-conscious employees as a setback.
So did the departure of Dr. Leike, who, along with Dr. Sutskever, had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful AI models. In a series of public messages announcing his departure, Dr. Leike said he believed “safety culture and processes have taken a back seat to shiny products.”
Neither Dr. Sutskever nor Dr. Leike signed the open letter written by former employees. But their departures have prompted other former OpenAI employees to speak out.
“When I signed up for OpenAI, I didn’t subscribe to this attitude of, ‘Let’s put things out in the world and see what happens and then fix them,'” Mr. Saunders said.
Some of the former employees have ties to effective altruism, a utilitarian-inspired movement that has been concerned in recent years with preventing existential threats from AI. Critics have accused the movement of promoting doomsday scenarios about technology, such as the idea that an unchecked AI system could take over and wipe out humanity.
Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was tasked with forecasting AI progress. He was not, to say the least, optimistic.
In his previous job at an AI safety organization, he predicted that AGI could arrive in 2050. But after seeing how quickly AI was improving, he shortened his timeline. He now estimates there is a 50% chance AGI will arrive by 2027, just three years away.
He also estimates that the probability that an advanced AI will destroy or catastrophically harm humanity – a grim statistic often abbreviated to “p(doom)” in AI circles – is 70%.
At OpenAI, Mr. Kokotajlo found that although the company had safety protocols in place – including a joint effort with Microsoft known as the “deployment safety board”, which was supposed to review new models for major risks before they were publicly released – they rarely seemed to slow anything down.
For example, he said, in 2022 Microsoft began quietly testing in India a new version of its Bing search engine that some OpenAI employees believed contained a then-unreleased version of GPT-4, OpenAI’s state-of-the-art large language model. Mr. Kokotajlo said he was told that Microsoft had not gotten the safety board’s approval before testing the new model, and that after the board learned of the tests – via a series of reports that Bing was acting strangely toward users – it did nothing to stop Microsoft from rolling it out more widely.
A Microsoft spokesperson, Frank Shaw, disputed the claims. He said the tests in India had not used GPT-4 or any OpenAI models. The first time Microsoft released GPT-4-based technology was in early 2023, he said, and it was reviewed and approved by a predecessor of the safety board.
Eventually, Mr. Kokotajlo said, he became so concerned that last year he told Mr. Altman that the company should “pivot to safety” and devote more time and resources to guarding against the risks of AI rather than charging ahead to improve its models. He said Mr. Altman had claimed to agree with him, but that not much had changed.
In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI would behave responsibly” as its systems approach human-level intelligence.
“The world is not ready, and we are not ready,” Mr. Kokotajlo wrote. “And I fear that we will rush anyway and rationalize our actions.”
OpenAI said last week that it had begun training a new flagship AI model and was forming a new safety and security committee to explore risks associated with the new model and other future technologies.
Upon leaving, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict non-disparagement clause prohibiting them from saying negative things about the company, or else risk losing their vested equity.
Many employees could lose millions of dollars if they refuse to sign. Mr. Kokotajlo’s vested equity was worth about $1.7 million, he said, which represented the vast majority of his net worth, and he was willing to give it all up.
(A minor firestorm erupted last month after Vox reported news of these agreements. In response, OpenAI said that it had never clawed back vested equity from former employees and would not do so. Mr. Altman said he was “genuinely embarrassed” that he had been unaware of the agreements, and the company said it would remove non-disparagement clauses from its standard paperwork and release former employees from their agreements.)
In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to the use of non-disparagement and non-disclosure agreements within OpenAI and other AI companies.
“Broad confidentiality agreements prevent us from expressing our concerns except to the very companies that are failing to address these issues,” they write.
They also call on AI companies to “support a culture of open criticism” and to establish a process for employees to raise safety concerns anonymously.
They retained the services of a pro bono lawyer, Lawrence Lessig, a prominent jurist and activist. Mr. Lessig also advised Frances Haugen, a former Facebook employee turned whistleblower who accused the company of putting profits over safety.
In an interview, Mr. Lessig said that while traditional whistleblower protections generally applied to reports of illegal activity, it was important that employees of AI companies be able to discuss risks and potential harms freely, given the importance of the technology.
“Employees are an important line of defense, and if they can’t express themselves freely without retaliation, that channel is going to be shut down,” he said.
Ms. Held, the OpenAI representative, said the company had “means for employees to voice concerns,” including an anonymous integrity hotline.
Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful AI systems. So they are asking lawmakers to regulate the industry as well.
“There needs to be some sort of democratically accountable and transparent governance structure in charge of this process,” Mr. Kokotajlo said, “instead of just a few different private companies fighting against each other and keeping it all secret.”