Thursday, April 25

The paradox at the heart of Elon Musk’s OpenAI lawsuit

It would be easy to dismiss Elon Musk’s lawsuit against OpenAI as a case of sour grapes.

Mr. Musk sued OpenAI this week, accusing the company of breaching its founding agreement and betraying its founding principles. According to him, OpenAI was established as a nonprofit organization charged with building powerful AI systems for the benefit of humanity and freely sharing its research with the public. But Mr. Musk says OpenAI broke that promise by creating a for-profit subsidiary that has taken billions of dollars in investment from Microsoft.

OpenAI declined to comment on the lawsuit. In a memo sent to employees on Friday, Jason Kwon, the company’s chief strategy officer, denied Mr. Musk’s claims and said: “We believe the claims in this suit may stem from Elon’s regrets about not being involved with the company today,” according to a copy of the memo I viewed.

On one level, the lawsuit reeks of personal beef. Mr. Musk — who co-founded OpenAI in 2015 with a group of other tech heavyweights and provided much of its initial funding, but left in 2018 after disputes with the company’s leadership — clearly resents being left out of the conversation about AI. His own AI projects have not yet gained as much traction as ChatGPT, OpenAI’s flagship chatbot. And Mr. Musk’s falling out with Sam Altman, OpenAI’s chief executive, is well documented.

But amid all this animosity, there is one point worth highlighting, because it illustrates a paradox at the heart of much of the current conversation about AI — and a place where OpenAI has really been talking out of both sides of its mouth, insisting both that its AI systems are incredibly powerful and that they fall far short of matching human intelligence.

The claim centers on a term known as AGI, or “artificial general intelligence.” Defining what constitutes AGI is notoriously tricky, although most people would agree that it means an AI system capable of doing most or all of the things the human brain can do. Mr. Altman has defined AGI as “the equivalent of a median human that you could hire as a co-worker,” while OpenAI itself defines AGI as “a highly autonomous system that outperforms humans at most economically valuable work.”

Most AI company executives say that not only is AGI possible to build, but that it is imminent. Demis Hassabis, the chief executive of Google DeepMind, told me in a recent podcast interview that he thinks AGI could arrive as early as 2030. Mr. Altman has said that AGI may be only four or five years away.

Building AGI is the explicit goal of OpenAI, and there are plenty of reasons to want to get there before anyone else. A true AGI would be an incredibly valuable resource, capable of automating huge swaths of human work and making money for its creators. It’s also the kind of bright, bold goal that investors like to fund and that helps AI labs recruit the best engineers and researchers.

But AGI could also be dangerous if it is able to outsmart humans, or if it becomes deceptive or misaligned with human values. The people who started OpenAI, including Mr. Musk, worried that an AGI would be too powerful to be owned by a single entity, and that if they were going to build one, they would need to change the control structure around it, to prevent it from doing harm or concentrating too much wealth and power in the hands of a single company.

That’s why when OpenAI partnered with Microsoft, it specifically granted the tech giant a license that only applied to “pre-AGI” technologies. (The New York Times sued Microsoft and OpenAI for using copyrighted works.)

Under the terms of the agreement, if OpenAI ever built something that met the definition of AGI — as determined by OpenAI’s nonprofit board — Microsoft’s license would no longer apply, and OpenAI’s board could do whatever it wanted to ensure that OpenAI’s AGI benefited all of humanity. That could mean many things, including open-sourcing the technology or shutting it off entirely.

Most AI commentators believe that current state-of-the-art AI models cannot be called AGI because they lack sophisticated reasoning skills and frequently make stupid mistakes.

But in his legal filing, Mr. Musk makes an unusual argument. He argues that OpenAI has already achieved AGI with its GPT-4 language model, which was released last year, and that the company’s future technology will even more clearly qualify as AGI.

“On information and belief, GPT-4 is an AGI algorithm, and hence expressly outside the scope of Microsoft’s September 2020 exclusive license with OpenAI,” the complaint states.

What Mr. Musk is arguing here is a little complicated. Basically, he is saying that because OpenAI has reached AGI with GPT-4, it is no longer allowed to license the technology to Microsoft, and its board is required to make the company’s technology and research more freely available.

His complaint cites the now-infamous “Sparks of AGI” paper by a Microsoft research team last year, which argued that GPT-4 demonstrated early hints of general intelligence, including signs of human-level reasoning.

But the complaint also notes that OpenAI’s board is unlikely to declare that its AI systems actually qualify as AGI, because as soon as it did, it would have to make big changes to how it deploys and profits from the technology.

Additionally, he notes that Microsoft — which now has a nonvoting observer seat on OpenAI’s board, after an upheaval last year that resulted in the brief firing of Mr. Altman — has a strong incentive to deny that OpenAI’s technology qualifies as AGI. That would end its license to use the technology in its products, and jeopardize potentially huge profits.

“Given Microsoft’s enormous financial interest in keeping the gate closed to the public, OpenAI, Inc.’s new captured, conflicted, and compliant board will have every reason to delay ever making a finding that OpenAI has attained AGI,” the complaint states. “To the contrary, OpenAI’s attainment of AGI, like ‘Tomorrow’ in ‘Annie,’ will always be a day away.”

Considering his history of questionable litigation, it’s easy to question Mr. Musk’s motives here. And as the head of a competing AI start-up, it’s no surprise that he would want to tie up OpenAI in messy litigation. But his lawsuit points to a real dilemma for OpenAI.

Like its competitors, OpenAI is desperate to be seen as a leader in the race to create AGI, and it has a vested interest in convincing investors, business partners, and the public that its systems are improving at a breakneck pace.

But because of the terms of its deal with Microsoft, OpenAI’s investors and executives may never want to admit that its technology actually qualifies as AGI, if and when it does.

That has put Mr. Musk in the odd position of asking a jury to weigh in on what constitutes AGI, and to decide whether OpenAI’s technology has met that threshold.

The lawsuit has also put OpenAI in the odd position of downplaying its own systems’ abilities, while continuing to fuel anticipation that a big AGI breakthrough is right around the corner.

“GPT-4 is not an AGI,” OpenAI’s Mr. Kwon wrote in the memo to employees on Friday. “It is capable of solving small tasks in many jobs, but the ratio of work done by a human to work done by GPT-4 in the economy remains incredibly high.”

The personal feud fueling Mr. Musk’s complaint has led some people to view it as a frivolous suit — one commentator compared it to “suing your ex because she remodeled the house after your divorce” — that will be quickly dismissed.

But even if it is dismissed, Mr. Musk’s lawsuit raises important questions: Who gets to decide when something qualifies as AGI? Are tech companies exaggerating or sandbagging (or both) when they describe the capabilities of their systems? And what incentives lie behind the various claims about how close to or far from AGI we might be?

A lawsuit brought by a grudge-bearing billionaire is probably not the right way to resolve those questions. But they are good ones to ask, especially as progress in AI continues to accelerate.