
Nobel Prize-winning AI godfather Geoffrey Hinton says he is proud that his student fired OpenAI boss Sam Altman

Only a handful of scientists on Earth will ever have the honor of winning a Nobel Prize, a once-in-a-lifetime achievement that will forever be etched in the annals of history.

What made it all the more extraordinary was that Geoffrey Hinton barely spoke for a minute before harshly criticizing the CEO of OpenAI at an ad hoc press conference held in honor of his award.

Despite having slept only two hours, the visibly humbled computer scientist said he had no idea he had even been nominated for the award. After thanking two of his key collaborators over the years – Terry Sejnowski and the late David Rumelhart, both of whom he called “mentors” – Hinton acknowledged the role his students at the University of Toronto played over the years in realizing his life’s work.

“They’ve done great things,” he said Tuesday. “I am particularly proud that one of my students fired Sam Altman. And I think I’d better leave it at that.”

The boardroom coup is approaching its first anniversary

Hinton was referring to Ilya Sutskever. The former chief scientist at OpenAI joined Helen Toner and two other members of the controlling nonprofit board last November in firing their CEO in a spectacular coup. Sutskever quickly regretted his role in plunging OpenAI into crisis, and Altman was returned to his post within days.

Hinton and Sutskever had teamed up with Alex Krizhevsky in 2012 to develop a neural network that could identify objects in images with an accuracy that was unprecedented at the time. Known as “AlexNet,” it is often referred to as the Big Bang of AI.

Often referred to as one of the godfathers of artificial intelligence, Hinton praised the work of his colleagues Yoshua Bengio and Yann LeCun before repeatedly making self-deprecating remarks. This included admitting that as a young student he had given up studying physics – the subject in which he was recognized by the Nobel Committee – because he could not cope with mathematics.

News of Hinton’s award comes just weeks before the first anniversary of Altman’s brief, surprising and ultimately unsuccessful ouster – as well as the second anniversary of ChatGPT’s launch in late November 2022.

OpenAI’s generative AI chatbot sparked a wave of interest in the technology as the general public first began to realize that machines could surpass human intelligence within a generation.

“Many good researchers believe that AI will become smarter than us at some point in the next 20 years, and we need to think carefully about what happens then,” Hinton said Tuesday.

Concerns About AI Safety

Altman is a controversial figure in the AI community. Former OpenAI board member Helen Toner has called him a liar, and a now-departed team leader accused him of depriving the company’s AI safety team of resources.

Altman is currently trying to shed OpenAI’s nonprofit status as he moves to monetize its technology, causing deep divisions within the organization. This has sparked an exodus of researchers focused on ensuring artificial general intelligence serves the interests of humanity as the still-dominant species on Earth.

When asked about his derogatory remark toward Altman early in the briefing, Hinton explained his reasoning.

“Over time, it became apparent that Sam Altman was much less concerned with safety than with profits,” he said, “and I find that unfortunate.”

Fortune has reached out to OpenAI for comment.

Hinton calls for urgent research into AI safety

Luminaries like Hinton, 76, worry that putting profit over ethics at this point is inherently dangerous. It is already difficult for scientists to predict how today’s most advanced AI models, with their trillions of parameters, will actually arrive at their results. In effect, they become black boxes, and once that happens, it becomes increasingly difficult to ensure that humans maintain supremacy.

“If we make things smarter than we are, no one really knows if we can control them,” said Hinton, who vowed to devote his efforts to advocating for AI safety rather than advancing frontier research.

This risk of unknown unknowns is why California lawmakers proposed an AI safety bill that was the first of its kind in the United States. However, influential Silicon Valley investors like Marc Andreessen lobbied heavily against it, and Gov. Gavin Newsom ultimately vetoed it last month.

When asked about the potentially catastrophic risk posed by runaway AI, Hinton admitted there was no certainty.

“We don’t know how to avoid them all right now,” he said. “That’s why we urgently need more research.”
