
Geoffrey Hinton Proud of Student Who Fired Sam Altman: Nobel Laureate Speaks Out on AI Safety

Is the "Godfather of AI" turning against his own creation? The shocking truth revealed!

By Next Koding · Published about a year ago · 3 min read
Warning from the Godfather of AI About the Dangers of Forgetting Social Responsibility

In a surprising turn of events, Geoffrey Hinton, widely regarded as the "Godfather of AI," has been awarded the 2024 Nobel Prize in Physics, sparking both celebration and controversy in the scientific community. Hinton's groundbreaking work on artificial neural networks has revolutionized the field of artificial intelligence, but his recent comments about OpenAI CEO Sam Altman and AI safety have sent shockwaves through the tech world.

Hinton, whose pioneering research laid the foundation for modern AI, shares the prestigious award with physicist John J. Hopfield for work on artificial neural networks, which mimic the human brain's structure and function. This achievement marks a significant milestone in the recognition of AI research within the broader scientific community. However, the decision to award a computer scientist the Nobel Prize in Physics has raised eyebrows among some traditionalists in the field.

In a candid interview following the announcement, Hinton expressed pride in one of his former students, widely understood to be OpenAI co-founder Ilya Sutskever, who played a role in the temporary dismissal of Sam Altman from OpenAI in November 2023. This revelation has brought to light the ongoing debate about the direction and ethics of AI development, particularly in high-profile organizations like OpenAI.

Hinton stated, "I'm particularly proud of the fact that one of my students fired Sam Altman." This bold statement underscores the growing concern among AI experts about the prioritization of profit over safety in the rapidly evolving field of artificial intelligence.

The temporary ouster of Altman in November 2023, which was reversed within about five days under pressure from employees and investors, was a significant event in the AI industry. Hinton's support for the action stems from his belief that Altman's leadership at OpenAI had shifted focus from the original mission of developing safe and beneficial AI to a more profit-driven approach.

"OpenAI was set up with a big emphasis on safety," Hinton explained. "Its primary objective was to develop artificial general intelligence and ensure that it was safe. But over time, it turned out that Sam Altman was much less concerned with safety than with profits, and I think that's unfortunate."

This criticism from Hinton, a respected figure in the AI community, highlights the growing tension between the rapid commercialization of AI technologies and the need for careful, safety-focused development. It also raises questions about the responsibility of AI leaders in shaping the future of this powerful technology.

Hinton's concerns about AI safety are not unfounded. He points out several immediate risks, including the creation of fake videos that could corrupt elections and the dramatic increase in sophisticated phishing attacks. "Last year, for example, there was a 1200% increase in the number of phishing attacks," Hinton noted, emphasizing how AI has made it easier to create convincing scams.

The Nobel laureate also touched on longer-term concerns about AI surpassing human intelligence. He estimates that within the next 20 years, AI could become more intelligent than humans, a prospect that both excites and alarms experts in the field.

To address these challenges, Hinton advocates for increased focus on AI safety research. He suggests that governments should compel large tech companies to allocate substantial resources to safety studies, proposing that "maybe a third of the effort goes into safety because if this stuff becomes unsafe, that's extremely bad."

While Hinton's Nobel Prize is a cause for celebration in the AI community, it has also sparked debate among physicists. Some argue that awarding the physics prize to a computer scientist blurs the lines between disciplines. However, supporters contend that Hinton's work has profound implications for our understanding of intelligence and information processing, topics deeply rooted in physics.

Reflecting on his journey, Hinton shared advice for aspiring researchers: "If you believe in something, don't give up on it until you understand why that belief is wrong." This perseverance was crucial in Hinton's own career, as he continued to work on neural networks even when they were considered a dead end by many in the field.

The controversy surrounding Hinton's Nobel Prize and his outspoken views on AI safety serve as a reminder of the complex challenges facing the AI industry. As artificial intelligence continues to advance at a rapid pace, the balance between innovation and safety becomes increasingly critical.

Hinton's story is not just about scientific achievement; it's a call to action for the AI community to prioritize ethical development and long-term safety. As we stand on the brink of potentially world-changing AI technologies, the insights and warnings from pioneers like Hinton become more important than ever.

The coming years will likely see increased scrutiny of AI development practices and a growing emphasis on safety research. Hinton's Nobel Prize may well be remembered not just as a recognition of past achievements, but as a pivotal moment that helped shape the future of AI for the better.

