According to the co-founder of Google Brain, big tech corporations are manufacturing the idea that AI will wipe out humanity.
Leading AI experts are arguing over claims that Big Tech is deliberately stoking fear.
Renowned AI researcher and Google Brain co-founder Professor Andrew Ng has shared some intriguing thoughts on the narrative surrounding AI and human extinction. He asserts that the belief that AI could wipe out humanity is a "bad idea" that major tech companies deliberately promote. Their hidden agenda? To usher in strict regulations that would stifle competition in the rapidly developing AI field.
According to Professor Ng, combining the fear of AI's devastating potential with a push for onerous licensing requirements could produce terrible policy proposals. He notes that such rules would serve the interests of large tech companies seeking to shut out competition from open-source AI, while also hampering innovation.
This viewpoint points to a covert agenda in the tech industry, where lobbyists are weaponizing the fear of AI-caused human extinction to push for regulation that would be particularly damaging to the free and open-source AI community.
Although Professor Ng agrees that regulation of AI is necessary, he cautions against the direction legislation is currently taking, arguing that some of the proposed approaches would do more harm than good. He believes thoughtful legislation should prioritize transparency from tech companies, a critical component in preventing AI catastrophes. It is an appeal to ensure the ethical advancement of artificial intelligence by learning from past decades, without compromising creativity or competition at every step.
A fierce debate has broken out among the heavyweights of the AI field, with Google Brain co-founder Andrew Ng and Yann LeCun, chief AI scientist at Meta, at the center of it. The point of disagreement? Whether the Big Tech industry is shamelessly exaggerating the likelihood that artificial intelligence (AI) will wipe out humanity.
Andrew Ng stirred up controversy when he hinted that certain big corporations could be inciting panic about AI's existential peril in order to avoid competing with freely available open-source efforts. Although he avoided naming individuals, Elon Musk, Sam Altman of OpenAI, Demis Hassabis of DeepMind, Geoffrey Hinton, and Yoshua Bengio have all publicly expressed worries about the dangers of AI.
Geoffrey Hinton, the AI pioneer and former Google employee, pushed back against Ng, stating that he left the company precisely so he could speak openly about the existential risk posed by AI. Yann LeCun of Meta, meanwhile, sided with Ng, claiming that such scare tactics would hamper the open-source AI community.
The disagreement has broadened the discussion of how AI will affect society at large, a debate that ChatGPT and other sophisticated AI tools have already intensified.
Meredith Whittaker, president of Signal and chief advisor to the AI Now Institute, offered a counterpoint to the "AI doom" narrative during the dispute. She asserted that those warning of AI as an existential danger were endorsing a "quasi-religious mindset" unsupported by empirical evidence. Whittaker claims the tech industry is exploiting this mindset to promote its products and deflect attention from more pressing real-world problems, such as intellectual property infringement and job losses.
At its core, this is a clash of ideologies within the field of artificial intelligence, with implications for the future of free and open-source AI development, for the intentions of the biggest names in technology, and for how the genuine risks of AI should be weighed.
This article highlights a conflict of viewpoints among well-known AI experts. Google Brain co-founder Andrew Ng suggested that Big Tech firms may be inflating the existential concerns around AI for their own benefit, a view that has drawn Yann LeCun and Geoffrey Hinton into contentious debate. Hinton contends that he left Google in order to voice his worries about AI's existential threat openly, whereas LeCun agrees with Ng and highlights the potential fallout for the open-source AI community. The dispute has drawn attention to ongoing debates over the societal implications of AI and the motivations of tech companies. In the end, it raises significant questions about the direction of AI development and how to strike a balance between legitimate worries and real possibilities.

