AI Has Reached Its Breaking Point
Exploring the Challenges and Limitations of Artificial Intelligence Today

Several researchers and AI specialists have pointed out that for generative AI to keep improving, the quantity of training data, processing power, and energy consumption would all need to expand dramatically. That simply isn't feasible, and even back in April, OpenAI and other generative AI startups appeared to be butting up against hard limits in all three areas. Subsequent reporting, interviews, and research have now confirmed what I and many others suspected. What this implies for the AI business, and for the economy at large, could be disastrous.
The key stories came from The Information and Reuters. The Information published a piece revealing that OpenAI's next text model, dubbed Orion, is only modestly better than its existing GPT-4o model despite being trained on a significantly bigger dataset. The outlet reported that "some researchers at the company believe Orion isn't reliably better than its predecessor in handling certain tasks" and that "Orion performs better at language tasks but may not outperform previous models at tasks such as coding."
According to The Information, Orion reached GPT-4 levels of competence after being trained on only 20% of its training data, then scarcely progressed after that. Because AI training methods have reportedly changed little in recent years, we can reasonably infer that Orion's training dataset is roughly five times larger than GPT-4's, yet the model is not appreciably better. This is the diminishing returns problem in action.
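To make that intuition concrete, here is a minimal sketch of how a logarithmic capability curve behaves. The curve shape and constants are my own illustrative assumptions, not OpenAI's actual numbers; the point is simply that the first fifth of the data buys most of the gain.

```python
import math

# Toy capability curve: score grows with the log of the data used.
# The constants are made up for illustration; real scaling laws
# (Kaplan et al., Chinchilla) are power laws fit to measured loss.
def capability(data_fraction: float) -> float:
    """Capability score for a model trained on a fraction of the full dataset."""
    return math.log10(1 + 99 * data_fraction)  # 0.0 -> 0.0, 1.0 -> 2.0

full = capability(1.0)
fifth = capability(0.2)
print(f"20% of the data yields {fifth / full:.0%} of the full-data capability")
# Prints 66%: the first fifth of the data buys two thirds of the gain.
```

Real scaling laws differ in their exact form, but they share this qualitative shape: each doubling of data buys less improvement than the last.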
To drive this concern home even harder, Reuters interviewed Ilya Sutskever, the OpenAI co-founder who recently left the company. In the interview, Sutskever said that results from efforts to scale up models have plateaued. In his view, AI can no longer be made better simply by feeding it more data.
Recent findings corroborate Sutskever's view and explain why Orion ultimately disappoints. One of these studies indicated that as AI models are given more data and grow bigger, they don't get broadly better; they improve at specialized tasks at the expense of their wider usefulness. You can see this in OpenAI's o1 model, which is bigger than GPT-4o and better at solving mathematical problems but not as good at writing effectively. You can also see it in Tesla's FSD: as the software got stronger at handling increasingly complicated traffic situations, it reportedly started to lose fundamental driving abilities and began clipping curbs.
Yet another study determined that, at the present pace, generative AI companies like OpenAI will run out of high-quality new data to train their models on by 2026. Making AIs better simply by scaling these models up won't be a feasible option for much longer. Indeed, some have speculated that the reason Orion is underperforming is that OpenAI can't gather enough data to make it meaningfully better than GPT-4o.
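The logic behind that 2026 figure is simple exponential extrapolation. Here is a back-of-the-envelope sketch; every number below is an illustrative assumption of mine, not the cited study's data.

```python
# Back-of-the-envelope sketch of the "running out of data" argument.
# Every figure here is an illustrative assumption, not the study's numbers.
stock_tokens = 1.2e14   # assumed stock of high-quality public text tokens
demand_tokens = 2e13    # assumed tokens consumed by frontier training in 2024
growth = 2.5            # assumed yearly growth in training-data appetite

year = 2024
while demand_tokens < stock_tokens:
    demand_tokens *= growth
    year += 1
print(f"Under these assumptions, demand outstrips the data stock around {year}.")
# Prints 2026: demand grows 2e13 -> 5e13 -> 1.25e14, passing the stock.
```

The exact constants barely matter; because demand grows exponentially against a roughly fixed stock, shifting the assumptions moves the exhaustion date by only a year or two.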
Either way, this demonstrates that the predictions of generative AI stalling have come true.
There are several possible answers to this, such as improving how AIs are trained to reduce the data required, running multiple AIs in tandem, or introducing new computing architectures to make AI infrastructure significantly more efficient. However, all of these ideas are in their infancy and years away from being deployable. What's more, they merely push the problem down the road: they make AI somewhat more efficient with its energy and data, but they don't answer the question of where these corporations will find fresh, high-quality data in the future.
So, why does this matter?
Well, big tech has bet billions of dollars on AI on the premise that it would get exponentially better and become massively lucrative. Sadly, we now know that just won't happen.
Take OpenAI. A few months ago, it was forecast to post a roughly $5 billion annual loss and was even said to face possible bankruptcy. Worse, the AIs it has already released aren't profitable despite hundreds of millions of users, so even if OpenAI didn't spend a dime on developing new models, it would still be losing money. Despite this, it managed to secure several billion dollars in credit and an additional $6.6 billion in fresh capital, valuing the business at a stunning $157 billion and saving it from collapse. However, given its current pace of losses and development spending, that is only enough to stave off insolvency for about another year.
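As a rough sanity check on that runway claim, here is the arithmetic using the figures above. The size of the credit line is my assumption (reports put it around $4 billion), and real cash flows are far messier than a flat burn rate.

```python
# Back-of-the-envelope runway estimate; all figures in $ billions.
fresh_capital = 6.6   # reported new funding round
credit_line = 4.0     # assumed revolving credit facility
annual_loss = 5.0     # reported yearly loss at current spending

runway_years = (fresh_capital + credit_line) / annual_loss
print(f"Runway at a flat burn rate: about {runway_years:.1f} years")
# Prints ~2.1 years; with losses projected to grow, the realistic
# runway shrinks toward the single year mentioned above.
```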
This, paired with the now-confirmed problem of severely diminishing returns, implies that some of the largest industries and biggest investment firms in the world are backing a fundamentally flawed product. The last time our economy did this, it produced one of the biggest financial crises in living memory: the credit crisis of 2008.