
Will modern developments prove to us the truth about what is happening around us?

DeepSeek

By amr · Published 12 months ago · 5 min read

Typically, OpenAI generates headlines for three reasons:

1. A board member debacle has occurred.

2. A new product is about to be released.

3. Or they’re grumbling about how they’re still not earning a profit.

They recently hit the headlines for reason number 3.

Sam Altman tweeted this annoying thing in early January.

For context: OpenAI’s $200/month ChatGPT Pro membership, announced on December 5, 2024, "enables scaled access to the best of OpenAI’s models and tools," according to the company.

For that $200/month price tag, users get unrestricted access to OpenAI’s most advanced models, among the most cutting-edge AI products available today: GPT-4o, Advanced Voice, OpenAI o1, o1-mini, and more.

It does not surprise me in the least that OpenAI is still losing money with this subscription arrangement. For one thing, it reinforces a long-held opinion of mine: OpenAI’s strategy of deploying too many models is not a sensible move.

I lost track of how many shiny new tools OpenAI has created since ChatGPT debuted in 2022. Launching new and improved products appears to be their key approach to profitability.

There is always a new product from the AI maker before the market can even get acclimated to the one introduced previously.

But this wasn’t how things were for the biggest companies of today. These other tech giants had fewer products when they turned profitable:

  • Google turned profitable when it had only two products in the market: Google Search and Google AdWords.
  • Facebook turned profitable with just the Facebook app and ads as its primary offerings.
  • Apple was pulling in profits two years after it was founded, with the Apple I and II as its main products.
  • Microsoft started making a profit immediately after launch by being the sole supplier of the BASIC interpreter for the Altair 8800 kit.

OpenAI, with more than eight product releases (I know I said I lost count, but I had to catch up so I could say this), has yet to turn a profit.

And I get it, the primary reasons for this unprofitability are the insane cost of training and running AI models, plus the heavy investments that go into research and development.

It costs OpenAI up to $700,000 a day to run its models, and the company spends billions annually on research and development. The bottom line: despite making more than $3.7 billion in revenue last year, it ran a net loss of over $5 billion.

Everything about OpenAI is big: users, valuation, revenue, number of products, and losses. Everything, except profit!

I figured that launching too many products in pursuit of profitability is not a wise play, because it keeps driving training and running costs sharply upward.

It is simple math.

Suppose it costs OpenAI a hypothetical $0.10 per query when someone uses one of its products. That cost is bound to rise with every user OpenAI gains, whether across a single product or ten. More products and more users mean more cash burned.

Every startup would give anything to have a million users and would go to any length to have more than one product that people run after. With more than 300 million users for ChatGPT alone and up to 10 products in the market, OpenAI ought to be living the dream. But with the cost realities involved, the dream of every startup is, in this case, OpenAI’s nightmare.

I don’t know why OpenAI has kept treading this path of more products. Apply a little first-principles thinking and it becomes clear that, rather than more products, OpenAI ought to focus on innovations geared toward reducing training and running costs.

Going back to our hypothetical scenario: if the cost per query can be reduced to $0.01, that amounts to a 10x reduction in running cost. And who says they’d stop there?
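To make the hypothetical concrete, here is a back-of-envelope sketch of that arithmetic. The user count echoes the 300-million figure mentioned later in this piece; the queries-per-user rate and both per-query prices are illustrative assumptions, not OpenAI’s real unit costs.

```python
def monthly_inference_cost(users, queries_per_user_per_day, cost_per_query, days=30):
    """Total inference spend for one product over a month, in dollars."""
    return users * queries_per_user_per_day * cost_per_query * days

USERS = 300_000_000        # ~ChatGPT's reported user base
QUERIES_PER_DAY = 5        # assumed average queries per user per day

at_10_cents = monthly_inference_cost(USERS, QUERIES_PER_DAY, 0.10)
at_1_cent = monthly_inference_cost(USERS, QUERIES_PER_DAY, 0.01)

print(f"At $0.10/query: ${at_10_cents:,.0f}/month")
print(f"At $0.01/query: ${at_1_cent:,.0f}/month")
print(f"Reduction factor: {at_10_cents / at_1_cent:.0f}x")
```

At these made-up numbers the monthly bill drops from billions to hundreds of millions, which is the whole argument in miniature: cutting unit cost scales down the entire burn, no matter how many users pile on.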

SpaceX followed the cost reduction model with rocket building and launching.

Am I saying it is something that is easy to do? Definitely not.

I doubted whether this cost-reduction approach could be applied to AI training and inference, especially given how nascent the industry is and the reality that OpenAI risks losing to competitors if it focuses on cost reduction rather than on launching new and better products.

These had been my doubts, until DeepSeek launched.

Lessons from SpaceX and DeepSeek

Space exploration used to be the biggest deal. But everything about it is dang complex.

From the theories that underpin it to the practical aspect of building the rockets that explore everything beyond Earth, space exploration is a thing only a few dare to go into. It goes over the heads of others.

Worse still, everything about space exploration is ridiculously expensive. Sending a rocket to space typically cost NASA close to $2 billion. The implications of this expense are twofold:

  • Fewer rockets are sent to space.
  • Space exploration happens at a slower pace as a result.

Then, Elon Musk decided to do something different.

Through SpaceX, Musk made rocket building and launching significantly less expensive. A SpaceX flight costs between $62 million and $90 million, a price reduction of more than 90%. This has the twofold effect of letting us send more rockets to space and, therefore, speeding up space exploration.
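The "more than 90%" claim checks out with simple division against the article’s own figures:

```python
# Rough check of the SpaceX price-cut claim, using the numbers cited above.
nasa_launch_cost = 2_000_000_000          # ~$2B per NASA launch
spacex_low, spacex_high = 62_000_000, 90_000_000

reduction_best = 1 - spacex_low / nasa_launch_cost    # cheapest SpaceX flight
reduction_worst = 1 - spacex_high / nasa_launch_cost  # priciest SpaceX flight
print(f"Price reduction: {reduction_worst:.1%} to {reduction_best:.1%}")
```

Even at the top of SpaceX’s price range, the cut comfortably clears 90%.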

The company was able to achieve this through a combination of factors:

  • In-house manufacturing (as opposed to outsourcing)
  • Simplification of design
  • And reusability

The result is that, as of April 2024, SpaceX was launching a mission every 2.7 days. That figure’s significance shines when you consider that from the mid-1980s through the 2010s, a mission was launched every 2.8 days not by NASA alone, but worldwide!

Enter DeepSeek

DeepSeek has been freaking the tech world out since it launched its latest R1 model on January 20th.

You don’t need knowledge of technicalities to understand the brilliance and significance of this launch.

Only three points are worth mentioning:

  • Its performance matches or beats OpenAI’s o1 model on certain AI benchmarks.
  • Usage is free, and its API pricing is about one-thirtieth of OpenAI’s.
  • Training one of its models cost $5.6 million, compared to the $100 million to $1 billion commonly cited as the cost of building a frontier model.

Instead of doing what established players in the AI industry had been doing, DeepSeek chose to do things differently, which helped it cut costs while achieving nearly equal results to the other players in the game. It did at least four things differently:

  1. Used smart optimization to make its existing resources work smarter.
  2. Trained only the important parts of the model.
  3. Used a smaller memory footprint, leading to faster results and lower costs.
  4. Leveraged reinforcement learning.

The details are more complex than this summary suggests, but the result is plain for everyone to see.

. . .

Hopefully, OpenAI will see the need to innovate around cost reduction rather than keep flooding the market with numerous products and charging users exorbitant subscription fees. It can be done.

. . .
