Nvidia (NVDA) is Sold Out for 2025...

AI has NOT hit a wall. This is the part of AI that people don’t understand.

There are rumors floating around that AI has "hit a wall."

They remind me of Thomas Watson, the chairman of IBM, who reportedly said…

"I think there is a world market for maybe five computers."

Or Ken Olsen, founder of Digital Equipment Corporation…

"There is no reason anyone would want a computer in their home."

Or Paul Krugman, the Nobel Prize-winning economist who penned the most technically illiterate statement in world history…

“The Internet's impact on the economy has been no greater than the fax machine's.”

However catchy the phrase “hit a wall” sounds to the AI doomers…

It doesn't align with how the technology works and what insiders see.

In fact, Sam Altman summed up our entire (slightly wordier) analysis below in four words: "there is no wall."

To understand why he tweeted this, today we’re going to blind you with some science…

And if you survive, you might just buy Nvidia for the first time…

Or buy a lot more of it.

To begin, it's helpful to know that AI chips are used for two main purposes: training and inference.

Training refers to teaching AI models to understand patterns using massive amounts of data.

Think of this like creating an online brain full of as much information as possible.

On the other hand, inference is when an AI model, already trained, makes predictions or takes actions based on new information.

Think of it like you asking the brain a question… and it answering.

This is the part users experience directly… like when you ask ChatGPT a question.
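
For the code-curious, here's a toy sketch of the two phases in Python… plain numpy, with made-up data, purely illustrative (real models are billions of times bigger, but the split is the same).

```python
import numpy as np

# --- Training: learn a pattern from data (done once, at huge compute cost) ---
X = np.array([[1.0], [2.0], [3.0], [4.0]])  # example inputs
y = np.array([2.0, 4.0, 6.0, 8.0])          # targets (the hidden pattern: y = 2x)

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # fit the "brain's" weights

# --- Inference: the trained model answers a new question ---
new_question = np.array([[5.0]])
answer = new_question @ w                   # cheap per call... run billions of times
print(answer)                               # -> [10.]
```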


[Image: AI artist's interpretation]

Yes, it's possible that progress from training new, larger AI models is currently showing some signs of slowing down.

But it's important to note that training is just one (tiny) part of the entire AI process.

Sure, the Mag 7 stocks and big private AI companies are now spending billions on training…

But they could soon spend a LOT more on inference…

Because we are seeing an explosion of innovation in both new AI Models and AI Agents.

In short: It’s good to train these models, but the goal is to run inference on them so you can leverage two things: chain of reasoning and agents.

New AI models like OpenAI’s o1 no longer simply run one inference request...

They run multiple inference requests to obtain better results.

Modern inference involves "chain of reasoning," where models simulate, reflect, and analyze context multiple times before producing an answer.

(Think of it like the difference between this big AI brain in the sky blurting out the first answer that comes to mind… and thinking through your request multiple times to produce a well-thought-out answer.)
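
Here's a hypothetical sketch of why that multiplies the compute bill. The call_model function below is a stand-in for any single inference request, and the five reasoning steps are our assumption for illustration… not any vendor's actual implementation.

```python
def call_model(prompt: str) -> str:
    """Stand-in for one inference request to a trained model."""
    return f"draft answer to: {prompt[:40]}"

def answer_old_style(question: str) -> str:
    # The old way: one inference request, first answer wins.
    return call_model(question)

def answer_with_reasoning(question: str, steps: int = 5) -> str:
    # The new way: simulate, reflect, and refine across multiple requests.
    draft = call_model(question)
    for _ in range(steps):
        draft = call_model(f"critique and improve: {draft}")
    return draft  # one user question -> (steps + 1) inference requests

print(answer_with_reasoning("Why does Nvidia keep selling out?"))
```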

Plus, we can add an additional layer on top of that…

AI agents string multiple AI models together... into something akin to a team of experts.

This requires even more hardware for inference.
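
And here's a minimal sketch of that "team of experts" idea… every name below is hypothetical; the point is simply that each hand-off in the chain is another inference pass on another model.

```python
# Each function stands in for a separate specialist model (all names hypothetical).
def researcher(task: str) -> str:
    return f"facts about {task}"            # inference pass #1

def analyst(facts: str) -> str:
    return f"analysis of {facts}"           # inference pass #2

def writer(analysis: str) -> str:
    return f"report based on {analysis}"    # inference pass #3

def agent(task: str) -> str:
    # One user request fans out into a pipeline of inference calls,
    # each of which needs GPU time of its own.
    return writer(analyst(researcher(task)))

print(agent("data-center demand"))
```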

Today, inference accounts for 40% of NVDA's revenue...

And that will go up a billion (yes, a billion) times.

As Jensen Huang has said repeatedly, “This is the part of AI that people don’t understand.”

The world needs more Nvidia machinery.

Here’s the clip you can turn your eyes to if you have a short attention span…

Here’s the full interview if you have a long attention span…

Today, every single Mag 7 company is working on some version of AI Agents.

Satya Nadella, the CEO of Microsoft, emphasized at Ignite this week that Copilot agents are central to the company’s future strategy.

The other Magnificent 7 companies are deeply engaged in AI, too…

Amazon has Bedrock Agents.

Google has its Vertex AI Agent Builder.

Thousands of AI startups are building their own AI agents on Meta’s Llama 3 model.

Apple Intelligence is installed on 100 million devices… and will eventually roll out to 1.4 billion humans.

Beyond the Mag 7…

OpenAI is also focusing on mobile and desktop agents that bring these tools into our everyday environment.

Remember, OpenAI’s ChatGPT was one of the fastest apps ever to reach 100 million users…

Including the roughly 11 million paying subscribers, who are outperforming those who don’t have it.

Anthropic already has its desktop agent, Claude’s computer-use feature…

Plus, there are thousands of AI startups -- a mini AI boom -- innovating on agent-driven solutions for productivity, creativity, and more.

All of these solutions require more and more (and more and more) inference.

Again, a BILLION times more.


And that means ONE THING: They need more hardware… more of Nvidia’s chips.

As Thomas Dolby once sang, “I can hear machinery.”

So…

Even if training hits a speed bump, inference is what really drives adoption and value for businesses.

That’s why Nvidia is sold out for 2025…

With an order book of roughly 2.2 million chips ($95B to $114B).
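
(Back-of-the-envelope, taking those figures at face value: $95B ÷ 2.2 million chips ≈ $43,000 per chip at the low end, and $114B ÷ 2.2 million ≈ $52,000 at the high end.)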

In the meantime, no matter what happens with Nvidia’s stock price after the bell today…

Over the coming months, Nvidia will continue to outperform the market.

So, don’t miss our next newsletter update.

Always be prospering,

socrAItes

Sage Research

P.S. Pass this along…

The purpose of this AI revolution is to create systems that produce intelligence continuously through inference, just as factories produce goods.

P.P.S. Also, it’s important to know that a new training algorithm could produce a breakthrough.

So, perhaps the current architecture (the transformer) is “hitting a wall”…

Much like earlier NLP models did.

But remember…

Not only are there thousands of researchers working on new algorithms, but now they are also using AI to iterate faster.

If the history of technology is any guide…

Avoid antiquated notions…

Because the next breakthrough may be just around the corner.