
Nvidia Isn’t Wall Street’s Favorite AI Stock Anymore — Is the King About to Come Back?

A sign for an Nvidia building is shown in Santa Clara, Calif., May 31, 2023 (AP Photo / Jeff Chiu)

CNBC, Fortune, and Forbes contributed to this report.

Nvidia reports earnings on Wednesday, and for once the storyline isn’t “how high can this thing go?”

The chipmaker that became the AI trade is suddenly looking a little… ordinary. The stock is still up nearly 39% this year, but that’s nothing next to its rivals. Advanced Micro Devices has doubled in 2025. Broadcom is up almost 50%. Over the past three months, Nvidia has barely budged — up about 2% — while AMD has surged roughly 37% and Broadcom 12%.

Wall Street is noticing. Citigroup analyst Christopher Danely says investors he talks to now favor AMD and Broadcom over Nvidia, thanks in part to what they see as slower earnings-per-share growth. In the horse race for “hottest AI chip play,” Nvidia has slipped to the middle of the pack.

Yet this week’s earnings report could flip that narrative fast.

Analysts are still far from abandoning Nvidia. Quite the opposite: several have raised their price targets heading into the print. Stifel’s Ruben Roy, for instance, bumped his 12-month target to $250 from $212, suggesting roughly 34% upside from Monday’s close. He’s looking for a “moderate” beat on earnings.

Part of that optimism comes from CEO Jensen Huang’s recent keynote at the GPU Technology Conference in Washington, D.C., where he laid out Nvidia’s vision as the backbone of global AI infrastructure. The company is sitting on more than $500 billion in cumulative orders for its Blackwell and Rubin platforms spanning 2025–26, he said — a staggering pipeline of data center buildouts.

If Nvidia can deliver another blowout quarter and reinforce that long-term story, it could easily recapture its “AI trade” crown and maybe even drag the broader market out of its funk. Lately, AI has been as much a source of anxiety as excitement: the S&P 500 is down more than 2% in November, the Nasdaq Composite more than 4%, as investors fret that they’ve overpaid for anything AI-related — Nvidia included.

Nvidia currently trades at about 29 times forward earnings, versus a little over 21 for the S&P 500. But AMD and Broadcom are even more expensive, at roughly 39 and 36 times forward earnings. If everything is pricey, the one that can actually keep delivering monster profits may win back the spotlight.

For some long-term investors, the current hand-wringing is noise. At Markman Capital, for example, each Nvidia quarterly report is simply another reminder of what patient compounding can do.

They started buying Nvidia back in 2015, long before ChatGPT, when the stock was unpopular and widely considered too expensive for a niche business that mostly sold graphics cards to hardcore gamers. The consensus: limited upside, lots of hype.

Jensen Huang insisted everyone was missing the point. He argued that Nvidia’s graphics processing units — GPUs — were about to become ground zero for artificial intelligence and high-performance computing, a market he said could be worth trillions. Skeptics rolled their eyes. Operators of hyperscale data centers — think Google and Amazon — did not. They started buying GPUs and experimenting.

They quickly realized GPUs weren’t just for pretty video game graphics. Plug them into massive data centers, feed them power and data, and they effectively “manufacture” intelligence. Internally, companies like Google used that compute to attack hard math and materials problems and saw just how much AI could reshape their businesses.

That’s the core Nvidia story a lot of people still underestimate: Nvidia doesn’t just sell chips; it makes the machines that make AI. Its customers are building “AI factories” — giant data centers designed to crank out intelligence at scale. If AI really becomes the foundation of how every business operates, that market is enormous.

The numbers show how far this has gone. Nvidia’s sales to data centers in fiscal 2015 were just $317 million. When Huang reports Q3 results, that figure is expected to be around $43 billion for the quarter, and still climbing. Annualized, that makes Nvidia’s data center operation a roughly $170 billion business, built in less than a decade.

For early buy-and-hold investors, that growth has been life-changing. Adjusted for stock splits, Markman says it started recommending Nvidia at under 50 cents a share. Today, the stock trades in the $180s. That’s roughly a 37,000%-plus gain. Many of its clients, the firm says, have become millionaires multiple times over from relatively small starting stakes — courtesy of decades of compounding, not clever short-term trades.
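As a rough sanity check, the cited return follows from the round numbers in the piece. The prices below are illustrative assumptions, not exact figures: a split-adjusted basis near 50 cents and a recent price in the $180s ($186 is used here).

```python
# Rough sanity check of the long-run return cited above. The entry and
# current prices are illustrative round numbers, not exact figures.

entry_price = 0.50     # assumed split-adjusted cost basis per share
current_price = 186.0  # assumed recent price, consistent with "the $180s"

gain_pct = (current_price / entry_price - 1) * 100
multiple = current_price / entry_price

print(f"Total gain: {gain_pct:,.0f}%")  # ~37,100%
print(f"Multiple:   {multiple:.0f}x")   # ~372x
```

A 372x multiple over roughly a decade is the kind of result that only shows up when a position is held through the entire arc, which is exactly the firm's argument.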

The philosophy is simple: traders and bears might nail a few calls, but they pay taxes on every sale and face hard limits on how much they can make, especially on short positions. Long-term holders, by contrast, capture the full arc of a compounding growth story — assuming the story doesn’t break.

Nvidia’s current arc is about as extreme as it gets.

At its GTC conference in late October, Huang took the stage and made a typically sweeping claim: Nvidia, he said, is “at the epicenter of the largest industrial revolution in human history,” bigger than the advent of steam engines or electricity. He rolled out a grab bag of partnerships: building 100,000 driverless vehicles with Uber; teaming with Palantir to speed deliveries from warehouses to front doors; and laying out blueprints for “AI factories,” mega data centers optimized around Nvidia hardware and software.

Investors loved it. Nvidia’s market cap jumped 5% in a single day — a roughly $250 billion move, more than 60% bigger than Boeing’s total value. Nvidia briefly became the first company ever to touch a $5 trillion valuation. According to MarketBeat, 46 out of 47 analysts rate the stock a “buy” or “strong buy.”

But a small group of skeptics is increasingly uncomfortable with how Nvidia is trying to keep that dominance.

To understand their concern, it helps to remember how Nvidia got here. For most of its 32-year history, the company was a gaming chip maker that nearly ran out of cash more than once. In 1999, it launched a groundbreaking GPU that could handle complex computations much faster than previous chips. In the 2000s, breakthroughs in deep learning collided with Nvidia’s hardware, enabling AI to do things like translation and image recognition far better than before.

By around 2012, Nvidia bet the company on GPUs and related hardware for AI — plus its custom CUDA software, which developers use to build AI applications specifically optimized for Nvidia chips.

When OpenAI released ChatGPT in late 2022, Nvidia was perfectly positioned. Its GPUs were the most powerful and, crucially, among the most energy-efficient for the massive math needed to train models. In fiscal 2023, Nvidia made a modest $4.4 billion in profit. Over the last four quarters, that figure has exploded to roughly $86.6 billion, making Nvidia the biggest profit machine in tech after Alphabet, Apple, and Microsoft, and ahead of Amazon and Meta.

So what’s the problem?

For one, Nvidia is heavily dependent on a handful of giants. In the second quarter of this year, about 52% of sales came from just three undisclosed customers, widely believed to be Microsoft, Amazon, and Alphabet. Meta and Elon Musk’s ventures are also big buyers. That concentration makes Nvidia vulnerable.

International expansion isn’t straightforward either. Chinese demand has already been hampered by export controls and trade tensions. China previously accounted for more than an eighth of Nvidia’s sales via clients like Baidu and Alibaba. That share is shrinking and could fall further, boosting Nvidia’s reliance on those same big US players.

According to people familiar with the company’s thinking, Huang’s nightmare scenario is simple: Nvidia ends up like every other commoditized hardware maker. In hardware, technological leads are temporary and margins tend to fall over time. Right now, Nvidia is a total outlier, with gross margins near 80% in fiscal 2025, versus just under 50% for AMD and around 30% for Intel.

The big hyperscalers know this — and they want more leverage.

Until now, Nvidia’s dominance has given it the upper hand. Its GPUs are so far ahead that it has effectively enjoyed near-monopoly power, limiting how hard Amazon, Microsoft, and others can push on price. That dynamic is changing.

The major cloud players don’t just want to buy Nvidia chips. Long term, they want to compete with Nvidia by designing their own. Microsoft is developing its Maia chips. Amazon has Trainium and Inferentia. Google is leaning heavily on its own TPU silicon.

All that new competition puts pressure on Nvidia’s pricing power. Amazon CEO Andy Jassy has already complained that today’s AI is too expensive and predicts the cost will come down. Microsoft’s leaders have also made clear that owning more of their own silicon is a strategic priority. As one analyst put it, Nvidia wants to diversify away from the Big Four — and the Big Four want to diversify away from Nvidia.

At the same time, chip rivals and startups are circling. AMD is gaining ground in AI inference, where Nvidia has been strong. Qualcomm and a wave of newer AI chip makers like Groq are targeting the data center market, with some early traction, especially in places like the Middle East. Even OpenAI, arguably Nvidia’s closest partner, is working on its own chips to reduce its reliance on Nvidia — and Microsoft might end up preferring OpenAI’s silicon to its own Maia product.

In short: competition is coming from every direction, and Nvidia’s cozy dominance won’t last forever.

To defend its position, Nvidia isn’t just rolling out better chips. It’s also quietly helping build an alternative AI infrastructure universe — one where it isn’t hostage to the big cloud providers.

The centerpiece is a set of deep, complex relationships with OpenAI and a “neocloud” provider called CoreWeave. Together, they form the heart of what some analysts see as Nvidia’s attempt to “buy demand.”

The strategy relies partly on vendor financing — a perfectly legal, long-standing practice where suppliers help fund their customers’ purchases. Think carmakers offering loans so buyers can afford more vehicles. For Nvidia, the goal is to seed and nurture a parallel ecosystem of data centers that run almost exclusively on its hardware.

With OpenAI, the plan is particularly ambitious. Under a recent agreement, Nvidia is set to invest up to $100 billion in OpenAI equity. The goal: outfit data centers totaling about 10 gigawatts of capacity at an estimated cost of roughly $500 billion. Each tier of the build-out envisions Nvidia putting in $10 billion of financing, OpenAI $40 billion, and around $30 billion going specifically toward Nvidia chips.
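The per-tier figures reported above can be checked against the headline numbers in a few lines. Treating each tier as one gigawatt of the roughly 10-gigawatt plan is an assumption for illustration; the dollar amounts are the ones stated in the agreement as described.

```python
# Back-of-envelope check that the per-tier figures add up to the
# headline numbers. One tier per gigawatt is an assumed mapping.

tiers = 10              # ~10 GW of planned capacity
nvidia_per_tier = 10e9  # Nvidia financing per tier
openai_per_tier = 40e9  # OpenAI's share per tier
chips_per_tier = 30e9   # portion earmarked for Nvidia chips

total_cost = tiers * (nvidia_per_tier + openai_per_tier)
nvidia_total = tiers * nvidia_per_tier
implied_chip_spend = tiers * chips_per_tier

print(f"Total build-out:      ${total_cost / 1e9:,.0f}B")         # $500B
print(f"Nvidia equity, total: ${nvidia_total / 1e9:,.0f}B")       # $100B
print(f"Implied chip orders:  ${implied_chip_spend / 1e9:,.0f}B") # $300B
```

Under those assumptions, roughly $300 billion of the $500 billion build-out would cycle back to Nvidia as chip revenue, which is why analysts describe the arrangement as an attempt to "buy demand."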

The catch? OpenAI still has to find that $40 billion per tier. The company is burning cash fast — losing roughly $13.5 billion on $4.3 billion of revenue in the first half of this year — and is projected to rack up huge operating losses for years before turning profitable. To pull off its infrastructure ambitions, OpenAI will likely need to borrow hundreds of billions of dollars.

On top of that, OpenAI has already committed to roughly $1.4 trillion of AI infrastructure projects, including $300 billion with Oracle, which also implies massive Nvidia GPU purchases. Nvidia, so far, is only an equity investor; it hasn’t pledged its balance sheet to guarantee OpenAI’s debt. And OpenAI itself is trying not to be fully dependent on Nvidia, talking to other chip suppliers and designing its own hardware.

Then there’s CoreWeave. Once a crypto miner, it’s now the largest of the so-called neoclouds racing to build AI-focused data centers. It is, however, highly leveraged. Nvidia has stepped in as both investor and enabler: it put $250 million into CoreWeave’s IPO, owns more than 6%, and has helped the company stack up a $56 billion backlog for AI infrastructure contracts. CoreWeave runs 33 data centers, all filled with Nvidia GPUs.

Nvidia has even guaranteed to buy $6.3 billion of CoreWeave’s computing capacity if the company can’t sell it — a kind of backstop that makes it easier for CoreWeave to raise debt. The idea is that CoreWeave can provide capacity to fast-growing AI startups like Mistral and Cohere, which might one day become major players but couldn’t otherwise secure financing on their own.

This is Nvidia’s “curated demand” machine: fund the customers that will, in turn, buy your chips in bulk.

All of this works beautifully — as long as AI demand lives up to the hype.

But critics see a big risk: these models involve enormous borrowing by companies that have not yet proved they can make sustainable profits from AI. CoreWeave and OpenAI both face large cash flow deficits and are taking on substantial debt to build infrastructure. The data center shells are usually financed by REITs and other property players, who lend against long-term leases. The gear inside, though — including those high-priced GPUs — is often financed with loans tied to the expected profits of AI workloads that don’t exist yet.

If those workloads and profits don’t materialize, the chain can break. Data centers could end up partly empty. Lenders could seize GPUs that were used as collateral and dump them back onto the market. That would be bad news for everyone, including Nvidia, whose chips would suddenly be oversupplied and whose carefully curated ecosystem would be under stress.

And it’s not just Nvidia’s world that’s levering up. The big four hyperscalers — Microsoft, Amazon, Google, and Meta — are collectively planning to spend staggering sums on AI infrastructure. By some estimates, they’ll put around $330 billion into AI capex in 2025 alone, and even more in 2026, pushing total AI infrastructure spending by all hyperscalers toward $490 billion next year. Take a conservative view and you get roughly $700 billion in AI infrastructure over two years.

To earn even a respectable 15% return on that level of investment, those companies would need to generate an extra $105 billion in AI-related profits every year. That’s nearly one-third of the $350 billion in total net profits they produced over the past four quarters.
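That hurdle-rate math can be reproduced directly. Both dollar inputs are the estimates quoted above, not independently verified figures.

```python
# Reproduces the hurdle-rate arithmetic: a 15% annual return on roughly
# $700B of AI infrastructure spend, measured against the ~$350B in
# trailing net profits of the big four hyperscalers.

capex = 700e9             # two-year AI infrastructure estimate
hurdle_rate = 0.15        # a "respectable" annual return target
trailing_profits = 350e9  # combined net income, past four quarters

required_profit = capex * hurdle_rate
share_of_profits = required_profit / trailing_profits

print(f"Extra profit needed: ${required_profit / 1e9:.0f}B per year")  # $105B
print(f"Share of trailing profits: {share_of_profits:.0%}")            # 30%
```

In other words, AI would need to expand the big four's combined bottom line by nearly a third just to clear an ordinary return threshold on two years of spending.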

Right now, there are strong hints of productivity gains and research breakthroughs. What we don’t have yet is a clear list of “killer apps” that obviously justify that much spending. As one veteran AI infrastructure investor put it:

“We know the tech is impressive. We don’t yet know where all the real money is going to come from.”

If the return on all this capital disappoints, the question becomes: who eats the losses — the hyperscalers, the neoclouds, the lenders, or the suppliers like Nvidia?

For the moment, the AI hype train is still moving, and Nvidia’s cash machine is still humming. The company keeps pushing out new products on shorter cycles, staying ahead of rivals and making it hard for anyone to catch up. Many customers remain firmly in the “true believer” camp.

But Wall Street is no longer treating Nvidia as an untouchable miracle stock. AMD and Broadcom have stolen the momentum. Questions about debt, demand, and long-term returns are getting louder.

Wednesday’s earnings report won’t settle all those debates. But it will give investors a fresh look at two critical things:

  1. Can Nvidia keep growing into its sky-high valuation while rivals and customers push back?
  2. Is the AI buildout turning into a sustainable profit engine, or just an incredibly expensive experiment?

If Nvidia can keep answering “yes” to the first — and convince investors the second will eventually pay off — the former king of the AI trade might yet reclaim its crown. If not, this could be the chapter where the compounding legend meets the limits of its own success.

Wyoming Star Staff
